
Demis Hassabis - Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat
Here is my episode with Demis Hassabis, CEO of Google DeepMind
We discuss:
* Why scaling is an artform
* Adding search, planning, & AlphaZero type training atop LLMs
* Making sure rogue nations can't steal weights
* The right way to align superhuman AIs and do an intelligence explosion
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(0:00:00) - Nature of intelligence
(0:05:56) - RL atop LLMs
(0:16:31) - Scaling and alignment
(0:24:13) - Timelines and intelligence explosion
(0:28:42) - Gemini training
(0:35:30) - Governance of superhuman AIs
(0:40:42) - Safety, open source, and security of weights
(0:47:00) - Multimodal and further progress
(0:54:18) - Inside Google DeepMind
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Listen and Follow Along
Full Transcript
I wouldn't be surprised if we had AGI-like systems within the next decade.
It was pretty surprising to almost everyone, including the people who first worked on the scaling hypothesis, how far it's gone. In a way, I look at the large models today and I think they're almost unreasonably effective for what they are.
It's an empirical question whether that will hit an asymptote or a brick wall. I think no one knows.
When you think about superhuman intelligence, is it still controlled by a private company? As Gemini is becoming more multimodal and we start ingesting audiovisual data as well as text data, I do think our systems are going to start to understand the physics of the real world better. The world's about to become very exciting, I think, in the next few years as we start getting used to the idea of what true multimodality means.
Okay, today it is a true honor to speak with Demis Hassabis, who is the CEO of DeepMind. Demis, welcome to the podcast.
Thanks for having me. First question, given your neuroscience background, how do you think about intelligence? Specifically, do you think it's like one higher level general reasoning circuit, or do you think it's thousands of independent subskills and heuristics? Well, it's interesting because intelligence is so broad and, you know, what we use it for is so sort of generally applicable.
I think that suggests that, you know, there must be some sort of high level common things in, you know, common kind of algorithmic themes, I think, around how the brain processes the world around us. So, of course, then there are specialized parts of the brain that do specific things.
But I think there are probably some underlying principles that underpin all of that. Yeah.
How do you make sense of the fact that in these LLMs, though, when you give them a lot of data in any specific domain, they tend to get asymmetrically better in that domain? Wouldn't we expect a sort of general improvement across all the different areas? Well, first of all, I think you do actually sometimes get surprising improvement in other domains when you improve in a specific domain.
So, for example, when these large models sort of improve at coding, that can actually improve their general reasoning. So there is some evidence of some transfer, although I think we would like a lot more evidence of that.
But also, you know, that's how the human brain learns too, is if we experience and practice a lot of things like chess or, you know, writing, creative writing or whatever that is, we also tend to specialize and get better at that specific thing, even though we're using sort of general learning techniques and general learning systems in order to, you know, to get good at that domain. Yeah.
Well, what's been the most surprising example of this kind of transfer for you? Like if you see language and code or images and text, what's... Yeah, I think probably, I mean, I'm hoping we're going to see a lot more of this trying to transfer, but I think things like better at coding and math, then generally improving your reasoning.
That is how it works with us as human learners. But I think it's interesting seeing that in these artificial systems.
And can you see the sort of mechanistic way in which, let's say in the language and code example, there's, like, a place in the neural network that's getting better with both the language and the code? Or is that too deep in the weeds? Yeah, well, I don't think our analysis techniques are quite sophisticated enough to be able to hone in on that. I think that's actually one of the areas where a lot more research needs to be done, on kind of mechanistic analysis of the representations that these systems build up.
And, you know, I sometimes like to call it virtual brain analytics in a way. It's a bit like doing fMRI or single cell recording from a real brain.
What are the analogous sort of analysis techniques for these artificial minds? And there's a lot of great work going on on this sort of stuff. People like Chris Olah, I really like his work.
And a lot of computational neuroscience techniques, I think, can be brought to bear on analyzing these current systems we're building. In fact, I try to encourage a lot of my computational neuroscience friends to start thinking in that direction and applying their know-how to the large models.
Yeah. What do other AI researchers not understand about human intelligence that you have some sort of insight on, given your neuroscience background? I think neuroscience has added a lot.
If you look at the last sort of 10, 20 years that we've been at it at least, and I've been thinking about this for 30 plus years, I think in the earlier days of this sort of new wave of AI, I think neuroscience was providing a lot of interesting directional clues. So things like reinforcement learning, combining that with deep learning, you know, some of our pioneering work we did there, things like experience replay, even the notion of attention, which has become super important.
A lot of those original sort of inspirations come from some understanding about how the brain works, not the exact specifics, of course, you know, one's an engineered system, the other one's a natural system. So it's not so much about a one to one mapping of a specific algorithm, it's more kind of inspirational direction, maybe some ideas for architecture or algorithmic ideas or representational ideas.
And because, you know, the brain is an existence proof that general intelligence is possible at all. I think, with engineering endeavors, once you know something's possible, it's easier to push hard in that direction, because you know it's a question of effort then, and sort of a question of when, not if.
And that allows you to, I think, make progress a lot more quickly. So I think neuroscience has had a lot of, has inspired a lot of the thinking, at least in a soft way, behind where we are today.
But as for going forwards, I think that there's still a lot of interesting things to be resolved around planning and how does the brain construct the right world models.
I studied, for example, how the brain does imagination, or you can think of it as mental simulation. So how do we create, you know, very rich visual spatial simulations of the world in order for us to plan better? Yeah, actually, I'm curious how you think that will sort of interface with LLMs.
So obviously, DeepMind is at the frontier and has been for many years, you know, with systems like AlphaZero and so forth, of having these agents who can, like, think through different steps to get to an end outcome. Is the path just for LLMs to have this sort of tree search kind of thing on top of them? How do you think about this? I think that's a super promising direction, in my opinion.
So, you know, we've got to carry on improving the large models and we've got to carry on basically making them more and more accurate predictors of the world. So in effect, making them more and more reliable world models, that's clearly a necessary, but I would say probably not sufficient component of an AGI system.
And then on top of that, you know, we're working on things like AlphaZero-like planning mechanisms that make use of that model in order to make concrete plans to achieve certain goals in the world, and perhaps sort of chain thought together, or lines of reasoning together, and maybe use search to kind of explore massive spaces of possibility. I think that's kind of missing from our current large models.
How do you get past the sort of immense amount of compute that these approaches tend to require? So even the AlphaGo system was a pretty expensive system, and here you'd have to sort of run an LLM on each node of the tree. How do you anticipate that'll get made more efficient? Well, I mean, one thing is Moore's law tends to help, as every year, of course, more computation comes in.
But we focus a lot on efficient, you know, sample efficient methods and reusing existing data, things like experience replay. And also just looking at more efficient ways.
I mean, the better your world model is, the more efficient your search can be. So one example I always give with AlphaZero, our system to play Go and chess and any game, is that it's stronger than world champion level, human world champion level at all these games.
And it uses a lot less search than a brute force method like Deep Blue, say, to play chess. Deep Blue, one of these traditional stockfish or Deep Blue systems would maybe look at millions of possible moves for every decision it's going to make.
AlphaZero and AlphaGo looked at around tens of thousands of possible positions in order to make a decision about what to move next. But a human grandmaster, a human world champion, probably only looks at a few hundred moves, even the top ones, in order to make their very good decision about what to play next.
So that suggests that obviously the brute force systems don't have any real model other than the heuristics about the game. AlphaZero has quite a decent model, but the top human players have a much richer, much more accurate model of Go or chess.
So that allows them to make world-class decisions on a very small amount of search.
So I think there's a sort of trade-off there.
If you improve the models, then I think your search can be more efficient, and therefore you can get further with your search.
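To make that trade-off concrete, here is a minimal, illustrative sketch of model-guided planning (the helper functions `propose_moves`, `evaluate`, and `apply_move` are assumed stand-ins for a learned policy/value model and environment, not DeepMind's actual planner): the better the model's proposals and evaluations, the smaller the `top_k` and `budget` can be while still finding strong plans.

```python
# Illustrative sketch only: best-first search guided by a learned model
# instead of brute force. `propose_moves`/`evaluate`/`apply_move` are
# hypothetical callables standing in for a learned policy/value model.
import heapq

def plan(root_state, propose_moves, evaluate, apply_move, top_k=3, budget=10_000):
    """Expand only the model's top suggestions, up to a node budget."""
    frontier = [(-evaluate(root_state), 0, root_state, [])]  # max-heap via negation
    tie = 1                                                  # unique tiebreaker
    best_plan, best_value = [], float("-inf")
    expanded = 0
    while frontier and expanded < budget:
        neg_value, _, state, path = heapq.heappop(frontier)
        expanded += 1
        if -neg_value > best_value:
            best_value, best_plan = -neg_value, path
        # A stronger model means top_k can be small, like a grandmaster
        # considering only a handful of candidate moves per position.
        for move in propose_moves(state)[:top_k]:
            child = apply_move(state, move)
            heapq.heappush(frontier, (-evaluate(child), tie, child, path + [move]))
            tie += 1
    return best_plan, best_value, expanded
```

The point of the sketch is the knob, not the algorithm: with a weak evaluator you need a huge budget (Deep Blue-style), with a strong one you expand orders of magnitude fewer nodes.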
Yeah. I have two questions based on that.
The first being, with AlphaGo, you had a very concrete win condition of, you know, at the end of the day, do I win this game of Go or not? And you can reinforce on that.
When you're just thinking of, like, an LLM putting out thought, do you think there will be this kind of ability to discriminate, in the end, whether that was a good thing to reward or not? Well, of course, that's why we pioneered, and DeepMind's sort of famous for, using games as a proving ground, partly because obviously it's efficient to research in that domain.
But the other reason is obviously it's extremely easy to specify a reward function, winning the game or improving the score or something like that, sort of built into most games. So that is one of the challenges of real-world systems: how does one define the right objective function, the right reward function, and the right goals, and specify them in a way that's general but specific enough to actually point the system in the right direction?
And for real world problems, that can be a lot harder. But actually, if you think about it, and even scientific problems, there are usually ways that you can specify the goal that you're after.
And then when you think about human intelligence, you're just saying, well, you know, the humans thinking about these thoughts are just super sample efficient. Einstein coming up with relativity, right? There's just like thousands of possible permutations of the equations.
Do you think it's also this sort of sense of different heuristics, like, I'm going to try this approach instead of this? Or is it a totally different way of coming up with that solution than what AlphaGo does to plan the next move? Yeah, well, look, I think it's different, because our brains are not built for doing Monte Carlo tree search, right? It's just not the way our organic brains work. So I think that in order to compensate for that, people like Einstein, their brains use their intuition.
And, you know, we maybe come to what intuition is, but they use their sort of knowledge and their experience to build extremely, you know, in Einstein's case, extremely accurate models of physics, including these sort of mental simulations. I think if you read about Einstein and how he came up with things, he used to visualize and sort of really kind of feel what these physical systems should be like, not just the mathematics of it, but have a really intuitive feel for what they would be like in reality.
And that allowed him to think these sort of very outlandish thoughts at the time. So I think it's the sophistication of the world models that we're building, which then, you know, if you imagine your world model can get you to a certain node in a
tree that you're searching, and then you just do a little bit of search around that node, that leaf node, and that gets you to these original places. But obviously, if your model and your judgment on that model is very, very good, then you can pick which leaf nodes you should sort of expand with search much more accurately.
So therefore, overall, you do a lot less search. I mean, there's no way that, you know, any human could do a kind of brute force search over any kind of significant space.
Yeah, yeah, yeah. A big sort of open question right now is whether RL will allow these models to do the self-play synthetic data to get over the data bottleneck.
It sounds like you're optimistic about this. Yeah, I'm very optimistic about that.
I mean, I think, well, first of all, there's still a lot more data, I think, that can be used, especially if one includes multimodal and video and these kinds of things. And obviously, you know, society's adding more data all the time
to the internet and things like that. But I think that there's a lot of scope for creating synthetic data.
We're looking at that in different ways, partly through simulation, using very realistic games environments, for example, to generate realistic data, but also self-play. So that's where systems interact with each other or converse with each other.
And that, in a sense, you know, worked very well for us with AlphaGo and AlphaZero, where we got the systems to play against each other and actually learn from each other's mistakes and build up a knowledge base that way. And I think there are some good analogies for that.
It's a little bit more complicated for building a general kind of world data set. How do you get to the point where these models, the sort of synthetic data they're outputting, the self-play they're doing, is not just more of what they've already got in their data set, but is something they haven't seen before? You know what I mean? To actually improve the abilities.
Yeah. So there, I think there's a whole science needed, and I think we're still in the nascent stage of this, of data curation and data analysis.
So actually analyzing the holes that you have in your data distribution. And this is important for things like fairness and bias and other stuff you want to remove from the system: to try and really make sure that your data set is representative of the distribution you're trying to learn.
And, you know, there are many tricks there one can use, like overweighting or replaying certain parts of the data. Or you could imagine if you identify some gap in your data set, that's where you put your synthetic generation capabilities to work on.
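As a rough illustration of that kind of curation loop (the helper names and domain tags below are hypothetical, not an actual Google pipeline), one might count examples per domain, over-weight anything under-represented relative to a target mix, and queue the remaining gaps for synthetic generation:

```python
# Illustrative sketch only: find "holes" in the data distribution, over-weight
# (replay) what's under-represented, and flag gaps for synthetic generation.
from collections import Counter

def curate(examples, target_mix, min_share=0.05):
    """examples: list of (domain_tag, sample); target_mix: dict of domain -> desired share."""
    counts = Counter(tag for tag, _ in examples)
    total = sum(counts.values())
    weights, synth_requests = {}, []
    for domain, desired in target_mix.items():
        actual = counts.get(domain, 0) / total if total else 0.0
        # Sampling weight > 1 means this domain gets replayed more often.
        weights[domain] = desired / max(actual, 1e-9)
        if actual < min_share:
            # A hypothetical synthetic-data generator would be pointed here.
            synth_requests.append(domain)
    return weights, synth_requests
```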
Yeah. So, you know, nowadays people are paying attention to the RL stuff that DeepMind did many years before.
What are the sort of either early research directions, or things that were done way back in the past, that people just haven't been paying attention to, that you think will be a big deal? Like, there was a time when people weren't paying attention to scaling. What's the thing now that's totally underrated? Well, actually, I think the history of the sort of last couple of decades has been things coming in and out of fashion, right? And I do feel like a while ago, maybe five plus years ago, when we were pioneering with AlphaGo, and before that DQN, which was the first system that worked on Atari, our first big system really, more than 10 years ago now. That scaled up Q-learning and reinforcement learning techniques, combined them with deep learning to create deep reinforcement learning, and then used that to master some pretty complex tasks like playing Atari games just from the pixels.
And I do actually think a lot of those ideas need to come back in again. And as we talked about earlier, combine it with the new advances in large models and large multimodal models, which is obviously very exciting as well.
So I do think there's a lot of potential for combining some of those older ideas together with the new ones. Is there any potential for the AGI to eventually come from just a pure RL approach? The way we're talking about it, it sounds like the LLM will form the prior and then this sort of search will go on top of that.
Or is there a possibility of it coming just, like, completely out of the dark? I think, certainly, theoretically, there's no reason why you couldn't go full AlphaZero-like on it. And there are some people here at Google DeepMind and in the RL community who work on that, right? Fully assuming no priors, no data, and just building all knowledge from scratch.
And I think that's valuable because, of course, those ideas and those algorithms should also work when you have some knowledge too. But having said that, I think by far, probably my betting would be the quickest way to get to AGI and the most likely plausible way is to use all the knowledge that's existing in the world right now on things like the web and that we've collected.
And we have these scalable algorithms like transformers that are capable of ingesting all of that information. And I don't see why you wouldn't start with a model as a kind of prior or to build on and to make predictions that helps bootstrap your learning.
I just think it doesn't make sense not to make use of that. So my betting would be that, you know, the final AGI system will have these large multimodal models as part of the overall solution, but they probably won't be enough on their own.
You will need this additional planning search on top. Okay.
This sounds like the answer to the question I'm about to ask, which is, as somebody who's been in this field for a long time and seen different trends come and go, what do you think that the strong version of the scaling hypothesis gets right and what does it get wrong? It's just the idea that you just throw enough compute at a wide enough distribution of data and you get intelligence. Yeah, look, my view is this is kind of an empirical question right now.
So I think it was pretty surprising to almost everyone, including the people, you know, who first worked on the scaling hypothesis, how far it's gone. In a way, I mean, I sort of look at the large models today, and I think they're almost unreasonably effective for what they are.
You know, I think it's pretty surprising some of the properties that emerge, things like, you know, it's clearly, in my opinion, got some form of concepts and abstractions and some things like that. And I think if we were talking five plus years ago, I would have said to you, maybe we need an additional algorithmic breakthrough in order to do that, like, you know, maybe more like the brain works.
And I think that's still true if we want explicit abstract concepts, neat concepts. But it seems that these systems can implicitly learn that.
Another really interesting, I think, unexpected thing was that these systems have some sort of grounding, even though they don't experience the world multimodally, or at least didn't until more recently, when we got the multimodal models. And it's surprising how much information, and what kinds of models, can be built up just from language.
And I think that I have some hypotheses about why that is. I think we get some grounding through the RLHF feedback systems because obviously the human raters are by definition grounded people.
We're grounded, right, in reality. So our feedback's also grounded.
So perhaps there's some grounding coming in through there. And also maybe language contains more grounding, you know, if you're able to ingest all of it, than we perhaps thought, or linguists perhaps thought before.
So there are actually some very interesting philosophical questions that I think people haven't even really scratched the surface of yet, looking at the advances that have been made.
You know, it's quite interesting to think about where it's going to go next.
But in terms of your question of large models, I think we've got to push scaling as hard as we can. And that's what we're doing here.
And it's an empirical question whether that will hit an asymptote or a brick wall. And there are different people that argue about that.
But actually, I think we should just test it. I think no one knows.
But in the meantime, we should also double down on innovation and invention. And this is something that Google Research and DeepMind and Google Brain have, we've pioneered many, many things over the last decade.
That's something that's our bread and butter. And you can think of half our effort as to do with scaling and half our efforts to do with inventing the next architectures, the next algorithms that will be needed, knowing that you've got this scaled, larger and larger model coming along the lines.
So my betting right now, but it's a loose betting, is that you would need both. But I think, you know, you've got to push both of them as hard as possible.
And we're in a lucky position that we can do that. Yeah, I want to ask more about the grounding.
So you can imagine two things that might change, which would make the grounding more difficult. One is that if these models get smarter, they're going to be able to operate in domains where we just can't generate enough human labels just because we're not smart enough, right? So if it does like a million line pull request, you know, how do we tell it like this is this is within the constraints of our morality and the end goal we wanted and this isn't.
And the other is, it sounds like you're saying, where more of the compute comes from. So far, we've been doing next-token prediction.
And in some sense, that's a guardrail, because you have to talk as a human would talk and think as a human would think. Now, if additional compute is going to come in the form of reinforcement learning, where we just get to the end objective and we can't really trace how you got there.
When you combine those two, how worried are you that the sort of grounding goes away? Well, look, I think if the grounding, you know, if it's not properly grounded, the system won't be able to achieve those goals properly, right? I think so. I think in a sense, you sort of have to have the grounding, or at least some of it in order for a system to actually achieve goals in the real world.
I do actually think that as these systems and things like Gemini are becoming more multimodal and we start ingesting things like video and audio visual data as well as text data, and then the system starts correlating those things together,
I think that is a form of proper grounding, actually. So I do think our systems are going to start to understand the physics of the real world better.
And then one could imagine the active version of that being in a very realistic simulation or game environment, where you're starting to learn about what your actions do in the world and how that affects the world itself, the world state itself, but also what next learning episode you're getting. So, you know, these RL agents we've always been working on and pioneered, like AlphaZero and AlphaGo, they're actually active learners.
What they decide to do next affects what the next learning piece of data or experience they're going to get. So there's this very interesting sort of feedback loop.
And of course, if we ever want to be good at things like robotics, we're going to have to understand how to act in the real world. Yeah.
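A toy sketch of that active-learning loop might look like the following (the `env` and `agent` interfaces here are assumptions for illustration, not a specific DeepMind API): the agent's own action choices determine its next experience, and a replay buffer reuses old experience for sample efficiency.

```python
# Illustrative sketch only: an "active learner" loop with experience replay.
# The environment and agent interfaces are hypothetical.
import random
from collections import deque

def run(env, agent, steps=10_000, buffer_size=100_000, batch_size=32):
    replay = deque(maxlen=buffer_size)          # experience replay buffer
    obs = env.reset()
    for _ in range(steps):
        action = agent.act(obs)                 # this choice shapes the next data point
        next_obs, reward, done = env.step(action)
        replay.append((obs, action, reward, next_obs, done))
        if len(replay) >= batch_size:
            # Reuse past experience rather than learning only from the latest step.
            agent.learn(random.sample(list(replay), batch_size))
        obs = env.reset() if done else next_obs
    return agent
```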
So there's a grounding in terms of, will the capabilities be able to proceed, will they be enough in touch with reality to be able to do the things we want? And there's another sense of grounding: we've gotten lucky in that, since they're trained on human thought, they maybe think like a human. To what extent does that stay true when more of the compute for training comes from, did you get the right outcome, and is not guardrailed by, are you proceeding on the next token as a human would? Maybe the broader question I'll pose to you, and this is what I asked Shane as well, is what would it take to align a system that's smarter than a human, maybe thinks in alien concepts, and you can't really monitor the million-line pull request because you can't really understand the whole thing, you can't give labels? Yeah, look, this is something Shane and I and many others here have had at the forefront of our minds since before we started DeepMind, because we planned for success, however crazy that seemed back in 2010, when no one was thinking about AI, let alone AGI.
But we already knew that if we could make progress
with these systems and these ideas,
the technology that would be created
would be unbelievably transformative.
So we already were thinking 20 years ago about,
well, what would the consequences of that be,
both positive and negative?
Of course, the positive direction is amazing science,
things like AlphaFold, incredible breakthroughs in health and science and maths and discovery, scientific discovery. But then also we've got to make sure these systems are sort of understandable and controllable.
And I think there's sort of several, you know, this would be a whole sort of discussion in itself. But there are many, many ideas that people have from much more stringent eval systems.
I think we don't have good enough evaluations and benchmarks for things like, can the system deceive you? Can it exfiltrate its own code? Sort of undesirable behaviors. And then there are ideas of actually using AI, maybe narrow AIs, so not general learning ones, but systems that are specialized for a domain, to help us as the human scientists analyze and summarize
what the more general system is doing, right? So kind of narrow AI tools. I think that there's a lot of promise in creating hardened sandboxes or simulations, ones that are hardened with cybersecurity arrangements around the simulation, both to keep the AI in, but also to keep hackers out.
And then you could experiment a lot more freely within that sandbox domain. And I think a lot of these ideas are, and there's many, many others, including the analysis stuff we talked about earlier, where can we analyze and understand what the concepts are that this system is building, what the representations are, so maybe they're not so alien to us and we can actually keep track of the kind of knowledge that it's building.
Yeah, yeah. Stepping back up a bit, I'm curious what your timelines are.
So Shane said his, I think, modal outcome is 2028. I think that was maybe his median.
Yeah. What is yours? Yeah, well, I haven't ascribed kind of specific numbers to it, because I think there are so many unknowns and uncertainties.
And human ingenuity and endeavor comes up with surprises all the time. So that could meaningfully move the timelines.
But I will say that when we started DeepMind back in 2010, we thought of it as a 20-year project. And actually, I think we're on track, which is kind of amazing for 20 year projects, because usually they're always 20 years away.
Right. So that's the joke about, you know, whatever it is, quantum AI, you know, take your pick.
But, you know, I think we're on track. So I wouldn't be surprised if we had AGI-like systems within the next decade.
And do you buy the model that once you have an AGI, you can have a system that basically speeds up further AI research? Maybe not in an overnight sense, but over the course of months and years, you have much faster progress than you would have otherwise? I think that's potentially possible. I think it partly depends what we, as society, decide to use the first nascent AGI systems or even proto-AGI systems for.
So even the current LLMs seem to be pretty good at coding. And we have systems like AlphaCode.
We also got theorem proving systems. So one could imagine combining these ideas together and making them a lot better.
And then I could imagine these systems being quite good at designing and helping us build future versions of themselves. But we also have to think about the safety implications of that, of course.
Yeah, I'm curious what you think about that. So, I mean, I'm not saying this is happening this year or anything, but eventually you'll be developing a model where during the process of development, you think, you know, there's some chance that once this is fully developed, it'll be capable of like an intelligence explosion like dynamic.
What would have to be true of that model at that point where you're like, I've seen these specific evals, I understand its internal thinking enough, and its future thinking, that I'm comfortable continuing development of the system?
Well, look, we need a lot more understanding of the systems than we have today before I would be confident of even explaining to you what we would need to tick off there. So I think actually what we've got to do in the next few years, in the time we have before those systems start arriving, is come up with the right evaluations and metrics, and maybe ideally formal proofs, although that's going to be hard for these types of systems, but at least empirical bounds around what these systems can do.
And that's why I think about things like deception as being quite root node traits that you don't want. Because if you're confident that your system is sort of exposing what it actually thinks, then you could potentially, that opens up possibilities of using the system itself to explain aspects of itself to you.
The way I think about that, actually, is like if I was to play a game of chess against Garry Kasparov, which I have in the past, or Magnus Carlsen, the amazing chess players, greats of all time, I wouldn't be able to come up with the move that they could.
But they could explain to me why they came up with that move, and I could understand it post hoc.
Right, and that's the sort of thing one could imagine. One of the capabilities we could make use of in these systems is for them to explain it to us, and even maybe the proofs behind why they're thinking something.
Certainly in a mathematical, any mathematical problem. Got it.
Do you have a sense of what the converse answer would be? So what would have to be true where tomorrow morning you're like, oh, man, I didn't anticipate this. You see some specific observation tomorrow morning where like we got to stop Gemini 2 training.
Like, what specifically? Yeah, I could imagine that. And this is where, you know, things like the sandbox simulations come in. I would hope we're experimenting in a safe, secure environment, and then something very unexpected happens in it, a new unexpected capability, or something that we didn't want, where we explicitly told the system we didn't want it to do that, but it did and then lied about it. These are the kinds of things where one would want to then dig in carefully. You know, not with the systems that are around today, which are not dangerous in my opinion, but in a few years, they might have that potential.
And then you would sort of ideally kind of pause and then really get to the bottom of why it was doing those things before one continued. Yeah, going back to Gemini, I'm curious what the bottlenecks were in the development.
Like, why not make it immediately one order of magnitude bigger if scaling works? Well, look, first of all, there are practical limits. How much compute can you actually fit in one data center? And actually, you're bumping up against very interesting distributed computing kind of challenges, right?
Fortunately, we have some of the best people in the world on those challenges and cross data center training, all these kinds of things.
Very interesting challenges, hardware challenges,
and we have our TPUs and so on that we're building and designing all the time,
as well as using GPUs.
And so there's all of that.
And then you also have to, the scaling laws, they don't just work by magic. You still need to scale up the hyperparameters and various innovations are going in all the time with each new scale.
It's not just about repeating the same recipe. At each new scale, you have to adjust the recipe.
And that's a bit of an art form in a way. And you have to sort of almost get new data points.
If you try and extend your predictions, extrapolate them, say several orders of magnitude out, sometimes they don't hold anymore, right? Because new capabilities, they can be step functions in terms of new capabilities and some things hold and other things don't. So often you do need those intermediate data points actually to correct some of your hyperparameter optimization and other things so that the scaling law continues to be true.
So there are various practical limitations on that. So, you know, one order of magnitude is probably about the maximum that you want to sort of do between each era.
Oh, that's so fascinating. You know, in the GPT-4 technical report, they say that they were able to predict the training loss from runs with, you know, tens of thousands of times less compute than GPT-4, that they could see the curve.
But the point you're making is that the actual capabilities that loss implies may not be so clear. Yeah, the downstream capabilities sometimes don't follow from that. You can often predict the core metrics like training loss or something like that, but then it doesn't actually translate into MMLU or math or some other actual capability that you care about.
They're not necessarily linear all the time. So there's sort of non-linear effects there.
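As a hedged illustration of that gap (the compute and loss numbers below are made up, and this is not Gemini's or GPT-4's actual methodology), one can fit a clean power law to loss versus compute at small scales and extrapolate, while still having no guarantee it predicts downstream benchmarks like MMLU:

```python
# Illustrative sketch only: fit loss ~ a * compute^b (b < 0) to small-scale
# runs in log-log space, then extrapolate. The fit can look clean while
# downstream capabilities still jump non-linearly, which is why intermediate
# data points are needed.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21])   # made-up training FLOPs
loss    = np.array([3.10, 2.65, 2.28, 1.98])   # made-up eval losses

# Linear fit in log-log space: log(loss) = log(a) + b * log(compute)
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

def predicted_loss(c):
    return a * c ** b   # b is negative, so predicted loss falls as compute grows

print(predicted_loss(1e25))  # extrapolating 4+ orders of magnitude: treat with caution
```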
What was the biggest surprise to you during the development of Gemini? So something like this happening? Well, I mean, I wouldn't say there was one big surprise, but it was very interesting, you know, trying to train things at that size and learning about all sorts of things from organization or how to babysit such a system and to track it. And I think things like getting a better understanding of the metrics you're optimizing versus the final capabilities that you want.
I would say that's still not a perfectly understood mapping, but it's an interesting one that we're getting better and better at. Yeah, yeah.
There's a perception that maybe other labs are more compute efficient than DeepMind has been with Gemini. I don't know what you make of that perception.
I don't think that's the case. I mean, you know, I think that actually Gemini 1 used roughly the same amount of compute, maybe slightly more than what was rumored for GPT-4.
I don't know exactly what was used. So I think it was in the same ballpark.
I think we're very efficient with our compute and we use our compute for many things. One is not just the scaling, but going back to earlier to these more innovation and ideas, you've got to, you know, it's only useful, a new innovation, a new invention, if it also can scale.
So in a way, you also need quite a lot of compute to do new invention, because you've got to test many things at least some reasonable scale and make sure that they work at that scale. And also some new ideas may not work at a toy scale, but do work at a larger scale.
And in fact, those are the more valuable ones. So you actually, if you think about that exploration process, you need quite a lot of compute to be able to do that.
I mean, the good news is, is I think, you know, we, we're pretty lucky at Google that we, I think this year, certainly we're going to have the most compute by far of any sort of research lab. And, you know, we hope to make very efficient and good use of that in terms of both scaling and the capability of our systems and also new inventions.
Yeah. What's been the biggest surprise to you if you go back to yourself in 2010 when you're starting DeepMind in terms of what AI progresses look like? Did you anticipate back then that it would in some large sense amount to spend, you know, dumping billions of dollars into these models? Or did you have a different sense of what it would look like? We thought that.
And actually, you know, if you I know you've interviewed my colleague Shane and he always thought that. And in terms of like compute curves and then maybe comparing roughly to like the brain and how many neurons and synapses there are very loosely.
But we're actually interestingly in that kind of regime now, roughly in the right order of magnitude of, you know, number of synapses in the brain and the sort of compute that we have. But I think more fundamentally, we always thought that we bet on generality and learning, right? So those were always at the core of any technique we would use.
That's why we triangulated on reinforcement learning and search and deep learning as three types of algorithms that would scale and would be very general and not require a lot of handcrafted human priors, which we thought was the sort of failure mode, really, of the efforts to build AI in the 90s, right? Places like MIT, where there were very logic-based systems, expert systems, you know, masses of hand-coded, hand-crafted human information going into them that turned out to be wrong or too rigid. So we wanted to move away from that.
I think we spotted that trend early, and obviously we used games as our proving ground and we did very well with that. And I think all of that was very successful, and I think it maybe inspired others too. You know, things like AlphaGo, I think, was a big moment for inspiring many others to think, oh, actually, these systems are ready to scale.
And then, of course, with the advent of transformers, invented by our colleagues at Google Research and Brain, that was then, you know, the type of deep learning that allowed us to ingest massive amounts of information. And that, of course, has really turbocharged where we are today.
So I think that's all part of the same lineage. We couldn't have predicted every twist and turn there, but I think the general direction we were going in was the right one.
Yeah. And in fact, it's fascinating, because if you read your old papers, or Shane's old papers, Shane's thesis, I think in 2009, he said, well, the way we would test for AI is whether you can compress Wikipedia, and that's literally the loss function of LLMs.
Or like your own paper in, like, 2016, before Transformers, where you were comparing neuroscience and AI, and it said attention is what's needed. Yes, exactly.
Yeah, exactly. So we had these things called out, and actually we had some early attention papers, but they weren't as elegant as Transformers in the end, neural Turing machines and things like this.
Yeah. And then Transformers was the nicer and more general architecture of that.
Yeah, yeah, yeah. When you extrapolate all this out forward and you think about superhuman intelligence, what does that landscape look like to you? Is it still controlled by a private company? What should the governance of that look like concretely? Yeah, look, I would love, you know, I think that this has to be, this is so consequential, this technology.
I think it's much bigger than any one company or even industry in general. I think it has to be a big collaboration with many stakeholders from civil society, academia, government.
And the good news is, I think with the popularity of the recent chatbot systems and so on, that has woken up many of these other parts of society to the fact that this is coming and what it will be like to interact with these systems. And that's great.
So it's opened up lots of doors for very good conversations. I mean, an example of that was the safety summit in the UK hosted a few months ago, which I thought was a big success to start getting this international dialogue going.
And, you know, I think the whole of society needs to be involved in deciding what do we want to deploy these models for? How do we want to use them? What do we not want to use them for? You know, I think we've got to try and get some international consensus around that. And then also making sure that the benefits of these systems benefit everyone, you know, for the good of everyone and society in general.
And that's why I push so hard on things like AI for science. And I hope that, you know, with things like our spin-out Isomorphic, we're going to start curing diseases, terrible diseases, with AI and accelerate drug discovery, amazing things, climate change and other things, big challenges that face us and face humanity, massive challenges actually, which I'm optimistic we can solve because we've got this incredibly powerful tool of AI coming down the line that we can apply, and I think it will help us solve many of these problems.
So, you know, ideally, we would have a big consensus around that and a big discussion, you know, sort of almost like the UN level, if possible. You know, one interesting thing is if you look at these systems, you chat with them and they're immensely powerful and intelligent.
But it's interesting the extent to which they haven't, like, automated large sections of the economy yet. Whereas if, five years ago, I had showed you Gemini, you'd be like, wow, this is, like, you know, totally coming for a lot of things.
So how do you account for that? Like, what's going on where it hasn't had the broader impact? Yeah, I think that just shows we're still at the beginning of this new era. And I think that for these systems, there are some interesting use cases, you know, where you can use these chatbot systems to summarize stuff for you and maybe do some simple writing, maybe more kind of boilerplate-type writing.
But that's only a small part of what, you know, we all do every day. So I think for more general use cases, I think we still need new capabilities, things like planning and search, but also maybe things like personalization and memory, episodic memory.
So not just long context windows, but actually remembering what we spoke about 100 conversations ago. And I think once those start coming in, I mean, I'm really looking forward to things like recommendation systems that help me find better, more enriching material, whether that's books or films or music and so on.
You know, I would use that type of system every day. So I think we're just scratching the surface of what these AI, say, assistants could actually do for us in our general everyday lives.
And also in our work context as well, I think they're not yet reliable enough to do things like science with them. But I think one day, you know, once we fix factuality and grounding and other things, they could end up becoming the world's best research assistant for you as a scientist or as a clinician.
I want to ask about memory, by the way. You had this fascinating paper in 2007 where you talk about the links between memory and imagination and how they, in some sense, are very similar.
People often claim that these models are just memorizing. How do you think about that claim that people make? Is memorization all you need? Because in some deep sense, that's compression.
What's your intuition here? Yeah, I mean, sort of at the limit, one maybe could try and memorize everything, but it wouldn't generalize out of your distribution. And I think these systems are clearly, I think the early criticisms of these early systems were that they were just regurgitating and memorizing.
But I think clearly the new era, the Gemini, GPT-4 type era, they are definitely generalizing to new constructs. But actually, my thesis, and that paper particularly, which started that area of imagination in neuroscience, showed that, you know, first of all, memory, certainly at least human memory, is a reconstructive process.
It's not a videotape, right? We sort of put it back together from components that seem familiar to us, as an ensemble. And that's what made me think that imagination might be the same thing, except in this case, you're using the same semantic components.
But now you're putting it together into a way that your brain thinks is novel, right, for a particular purpose like planning. And so I do think that that kind of idea is still probably missing from our current systems, this sort of pulling together different parts of your world model to simulate something new that then helps with your planning, which is what I would call imagination.
Yeah, for sure. So yeah, now you guys have the best models in the world, you know, with the Gemini models.
Do you plan on putting out some sort of framework like the other two major AI labs have, of, you know, once we see these specific capabilities, unless we have these specific safeguards, we're not going to continue development or we're not going to ship the product out? Yes, actually, I mean, we already have lots of internal checks and balances, but we're going to start publishing. Actually, you know, sort of watch this space, as we're working on a whole bunch of blog posts and technical papers that we'll be putting out in the next few months, along similar lines to things like responsible scaling policies and so on.
We have those implicitly internally, and various safety councils and so on, like the ones Shane chairs. But it's time for us to talk about that more publicly, I think, so we'll be doing that throughout the course of the year. That's great to hear. And another thing I'm curious about: it's not only the risk of, you know, the deployed model being something that people can use to do bad things, but also rogue actors, foreign agents and so forth, being able to steal the weights and then fine-tune them to do crazy things. How do you think about securing the weights to make sure something like this doesn't happen, making sure only a very key group of people have access to them, and so forth? Yeah, it's interesting.
So first of all, there's sort of two parts of this. One is security, one is open source, maybe we can discuss.
But the security, I think, is super key, like just sort of normal cybersecurity-type things. And I think we're lucky at Google DeepMind, we're kind of behind Google's firewall and cloud protection, which is, you know, I think best in class in the world corporately.
So we already have that protection. And then behind that, we have specific DeepMind protections within our code base.
So it's sort of a double layer of protection. So I feel pretty good about that.
I mean, you can never be complacent on that, but I feel it's already sort of best in the world in terms of cyber defenses. But we've got to carry on improving that. And again, things like the hardened sandboxes could be a way of doing that as well.
And maybe even there are, you know, specifically secure data centers or hardware solutions to this too that we're thinking about. I think that maybe in the next three, four, five years, we would also want air gaps and various other things that are known in the security community.
So I think that's key. And I think all frontier labs should be doing that because otherwise, you know, nation states and other things, rogue nation states and other dangerous actors, there would be obviously a lot of incentive for them to steal things like the weights.
And then, of course, open source is another interesting question. We're huge proponents of open source and open science. I mean, we've published thousands of papers, and things like AlphaFold and Transformers, of course, and AlphaGo, all of these things we put out there into the world, published and open sourced many of them, GraphCast most recently, our weather prediction system. But when it comes to the core technology, the foundational technology, which is very general purpose, the question I would have for the open-source proponents is: how does one stop bad actors, individuals or up to rogue states, taking those same open-source systems and repurposing them, because they're general purpose, for harmful ends? So we have to answer that question.
And I haven't heard a compelling, I mean, I don't know what the answer is to that, but I haven't heard a compelling, clear answer to that from proponents of just sort of open sourcing everything. So I think there has to be some balance there, but obviously it's a complex question of to what that is.
Yeah, yeah. I feel like tech doesn't get the credit it deserves for, like, funding hundreds of billions of dollars' worth of R&D.
And obviously you have DeepMind with systems like AlphaFold and so on. But when we talk about securing the weights, as we said, maybe right now it's not something that is going to cause the end of the world or anything.
But as these systems get better and better, there's the worry that a foreign agent or something gets access to them. Presumably, right now there are dozens to hundreds of researchers who have access to the weights. What's the plan for getting the weights into a situation-room kind of setup, where if you need access to them, it's some extremely strenuous process, and no individual can really take them out? Yeah, yeah, I mean, one has to balance that with allowing for collaboration and speed of progress. Actually, another interesting thing is, of course you want brilliant independent researchers from academia, or things like the UK AI Safety Institute and the US one, to be able to kind of red team these systems.
So one has to expose them to a certain extent, although that's not necessarily the weights. And then, you know, we have a lot of processes in place to make sure that only those people who need access have access.
And right now, I think we're still in the early days of those kinds of systems being at risk.
And as that, as these systems become more powerful, more general and more capable, I think one has to look at the access question. Mm.
So some of these other labs have specialized in safety, like Anthropic, for example, with interpretability. Do you have some sense of where you guys might have an edge, so that, you know, now that you have the frontier model and you're going to scale up safety, you're going to be able to put out the best frontier research on safety? Yeah, I think, you know, well, we helped pioneer RLHF and other things like that, which can also obviously be used for performance, but also for safety.
I think that, you know, a lot of the self-play ideas and these kinds of things could also be used potentially to auto test a lot of the boundary conditions that you have with the new systems. I mean, part of the issue is that with these sort of very general systems, there's so much surface area to cover about how these systems behave.
So I think we are going to need some automated testing. And again, with things like simulations and games, very realistic environments, virtual environments, I think we have a long history in that and using those kinds of systems and making use of them for building AI algorithms.
So I think we can leverage all of that history. And then, you know, around at Google, we're very lucky.
We have some of the world's best cybersecurity experts, hardware designers. So I think we can bring that to bear in, you know, for security and safety as well.
Great, great. Let's talk about Gemini.
Yeah. So, you know, now you guys have the best model in the world.
So I'm curious, the default way to interact with these systems has been through chat so far. Now that we have multimodal and all these new capabilities, how do you anticipate that changing? Or do you think that'll still be the case? Yeah, I think we're just at the beginning of actually understanding what a full multimodal model system, how exciting that might be to interact with.
And it'll be quite different to, I think, what we're used to today with the chatbots. I think the next versions of this over in the next year, 18 months, maybe we'll have some contextual understanding around the environment around you through a camera or whatever it is, a phone.
I could imagine that, or the next awesome glasses, as the next step. And then I think that we'll start becoming more fluid in understanding, oh, let's sample from a video.
Let's use voice. Maybe even eventually things like touch.
And if you think about robotics and other things, sensors, other types of sensors. So I think the world's about to become very exciting, I think, in the next few years as we start getting used to the idea of what true multimodality means.
On the robotics subject, Ilya said when he was on the podcast that the reason OpenAI gave up on robotics was because they didn't have enough data in that domain, at least at the time they were pursuing it. I mean, you guys have put out different things like the Robotic Transformer and other things.
Do you think that's still a bottleneck for robotics progress, or will we see progress in the world of atoms as well as the world of bits? Yeah, well, we're very excited about our progress with things like Gato and RT-2, you know, the Robotic Transformer. And we've always liked robotics, we've had, you know, amazing research there, and we still have that going now, because we like the fact that it's a data-poor regime, because that pushes us on very interesting research directions that we think are going to be useful anyway, like sample efficiency and data efficiency in general, and transfer learning, learning from simulation and transferring that to reality.
All of these, you know, sim-to-real, all of these very interesting, actually general challenges that we would like to solve, like the control problem.
So we've always pushed hard on that. And actually, I think, so Ilya's right, that is more challenging because of the data problem.
But it's also, I think we're starting to see the beginnings of these large models being transferable to the robotics regime, learning in the general domain, language domain, and other things, and then just treating tokens like Gato as any type of token. The token could be an action, it could be a word, it could be part of an image, a pixel, or whatever it is.
And that's what I think true multimodality is. And to begin with, it's harder to train a system like that than a straightforward text language system.
But actually, you know, going back to our early conversation about transfer learning, you start seeing that in a true multimodal system, the different modalities benefit each other.
So you get better at language because you now understand a little bit about video.
So I do think it's harder to get going, but actually, ultimately, we'll have a more general, more capable system like that. Whatever happened to Gato? That was super fascinating, that you could have it play games and also do video and also do text.
Yeah, we're still working on those kinds of systems, but you can imagine we're just trying to, those ideas we're trying to build into our future generations of Gemini, you know, to be able to do all of those things and, and, and robotics transformers and, you know, things like that are kind of, you can think of them as sort of follow-ups to that. Well, we see asymmetric progress towards the domains in which the self-play kinds of things we're talking about will be especially powerful.
So math and code, you know, obviously recently you have these papers out about this, um, or, yeah, you can, you can use these things to do, um, uh, really cool novel things. Uh, will they just be like superhuman coders, but like in other ways, they might be still worse than humans or how do you think about that? Yeah.
So look, I think we're making great progress with math and things like theorem proving and coding. But it's still interesting: if one looks at creativity in general, and scientific endeavor in general, I think we're getting to the stage where our systems could help the best human scientists make their breakthroughs quicker, almost triage the search space in some ways, or perhaps find a solution like AlphaFold does with a protein structure.
But they're not at the level where they can create the hypothesis themselves, or ask the right question. And as any top scientist will tell you, that's the hardest part of science: actually asking the right question, boiling down that space to the critical question or critical problem we should go after, and then formulating that problem in the right way to attack it.
And that's not something we really have any idea how our systems could do. But they are suitable for searching large combinatorial spaces, if one can specify the problem in that way with a clear objective function.
So that's already very useful for many of the problems we deal with today, but not for the most high-level creative problems. Hmm.
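As a toy illustration of what "searching a large combinatorial space against a clear objective function" can look like, here is a minimal beam-search sketch; the alphabet, sequence length, and scoring function are invented stand-ins for whatever a real system would optimize.

```python
# Toy combinatorial search: propose candidates, score them, keep the best.
ALPHABET = "ACGT"   # a made-up molecular-style alphabet
LENGTH = 8          # 4**8 = 65,536 candidates, already combinatorial

def objective(seq: str) -> float:
    """Stand-in objective: reward G/C content, penalize adjacent repeats."""
    gc = sum(c in "GC" for c in seq) / len(seq)
    repeats = sum(a == b for a, b in zip(seq, seq[1:]))
    return gc - 0.2 * repeats

def beam_search(width: int = 16) -> str:
    """Grow candidates one symbol at a time, keeping only the top `width`."""
    beam = list(ALPHABET)
    for _ in range(LENGTH - 1):
        expanded = [s + c for s in beam for c in ALPHABET]
        beam = sorted(expanded, key=objective, reverse=True)[:width]
    return beam[0]

best = beam_search()
print(best, round(objective(best), 3))
```

At this size you could simply enumerate everything, but the same propose-and-score loop is what makes far larger spaces (protein structures, programs, proofs) tractable once a clear objective exists, and it is exactly what is missing when the hard part is choosing the question itself.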
So DeepMind has obviously published all kinds of interesting stuff, speeding up science in different areas. How do you think about that in this context: if you think AGI is going to happen in the next 10 or 20 years, why not just wait for the AGI to do it for you? Why build these domain-specific solutions? Yeah, well, we don't know how long AGI is going to take.
And we always used to say, you know, back even when we started DeepMind that we don't have to wait for AGI in order to bring incredible benefits to the world. And especially, you know, my personal passion has been AI for science and health.
And you can see that with things like AlphaFold and all of our various Nature papers across different domains, our materials science work and so on. I think there are lots of exciting directions, and also impact in the world through products, too.
I think it's a very exciting and unique opportunity we have as part of Google: they have dozens of billion-user products that we can immediately ship our advances into, and then billions of people can have their daily lives improved, enriched, and enhanced. So I think it's a fantastic opportunity for impact on all those fronts.
And I think the other reason, from the point of view of AGI specifically, is that it battle-tests your ideas, right?
So you don't want to be in a sort of research bunker where you're just theoretically pushing some things forward, but then your internal metrics start deviating from the real-world things that people would care about, or from real-world impact. You get a lot of direct feedback from these real-world applications, and that tells you whether your systems really are scaling, or whether we need to be more data-efficient or sample-efficient, because most real-world challenges require that. And so it kind of keeps you honest, and keeps nudging and steering your research directions to make sure they're on the right path. So I think it's fantastic, and of course the world and society benefit from that along the way, maybe many years before AGI arrives. Yeah. Well, the development of Gemini is super interesting because it comes right on the heels of merging these two different organizations, Brain and DeepMind.
Yeah, I'm curious, what have been the challenges there?
What have been the synergies?
And it's been successful in the sense that you have the best model in the world now. Well, look, it's been fantastic, actually, over the last year.
Of course, it's been challenging, like any big integration coming together, but you're talking about two world-class organizations with long, storied histories of inventing many important things, from deep reinforcement learning to transformers. And so it's very exciting actually pulling all of that together and collaborating much more closely.
We always used to be collaborating, but more on a sort of project by project basis versus a much deeper, broader collaboration that we have now. And Gemini is the first fruit of that collaboration, including the name Gemini, actually, implying twins.
And of course, a lot of other things are made more efficient, like pooling compute resources, ideas, and engineering. At the stage we're at now, where a huge amount of world-class engineering has to go into building the frontier systems, I think it makes sense to coordinate that more closely. Yeah.
So, I mean, you and Shane started DeepMind partly because you were concerned about safety. You saw AGI coming as like a live possibility.
Do you think the people who were formerly part of Brain, the other half of Google DeepMind now, approach it in the same way? Have there been cultural differences on that question? Yeah, I think overall, and this is one of the reasons we joined forces with Google back in 2014, the entirety of Google and Alphabet, not just Brain and DeepMind, take these questions of responsibility very seriously. And, you know, our kind of mantra is to try and be bold and responsible with these systems.
So, you know, I would class it this way: I'm obviously a huge techno-optimist, but I want us to be cautious with that, given the transformative power of what we're bringing into the world collectively. I think it's going to be one of the most important technologies humanity will ever invent. So we've got to put all our efforts into getting this right, and be thoughtful and also humble about what we know and don't know about the systems that are coming and the uncertainties around that. And in my view, the only sensible approach when you have huge uncertainty is to be cautiously optimistic and use the scientific method to try and have as much foresight and understanding as possible about what's coming down the line, and the consequences of that, before it happens.
You don't want to be live A/B testing out in the world with these very consequential systems, because unintended consequences may be quite severe. So I want us to move away as a field from a sort of move-fast-and-break-things attitude, which has maybe served the Valley very well in the past and obviously created important innovations.
But I think in this case, you know, we want to be bold with the positive things that it can do, and make sure we realize the benefits in things like medicine and science and advancing all of those things, whilst being responsible and thoughtful and, as far as possible, mitigating the risks. Yeah, yeah.
And that's why it seems like responsible scaling policies are a very good empirical way to pre-commit to these kinds of things. Yes, exactly.
Yeah. And I'm curious, for example, when you're doing these evaluations: if it turns out your next model could help a layperson build a pandemic-class bioweapon or something, how would you think, first of all, about making sure those weights are secure so that it doesn't get out? And second, what would have to be true for you to be comfortable deploying that system? How would you make sure that latent capability isn't exposed? Yeah, well, first, the secure-model part I think we've covered with the cybersecurity: making sure that's handled well and you're monitoring all those things.
I think if a capability like that was discovered through red-teaming or external testing by government institutes, academia, or independent testers, then we would have to fix that loophole, depending on what it was. That might require a different kind of constitution or different guardrails, or more RLHF to avoid it, or removing some training data.
I mean, depending on what the problem is, I think there could be a number of mitigations. And so the first part is making sure you detect it ahead of time.
So that's about the right evaluations and right benchmarking and right testing. And then the question is how one would fix that before you deployed it.
But I think it would need to be fixed before it was deployed generally, for sure, if it was that kind of exposure. Right, right.
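One hedged sketch of what "detect it ahead of time, then fix it before deployment" could look like as a process; the evaluation names, thresholds, and mitigation list below are entirely hypothetical and not any lab's actual policy.

```python
# Hypothetical pre-deployment gate: run dangerous-capability evals and
# hold the release until every flagged capability has a mitigation plan.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EvalResult:
    name: str
    score: float      # higher = more concerning capability uplift
    threshold: float  # pre-committed red line for this eval

    @property
    def failed(self) -> bool:
        return self.score >= self.threshold

CANDIDATE_MITIGATIONS = [
    "remove or filter training data",
    "different guardrails / constitution",
    "additional RLHF",
    "restrict the deployment surface",
]

def deployment_gate(results: List[EvalResult]) -> Dict[str, object]:
    """Deploy only if no eval crosses its red line; otherwise list holds."""
    failures = [r for r in results if r.failed]
    if not failures:
        return {"deploy": True, "holds": []}
    return {
        "deploy": False,
        "holds": [
            {"eval": r.name, "score": r.score,
             "candidate_mitigations": CANDIDATE_MITIGATIONS}
            for r in failures
        ],
    }

# Example: a made-up bio-uplift eval crosses its red line, so the gate holds.
results = [
    EvalResult("bio_uplift_for_layperson", score=0.42, threshold=0.30),
    EvalResult("cyber_offense_uplift", score=0.10, threshold=0.50),
]
print(deployment_gate(results))
```

The mitigations mirror the ones mentioned above (guardrails, RLHF, training-data removal); the point of the sketch is only that the evaluation and the fix both happen before general deployment, not after.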
Final question. You've been thinking in terms of the end goal of AGI since back in 2010, when other people thought it was ridiculous.
Now that we're seeing this slow takeoff, where we're actually seeing this generalization and intelligence, what has that been like psychologically? Had you just sort of priced it into your world model, so it's not new news for you? Or is actually seeing it live something where you go, wow, something has really changed? What does it feel like? Yeah, well, for me, yes, it's already priced into my world model of how things were going to go, at least from the technology side. But obviously we didn't necessarily anticipate that the general public would be that interested this early in the sequence of things. If, say, ChatGPT and chatbots hadn't got the interest they ended up getting, which I think was quite surprising to everyone, that people were ready to use these things even though they were lacking in certain directions, impressive though they are, then we would have produced more specialized systems, I think, built off of the main track, like AlphaFold and AlphaGo and so on, and our scientific work.
And then I think the general public maybe would have only paid attention later down the road, in a few years' time, when we have more generally useful assistant-type systems. So that's been interesting.
So that's created a different type of environment that we're now all operating in as a field. And it's a little bit more chaotic, because there are so many more things going on, there's so much VC money going into it, and everyone's sort of almost losing their minds over it, I think.
And the thing I worry about is making sure that, as a field, we act responsibly, thoughtfully, and scientifically about this, and use the scientific method to approach it in, as I said, an optimistic but careful way. I've always believed that's the right approach for something like AI.
And I just hope that doesn't get lost in this huge rush. Sure, sure.
Well, I think that's a great place to close. Demis, thank you so much for your time and for coming on the podcast. Thanks.
It's been a real pleasure.
Hey, everybody.
I hope you enjoyed that episode.
As always, the most helpful thing you can do is to share the podcast.
Send it to people you think might enjoy it.
Put it on Twitter, in your group chats, etc.
Just blitz the world.
Appreciate you listening.
I'll see you next time.
Cheers.