Demis Hassabis - Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat
Here is my episode with Demis Hassabis, CEO of Google DeepMind
We discuss:
* Why scaling is an artform
* Adding search, planning, & AlphaZero type training atop LLMs
* Making sure rogue nations can't steal weights
* The right way to align superhuman AIs and do an intelligence explosion
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Timestamps
(0:00:00) - Nature of intelligence
(0:05:56) - RL atop LLMs
(0:16:31) - Scaling and alignment
(0:24:13) - Timelines and intelligence explosion
(0:28:42) - Gemini training
(0:35:30) - Governance of superhuman AIs
(0:40:42) - Safety, open source, and security of weights
(0:47:00) - Multimodal and further progress
(0:54:18) - Inside Google DeepMind
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Press play and read along
Transcript
Speaker 1 So, I wouldn't be surprised if we had AGI-like systems within the next decade.
Speaker 1 It was pretty surprising to almost everyone, including the people who first worked on the scaling hypotheses, how far it's gone.
Speaker 1 In a way, I look at the large models today, and I think they're almost unreasonably effective for what they are. It's an empirical question whether that will hit an asymptote or a brick wall.
Speaker 1 I think no one knows.
Speaker 2 When you think about superhuman intelligence, is it like still controlled by a private company?
Speaker 1 As Gemini are becoming more multimodal and we start ingesting audio-visual data as well as text data, I do think our systems are going to start to understand the physics of the real world better.
Speaker 1 The world's about to become very exciting, I think, in the next few years as we start getting used to the idea of what true multimodality means.
Speaker 2 Okay, today it is a true honor to speak with Demis Hassabis, who is the CEO of Google DeepMind. Demis, welcome to the podcast.
Speaker 1 Thanks for having me.
Speaker 2 First question. Given your neuroscience background, how do you think about intelligence?
Speaker 2 Specifically, do you think it's like one higher-level general reasoning circuit or do you think it's thousands of independent sub-skills and heuristics?
Speaker 1 Well, it's interesting because intelligence is so
Speaker 1 broad and
Speaker 1 what we use it for is so sort of generally applicable. I think that suggests that there must be some sort of high-level
Speaker 1 common things,
Speaker 1 common kind of algorithmic themes, I think, around how the brain processes the world around us. So,
Speaker 1 of course, then there are specialized parts of the brain that do specific things.
Speaker 1 But I think there are probably some underlying principles that underpin all of that.
Speaker 2 Yeah. How do you make sense of the fact that in these LLMs though, when you give them a lot of data in any specific domain, they tend to get asymmetrically better in that domain.
Speaker 2 Wouldn't we expect a sort of like general improvement across all the different areas?
Speaker 1 Well, first of all, I think you do actually sometimes get surprising improvement in other domains when you improve in a specific domain.
Speaker 1 So for example, when these large models sort of improve at coding, that can actually improve their general reasoning.
Speaker 1 So, there is some evidence of some transfer, although I think we would like a lot more evidence of that.
Speaker 1 But also, you know, that's how the human brain learns too.
Speaker 1 If we experience and practice a lot of things like chess or writing, creative writing, or whatever that is, we also tend to specialize and get better at that specific thing, even though we're using sort of general learning techniques and general learning systems in order to
Speaker 1 get good at that domain.
Speaker 2 Yeah. What's been the most surprising example of this kind of transfer for you? Like you see language and code or images and text.
Speaker 1 Yeah, I think probably, I mean, I'm hoping we're going to see a lot more of this kind of transfer, but I think things like getting better at coding and math, then generally improving your reasoning, that is how it works with us as human learners.
Speaker 1 But I think it's interesting seeing that
Speaker 1 in these artificial systems.
Speaker 2 And can you see the sort of mechanistic way in which,
Speaker 2 let's say in the language and code example, there's like, I found the place in a neural network that's getting better with both the language and the code, or is that too far down in the weeds?
Speaker 1 Yeah, well, I don't think our
Speaker 1 analysis techniques are quite sophisticated enough to be able to hone in on that.
Speaker 1 I think that's actually one of the areas that a lot more research needs to be done on kind of mechanistic analysis of the representations that these systems build up.
Speaker 1 And, you know, I sometimes like to call it virtual brain analytics. In a way, it's a bit like doing fMRI or single-cell recording from a real brain.
Speaker 1 What's the analogous sort of analysis techniques for these artificial minds? And there's a lot of great work going on on this sort of stuff.
Speaker 1 People like Chris Ola, I really like his work, and a lot of computational neuroscience techniques I think could be brought to bear on analyzing these current systems we're building.
Speaker 1 In fact, I try to encourage a lot of my computational neuroscience friends to start thinking in that direction and applying their know-how
Speaker 1 to
Speaker 1 the large models. Yeah,
Speaker 2 what do other AI researchers not understand about human intelligence that
Speaker 2 you have some sort of like insight on given your neuroscience background?
Speaker 1 I think
Speaker 1 neuroscience has added a lot. If you look at the last sort of 10, 20 years that we've been at it, at least, and I've been thinking about this for 30 plus years,
Speaker 1 I think in the earlier days of the sort of new wave of AI, I think neuroscience was providing a lot of interesting directional clues.
Speaker 1 So things like reinforcement learning, combining that with deep learning, some of our pioneering work we did there, things like experience replay,
Speaker 1 even the notion of attention, which has become super important.
Speaker 1 A lot of those original sort of inspirations come from some understanding about how the brain works. Not the exact specifics.
Speaker 1 Of course, you know, one's an engineered system, the other one's a natural system. So, it's not so much about a one-to-one mapping of a specific algorithm.
Speaker 1 It's more kind of inspirational direction, maybe some ideas for architecture, or algorithmic ideas, or representational ideas.
Speaker 1 And because, you know, the brain is an existence proof that general intelligence is possible at all, I think
Speaker 1 the history of human endeavors has been that once you know something's possible, it's easier to push hard in that direction because you know it's a question of effort then and sort of a question of when, not if.
Speaker 1 And that allows you to, you know, I think make progress a lot more quickly. So I think neuroscience has had a lot of,
Speaker 1 has inspired a lot of the thinking,
Speaker 1 at least in a soft way, behind where we are today.
Speaker 1 But as for going forwards,
Speaker 1 I think that there's still a lot of interesting
Speaker 1 things to be resolved around planning and how does the brain construct the right world models.
Speaker 1 I studied, for example, how the brain does imagination, or you can think of it as mental simulation.
Speaker 1 So how do we create very rich visual spatial simulations of the world in order for us to plan better?
Speaker 2 Yeah, actually, I'm curious how you think that will sort of interface with LLMs.
Speaker 2 So obviously, DeepMind is at the frontier and has been for many years, you know, with systems like AlphaZero and so forth, of having these agents who can like think through different steps to get to an end outcome.
Speaker 2 Will this just be, is the path for LLMs to have this sort of tree search kind of thing on top of them? How do you think about this?
Speaker 1 I think that's a super promising direction in my opinion. So we've got to carry on improving the large models and we've got to carry on
Speaker 1 basically making them more and more accurate predictors of the world. So in effect, making them more and more reliable world models.
Speaker 1 That's clearly a necessary, but I would say probably not sufficient component of an AGI system.
Speaker 1 And then on top of that, I would, you know, we're working on things like alpha zero-like planning mechanisms on top that make use of that model in order to make concrete plans to achieve certain goals in the world
Speaker 1 and perhaps sort of chain, you know, chains of thought together, or lines of reasoning together, and maybe use search to kind of explore massive spaces of possibility.
Speaker 1 I think that's kind of missing from our current large models.
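As a concrete illustration of the "search and planning on top of a large model" idea discussed above, here is a minimal, hypothetical Python sketch. The `propose_thoughts` and `score_chain` functions are made-up placeholders standing in for an LLM proposal step and a learned value model; this shows the general shape of the technique, not Google DeepMind's actual method.

```python
import heapq
import random

def propose_thoughts(chain, k=4):
    # Placeholder for an LLM call that samples k candidate next reasoning steps.
    return [f"step {len(chain)}.{i}" for i in range(k)]

def score_chain(chain):
    # Placeholder for a learned value/reward model estimating how promising
    # a partial chain of reasoning is.
    return random.random()

def search_over_thoughts(problem, max_expansions=50, k=4):
    """Best-first search over chains of thought: the model proposes steps,
    a value model scores them, and search explores the most promising branches."""
    frontier = [(-score_chain([problem]), [problem])]
    best_chain, best_score = [problem], float("-inf")
    for _ in range(max_expansions):
        if not frontier:
            break
        neg_score, chain = heapq.heappop(frontier)
        if -neg_score > best_score:
            best_chain, best_score = chain, -neg_score
        for thought in propose_thoughts(chain, k):
            new_chain = chain + [thought]
            heapq.heappush(frontier, (-score_chain(new_chain), new_chain))
    return best_chain

print(search_over_thoughts("How many primes are below 100?"))
```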
Speaker 2 How do you get past the sort of
Speaker 2 immense amount of compute that these approaches tend to require? So, even the AlphaGo system was a pretty expensive system because you had to do this sort of running an LLM on each node of the tree.
Speaker 2 How do you anticipate that'll be made more efficient?
Speaker 1 Well, I mean, one thing is Moore's Law
Speaker 1 tends to help if
Speaker 1 every year, of course,
Speaker 1 more computation comes in. But we focus a lot on efficient, you know, sample efficient methods and
Speaker 1 reusing existing data, things like experience replay,
Speaker 1 and also just looking at more efficient ways. I mean, the better your world model is, the more efficient your search can be.
Speaker 1 So one example I always give with AlphaZero, our system that plays Go and chess and any game, is that it's stronger than human world champion level at all these games.
Speaker 1 And it uses a lot less search than a brute-force method like Deep Blue, say, to play chess.
Speaker 1 One of these traditional systems, Stockfish or Deep Blue, would maybe look at millions of possible moves for every decision it's going to make. AlphaZero and AlphaGo looked at around tens of thousands of possible positions in order to make a decision about what to move next. But a human grandmaster, a human world champion, probably only looks at a few hundred moves, even the top players, in order to make their very good decision about what to play next.
Speaker 1 So that suggests that obviously the brute force systems don't have any real model other than the heuristics about the game. AlphaZero has quite a decent model,
Speaker 1 But the human, you know, the top human players have a much richer, much more accurate model of Go or chess. So that allows them to make world-class decisions on a very small amount of search.
Speaker 1 So I think there's still, there's a sort of trade-off there, like, you know, if you improve the models, then I think your search can be more efficient and therefore you can get further with your search.
Speaker 1 Yeah.
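To make the "better model, less search" trade-off concrete, here is a toy Python version of the PUCT selection rule used in AlphaZero-style Monte Carlo tree search. The child statistics below are invented for illustration; the point is that a sharp policy prior and a good value estimate concentrate visits on a few promising moves instead of millions of brute-force evaluations.

```python
import math

def puct_score(value, prior, parent_visits, visits, c_puct=1.5):
    # AlphaZero-style selection rule (PUCT): the learned policy prior and value
    # estimate focus search on a few promising moves, which is why far fewer
    # positions need evaluating than in brute-force search.
    return value + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

def select_child(children, parent_visits):
    # children: list of dicts with 'value', 'prior', 'visits' (toy statistics).
    return max(children, key=lambda ch: puct_score(
        ch["value"], ch["prior"], parent_visits, ch["visits"]))

children = [
    {"value": 0.20, "prior": 0.05, "visits": 10},
    {"value": 0.60, "prior": 0.80, "visits": 40},  # the prior strongly favours this move
    {"value": 0.10, "prior": 0.15, "visits": 5},
]
print(select_child(children, parent_visits=55))
```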
Speaker 2 I have two questions based on that. The first being: with AlphaGo, you had a very concrete win condition of, you know, at the end of the day, do I win this game of Go or not?
Speaker 2 And you can reinforce on that.
Speaker 2 When you're just thinking of like an LLM putting out thought,
Speaker 2 do you think there will be this kind of ability to discriminate in the end, whether that was like a good thing to reward or not?
Speaker 1 Well, of course, that's why we, you know, we pioneered and DeepMind's sort of famous for using games as a proving ground.
Speaker 1 Partly because obviously it's efficient to research in that domain.
Speaker 1 But the other reason is obviously it's extremely easy to specify a reward function: winning the game or improving the score, something like that, is sort of built into most games.
Speaker 1 So that is one of the challenges of real-world systems: how does one define the right objective function, the right reward function, and the right goals, and specify them in a general way, but one that's specific enough and actually points the system in the right direction?
Speaker 1 And for real-world problems, that can be a lot harder.
Speaker 1 But actually, if you think about it in even scientific problems, there are usually ways that you can specify the goal that you're after.
Speaker 2 And then when you think about human intelligence, you're just saying, well, you know, the humans thinking about these thoughts are just super sample efficient.
Speaker 2 Einstein coming up with relativity, right? There's just like thousands of possible permutations of the equations.
Speaker 2 Do you think it's also this sort of sense of like different heuristics of like, I'm going to try this approach instead of this?
Speaker 2 Or is it a totally different way of coming up with that solution than what AlphaGo does to plan the next move?
Speaker 1 Yeah.
Speaker 1 Well, look, I think it's different because our brains are not built for doing Monte Carlo tree search, right? It's just not the way our organic brains work. So I think that in order to compensate for that, people like Einstein, you know, their brains, using their intuition, and maybe we'll come to what intuition is, they use their knowledge and their experience to build, in Einstein's case, extremely accurate models of physics, including these sort of mental simulations.
Speaker 1 I think if you read about Einstein and how he came up with things, he used to visualize and really kind of feel what these physical systems should be like, not just the mathematics of it, but have a really intuitive feel for what they would be like in reality. And that allowed him to think these sort of very outlandish thoughts at the time. So I think it's the sophistication of the world models that we're building. If you imagine your world model can get you to a certain node in a tree that you're searching, and then you just do a little bit of search around that leaf node, that gets you to these original places.
Speaker 1 But obviously, if your model, and your judgment on that model, is very, very good, then you can pick which leaf nodes you should sort of expand with search much more accurately.
Speaker 1 So therefore, overall, you do a lot less search. I mean, there's no way that, you know, any human could do a kind of brute force search over any kind of significant space.
Speaker 2 Yeah, yeah, yeah.
Speaker 2 A big sort of open question right now is whether RL will allow these models to do the self-play synthetic data to get over the data bottleneck. It sounds like you're optimistic about this.
Speaker 1 Yeah, I'm very optimistic about that. I mean, I think,
Speaker 1 well, first of all, there's still a lot more data, I think, that can be used, especially when one includes multimodal data and video and these kinds of things.
Speaker 1 And obviously, you know, society is adding more data all the time to the internet and things like that.
Speaker 1 But I think that there's a lot of scope for creating synthetic data.
Speaker 1 We're looking at that in different ways, partly through simulation, using very realistic game environments, for example, to generate realistic data, but also self-play. So that's where systems
Speaker 1 interact with each other or converse with each other.
Speaker 1 And that, in a sense, you know, worked very well for us with AlphaGo and AlphaZero, where we got the systems to play against each other and actually learn from each other's mistakes and build up a knowledge base that way.
Speaker 1 And I think there are some good analogies for that here, although it's a little bit more complicated when you're trying to build up general world data.
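A minimal sketch of the self-play loop described above, assuming hypothetical agent and environment interfaces (`act`, `update`, `snapshot`, `reset`, `step`). The real AlphaGo/AlphaZero pipelines are far more elaborate, but the basic structure is the same: the system generates its own training data by playing against itself.

```python
def self_play_game(agent, opponent, env):
    # One game where the agent plays against (a snapshot of) itself; the
    # trajectory of states, actions and rewards becomes new training data.
    trajectory, state, done, turn = [], env.reset(), False, 0
    players = [agent, opponent]
    while not done:
        action = players[turn % 2].act(state)
        next_state, reward, done = env.step(action)
        trajectory.append((state, action, reward))
        state, turn = next_state, turn + 1
    return trajectory

def self_play_training(agent, env, num_games=1000):
    # Outer loop in the spirit of AlphaGo/AlphaZero: each generation of the
    # agent generates the data that trains the next one.
    for _ in range(num_games):
        opponent = agent.snapshot()   # hypothetical: a frozen copy of the current agent
        trajectory = self_play_game(agent, opponent, env)
        agent.update(trajectory)      # hypothetical: policy/value update from outcomes
    return agent
```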
Speaker 2 How do you get to the point where these models, the sort of synthetic data they're outputting, the self-play they're doing,
Speaker 2 is not just more of what they've already got in their data set, but is something they haven't seen before?
Speaker 2 You know what I mean, to actually improve the abilities?
Speaker 1 Yeah, so there I think there's a whole science needed, and I think we're still in the nascent stages of it, of data curation and data analysis.
Speaker 1 So actually analyzing the holes that you have in your data distribution, which is also important for things like removing fairness and bias issues from the system, is about trying to really make sure that your data set is representative of the distribution you're trying to learn.
Speaker 1 And
Speaker 1 there are many tricks there one can use like overweighting or replaying certain parts of the data.
Speaker 1 Or you could imagine, if you identify some gap in your data set, that's where you put your synthetic generation capabilities to work.
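A toy sketch of the gap-finding and overweighting step just described. The domains, target shares, and numbers are invented for illustration; real data-curation pipelines are much more involved.

```python
from collections import Counter

def plan_data_mix(examples, get_domain, target_share):
    # Toy version of the curation step described above: find domains that fall
    # short of a target share and mark them for overweighting/replay or for
    # targeted synthetic generation.
    counts = Counter(get_domain(ex) for ex in examples)
    total = sum(counts.values())
    plan = {}
    for domain, want in target_share.items():
        have = counts.get(domain, 0) / total if total else 0.0
        if have < want:
            plan[domain] = {
                "upweight": want / max(have, 1e-9),              # replay/overweight existing data
                "synthetic_needed": int((want - have) * total),  # or generate data to fill the gap
            }
    return plan

# Hypothetical corpus where code is over-represented relative to math.
examples = [{"domain": "code"}] * 80 + [{"domain": "math"}] * 20
print(plan_data_mix(examples, lambda ex: ex["domain"],
                    target_share={"code": 0.5, "math": 0.5}))
```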
Speaker 2 Yeah. So
Speaker 2 nowadays people are paying attention to
Speaker 2 the RL stuff that
Speaker 2 DeepMind did many years before.
Speaker 2 What are the sort of either early research directions or something that was done way back in the past, but people just haven't been paying attention to that you think will be a big deal, right?
Speaker 2 Like there's a time where people weren't paying attention to scaling. What's the thing now where it's like totally underrated?
Speaker 1 Well, actually, I think that, you know, the history of the sort of last couple of decades has been things coming in and out of fashion, right?
Speaker 1 And I do feel like a while ago, you know, maybe five-plus years ago, we were pioneering with AlphaGo and, before that, DQN, which was the first system
Speaker 1 that worked on Atari. That was our first big system, really more than 10 years ago now, that scaled up Q-learning and reinforcement learning techniques, combined that with deep learning to create deep reinforcement learning, and then used that to master some pretty complex tasks like playing Atari games just from the pixels.
Speaker 1 And I do actually think a lot of those ideas need to come back in again.
Speaker 1 And as we talked about earlier, combine it with the new advances in large models and large multimodal models, which is obviously very exciting as well.
Speaker 1 So I do think there's a lot of potential for combining some of those older ideas together with the newer ones.
Speaker 2 Is there any potential for something to come,
Speaker 2 the AGI to eventually come from just a pure RL approach?
Speaker 2 Like the way we're talking about it, it sounds like the LLM will form the right prior and then this sort of tree search will go on top of that.
Speaker 2 Or is there a possibility of it coming completely from scratch?
Speaker 1 I certainly, you know, theoretically, think there's no reason why you couldn't go full AlphaZero on it. And there are some people
Speaker 1 here at Google DeepMind and in the RL community who work on that, right? Fully assuming no priors,
Speaker 1 no data, and just build all knowledge from scratch.
Speaker 1 And I think that's valuable because, of course, those systems, those ideas, and those algorithms should also work when you have some knowledge, too.
Speaker 1 But having said that, I think by far my bet would be that the quickest way to get to AGI, and the most plausible way, is to use all the knowledge that exists in the world right now, on things like the web, that we've collected.
Speaker 1 And we have these scalable algorithms like
Speaker 1 transformers that are capable of ingesting all of that information.
Speaker 1 And I don't see why you wouldn't start with a model as a kind of prior or to build on and to make predictions that helps bootstrap your learning.
Speaker 1 I just think it doesn't make sense not to make use of that. So my bet would be that
Speaker 1 the final AGI system will have these large multimodal models as part of
Speaker 1 the overall solution, but probably won't be enough on their own. You will need this additional planning search on top.
Speaker 2 Okay, this sounds like the answer to the question I'm about to ask, which is:
Speaker 2 as somebody who's been in this field for a long time and seen different trends come and go, what do you think that the strong version of the scaling hypothesis gets right and what does it get wrong?
Speaker 2 Just the idea that you just throw enough compute at a wide enough distribution of data and you get intelligence.
Speaker 1 Yeah, look, my view is this is kind of an empirical question right now.
Speaker 1 So I think it was pretty surprising to almost everyone, including the people who first worked on the scaling hypotheses, how far it's gone.
Speaker 1 In a way, I mean, I sort of look at the large models today and I think they're almost unreasonably effective for what they are. I think it's pretty surprising, some of the properties that emerge. It's clearly, in my opinion, got some form of concepts and abstractions and things like that. And I think if we were talking five-plus years ago, I would have said to you, maybe we need an additional algorithmic breakthrough in order to do that, you know, maybe more like the way the brain works.
Speaker 1 And I think that's still true if we want explicit abstract concepts, neat concepts. But it seems that these systems can implicitly learn that.
Speaker 1 Another really interesting, I think, unexpected thing was that these systems have some sort of grounding,
Speaker 1 you know, even though they don't experience the world multimodally, or at least until more recently, when we have the multimodal models.
Speaker 1 And it's surprising the amount of information, and the models, that can be built up just from language. And I have some hypotheses about why that is.
Speaker 1 I think we get some grounding through the RLHF feedback systems, because obviously the human raters are, by definition, grounded people. We're grounded in reality, so our feedback is also grounded. So perhaps there's some grounding coming in through there. And also maybe language contains more grounding, if you're able to ingest all of it, than we, or linguists, perhaps thought before. So there are actually some very interesting philosophical questions that people haven't even really scratched the surface of yet, looking at the advances that have been made. It's quite interesting to think about where it's going to go next. But in terms of your question about the large models, I think we've got to push scaling as hard as we can.
Speaker 1 And that's what we're doing here. And, you know, it's an empirical question whether that will hit an asymptote or a brick wall.
Speaker 1 And there are, you know, different people that argue about that.
Speaker 1 But actually, I think we should just test it. I think no one knows.
Speaker 1 But in the meantime, we should also double down on innovation and invention. And this is something that
Speaker 1 Google Research and DeepMind and Google Brain have pioneered: we've pioneered many, many things over the last decade. That's our bread and butter. You can think of half our effort as to do with scaling and half our effort as to do with inventing the next architectures and the next algorithms that will be needed, knowing that you've got these larger and larger scaled models coming along the line. So my betting right now, but it's a loose bet, is that you would need both. But I think, you know, you've got to push both of them as hard as possible.
Speaker 1 And we're in a lucky position that we can do that. Yeah.
Speaker 2 I want to ask more about the grounding. So you can imagine two things that might change, which would make the grounding more difficult.
Speaker 2 One is that as these models get smarter, they're going to be able to operate in domains where we just can't generate enough human labels just because we're not smart enough, right?
Speaker 2 So if it does like a million-line pull request, you know, how do we tell it, like, this is within the constraints of our morality and the end goal we wanted, and this isn't?
Speaker 2 And the other is, it sounds like you're saying more of the compute. So far we've been doing next token prediction.
Speaker 2 And in some sense, it's a guardrail because you have to talk as a human would talk and think as a human would think.
Speaker 2 Now, if additional compute is going to come in the form of reinforcement learning where it just gets the objective, we can't really trace how you got there.
Speaker 2 When you combine those two, how worried are you that the sort of grounding goes away?
Speaker 1 Well, look, I think
Speaker 1 if the grounding, you know, if it's not properly grounded, the system won't be able to achieve those goals properly, right? I think so.
Speaker 1 I think in a sense, you sort of have to have the grounding or at least some of it in order for a system to actually achieve goals in the real world.
Speaker 1 I do actually think that as these systems and things like Gemini are becoming more multimodal and we start ingesting things like video and
Speaker 1 audio-visual data as well as text data, and then the system starts correlating those things together,
Speaker 1 I think that is a form of proper grounding, actually.
Speaker 1 So I do think our systems are going to start to understand you know, the physics of the real world better.
Speaker 1 And then one could imagine the active version of that is being in a very realistic simulation or game environment where you're starting to learn about what your actions do in the world and how that affects
Speaker 1 the world itself, the world state itself, but also what next learning episode you're getting.
Speaker 1 So, you know, these RL agents we've always been working on and pioneered, like AlphaZero and AlphaGo, they're actually active learners.
Speaker 1 What they decide to do next affects what the next learning piece of data or experience they're going to get. So there's this very interesting sort of feedback loop.
Speaker 1 And of course, if we ever want to be good at things like robotics, we're going to have to understand how to act in the real world.
Speaker 2 Yeah. So there's a grounding in terms of will the capabilities be able to proceed? Or will they be like enough in touch with the reality to be able to do the things we want?
Speaker 2 And there's another sense of grounding of we've gotten lucky in the sense that since they're trained on human thought, they like maybe think like a human.
Speaker 2 To what extent does that stay true when more of the compute for training comes from just, did you get the right outcome, and isn't guardrailed by, are you proceeding to the next token as a human would?
Speaker 2 Maybe the broader question I'll like pose to you is:
Speaker 2 and this is what I asked Shane as well: what would it take to align a system that's smarter than a human? Maybe it thinks in alien concepts,
Speaker 2 and you can't like really monitor the million-line pull requests because you can't really understand the whole thing.
Speaker 1 Yeah, you can't give labels. Look, this is something Shane and I, and many others here, have had at the forefront of our minds since before we started DeepMind.
Speaker 1 And that's because we planned for success. Crazy as it sounds, back in 2010 no one was thinking about AI, let alone AGI. But we already knew that if we could make progress with these systems and these ideas,
Speaker 1 you know, the technology that would be created would be unbelievably transformative. So we already were thinking 20 years ago about, well,
Speaker 1 what would the consequences of that be, both positive and negative?
Speaker 1 Of course, the positive direction is amazing science, things like AlphaFold, incredible breakthroughs in health and science and maths and discovery, scientific discovery.
Speaker 1 But then also, we've got to make sure these systems are sort of understandable and controllable.
Speaker 1 And I think there are several, you know, this would be a whole discussion in itself, but there are many, many ideas that people have, starting with much more stringent eval systems.
Speaker 1 I think we don't have good enough evaluations and benchmarks for things like, can the system deceive you? Can it exfiltrate its own code? Sort of undesirable behaviors.
Speaker 1 And then there's,
Speaker 1 you know, ideas of actually using AI, maybe narrow AIs, so not general learning ones, but systems that are specialized for a domain to help us as the human scientists analyze and summarize what the more general system is doing.
Speaker 1 So kind of narrow AI tools.
Speaker 1 I think that there's a lot of promise in creating hardened sandboxes, or simulations that are hardened with cybersecurity
Speaker 1 arrangements around the simulation, both to keep the AI in, but also as cybersecurity to keep hackers out.
Speaker 1 And then you could experiment a lot more freely within that sandbox domain.
Speaker 1 And I think a lot of these ideas are, and there's many, many others, including the analysis stuff we talked about earlier, where can we analyze and understand what the concepts are that this system is building, what the representations are.
Speaker 1 So maybe they're not so alien to us and we can actually keep track of the kind of knowledge that it's building. Yep, yep.
Speaker 2 So, backing up a bit, I'm curious what your timelines are. Shane said his, like, modal outcome is 2028.
Speaker 2 I think that's maybe his median. What is yours?
Speaker 1 Yeah, well, I, you know,
Speaker 1 I don't ascribe kind of specific numbers to it, because I think there are so many unknowns and uncertainties, and
Speaker 1 human ingenuity and endeavor comes up with surprises all the time. So that could meaningfully move
Speaker 1 the timelines. But I will say that when we started DeepMind back in 2010, we thought of it as a 20-year project.
Speaker 1 And actually, I think we're on track, which is kind of amazing for 20-year projects because usually they're always 20 years away.
Speaker 1 So that's the joke about whatever it is: usually quantum, AI, take your pick.
Speaker 1 But
Speaker 1 I think we're on track. So I wouldn't be surprised if we had AGI-like systems within the next decade.
Speaker 2 And do you buy the model that once you have an AGI, you have a system that basically speeds up further AI research?
Speaker 2 Maybe not like an overnight sense, but over the course of months and years, you have much faster progress than you would have otherwise had.
Speaker 1 I think that's potentially possible.
Speaker 1 I think it partly depends on what we, as society, decide to use the first
Speaker 1 nascent AGI systems or even proto-AGI systems for.
Speaker 1 So, you know, even the current LLMs seem to be pretty good at coding, and we have systems like AlphaCode.
Speaker 1 We've also got theorem-proving systems. So one could imagine combining these ideas together
Speaker 1 and making them a lot better. And then I could imagine these systems being quite good at
Speaker 1 designing and helping us build future versions of themselves.
Speaker 1 But we also have to think about the safety implications of that, of course.
Speaker 2 Yeah, I'm curious what you think about that.
Speaker 2 So I mean, I'm not saying this is happening this year or anything, but eventually you'll be developing a model where, during the process of development, you think, you know, there's some chance that once this is fully developed, it'll be capable of like an intelligence explosion-like dynamic.
Speaker 2 What would have to be true of that model at that point where you're like,
Speaker 2 you know, I've seen these specific evals, I understand its internal thinking enough, and its future thinking, that I'm comfortable continuing development of the system.
Speaker 1 Well, look,
Speaker 1 we'd need a lot more understanding of the systems than we do today before I would even be confident of explaining to you what boxes we would need to tick there.
Speaker 1 So I think actually what we've got to do in the next few years in the time we have before those systems start arriving is come up with the right evaluations and metrics, and maybe ideally formal proofs, but it's going to be hard for these types of systems, but at least empirical
Speaker 1 bounds around what these systems can do. And that's why I think about things like deception as being quite root-node traits that you don't want.
Speaker 1 Because if you're confident that your system
Speaker 1 is sort of exposing what it actually thinks, then that opens up possibilities of using the system itself to explain aspects of itself to you.
Speaker 1 The way I think about that, actually, is like if I were to play a game of chess against Garry Kasparov, which I have in the past, or Magnus Carlsen, you know, the amazing chess players, the greatest of all time,
Speaker 1 I wouldn't be able to come up with a move that they could, but they could explain to me
Speaker 1 why they came up with that move. And I could understand it
Speaker 1 post hoc, right? And that's the sort of thing one could imagine.
Speaker 1 One of the capabilities of these systems that we could make use of is for them to explain it
Speaker 1 to us, and even maybe the proofs behind why they're thinking something. Certainly in a mathematical setting, for any mathematical problem.
Speaker 2 Got it.
Speaker 2 Do you have a sense of what the converse answer would be? So what would have to be true where tomorrow morning you're like, oh man, I didn't anticipate this.
Speaker 2 You see some specific observation tomorrow morning where like, we got to stop Gemini 2 training. Like
Speaker 2 what would that specifically be?
Speaker 1 Yeah, I could imagine that.
Speaker 1 And this is where, you know, things like the sandbox simulations come in. I would hope we're experimenting in a safe, secure environment, and then something happens in it where
Speaker 1 something very unexpected happens: a new unexpected capability, or something we explicitly told the system we didn't want, that it did anyway but then lied about.
Speaker 1 You know, these are the kinds of things where one would want to then dig in carefully
Speaker 1 now with the systems that are around today, which are not dangerous in my opinion today, but in a few years might have that potential.
Speaker 1 And then you would sort of ideally kind of pause and then really get to the bottom of why it was doing those things before one continued.
Speaker 2 Yeah. Going back to Gemini, I'm curious what the bottlenecks were in the development.
Speaker 2 Like why not make it immediately one order of magnitude bigger
Speaker 2 if scaling works?
Speaker 1 Well, look, first of all, there are practical limits. How much compute can you actually fit in one data center? And actually, you know, you're bumping up against very interesting,
Speaker 1 you know, distributed computing kind of challenges, right?
Speaker 1 When fortunately we have some of the best people in the world on those challenges and, you know, cross-data center training, all these kinds of things, very interesting challenges, hardware challenges.
Speaker 1 And we have our TPUs and so on that we're building and designing all the time, as well as using GPUs. And so there's all of that.
Speaker 1 And then there's also the scaling laws, you know, they don't just work by magic.
Speaker 1 You sort of, you still need to scale up the hyperparameters and various innovations are going in all the time with each new scale. It's not just about repeating the same recipe.
Speaker 1 At each new scale, you have to adjust the recipe. And that's a bit of an art form in a way.
Speaker 1 And you have to sort of almost get new data points.
Speaker 1 If you try and extend your predictions, extrapolate them, say, several orders of magnitude out, sometimes they don't hold anymore, right?
Speaker 1 Because new capabilities, they can be step functions in terms of new capabilities and
Speaker 1 some things hold and other things don't.
Speaker 1 So often you do need those intermediate data points actually to correct some of your hyperparameter optimization and other things so that the scaling law continues to be true.
Speaker 1 So there's sort of various practical limitations onto that.
Speaker 1 So, you know, one order of magnitude is probably about the maximum that
Speaker 1 you want to sort of do between each era.
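A small illustration of the scaling-law point above: you can fit a power law to loss-versus-compute points from smaller runs and extrapolate roughly one order of magnitude, but pushing the fit much further is risky, which is why intermediate runs are used to re-tune the recipe. All numbers below are made up.

```python
import numpy as np

# Hypothetical loss/compute pairs from smaller training runs.
compute = np.array([1e20, 3e20, 1e21, 3e21])
loss = np.array([2.90, 2.70, 2.50, 2.35])

# Fit loss ≈ a * compute**(-b) by linear regression in log-log space.
slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(log_a), -slope

# Extrapolating roughly one order of magnitude is usually reasonable; several
# orders out, the fit can break down as new capabilities appear and the
# hyperparameter recipe has to change, which is why intermediate runs matter.
print(round(a * (3e22) ** (-b), 3))
```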
Speaker 2 Oh, that's so fascinating.
Speaker 2 You know, in the GPT-4 technical report, they say that they were able to predict the training loss from runs with, you know, tens of thousands of times less compute than GPT-4, that they could see the curve.
Speaker 2 But the point you're making is that the actual capabilities that loss implies may not be so predictable.
Speaker 1 Yeah, the downstream capabilities sometimes don't follow. You can often predict the core metrics like training loss or something like that, but then it doesn't actually translate into MMLU or math or some other actual capability that you care about.
Speaker 1 They're not necessarily linear all the time. So there's sort of non-linear effects there.
Speaker 2 What was the biggest surprise to you during the development of Gemini of
Speaker 2 something like this happening?
Speaker 1 Well, I mean, I wouldn't say there was one big surprise, but it was very interesting, you know, trying to train things at that size and learning about
Speaker 1 all sorts of things, from the organizational side to how to babysit such a system and track it. And I think things like getting a better
Speaker 1 understanding of the metrics you're optimizing versus the final capabilities that you want. I would say that's still not a perfectly understood
Speaker 1 mapping, but it's an interesting one that we're getting better and better at.
Speaker 2 Yeah, yeah. There's a perception that maybe other labs are more compute-efficient than DeepMind has been with Gemini.
Speaker 2 I don't know what you make of that, if anything.
Speaker 1 I don't think that's the case. I mean, you know, it's
Speaker 1 I think that actually Gemini 1 used roughly the same amount of compute, maybe slightly more, than what was rumored for GPT-4. I don't know exactly what was used.
Speaker 1 So I think it was in the same ballpark.
Speaker 1 I think we're very efficient with our compute and we use our compute for many things.
Speaker 1 Not just the scaling, but going back to the earlier point about innovation and ideas: a new innovation, a new invention, is only useful if it can also scale.
Speaker 1 So in a way, you also need quite a lot of compute to do new invention, because you've got to test many things at some reasonable scale and make sure that they work at that scale.
Speaker 1 And also some new ideas may not work at a toy scale, but do work at a larger scale. And in fact, those are the more valuable ones.
Speaker 1 So you actually, if you think about that exploration process, you need quite a lot of compute to be able to do that.
Speaker 1 I mean, the good news is, I think, you know,
Speaker 1 we're pretty lucky at Google in that, I think this year certainly, we're going to have the most compute by far of any sort of research lab.
Speaker 1 And, you know, we hope to make very efficient and good use of that in terms of both scaling and the capability of our systems and also new inventions. Yeah.
Speaker 2 What's been the biggest surprise to you, if you go back to yourself in 2010 when you were starting DeepMind, in terms of what AI progress has looked like?
Speaker 2 Did you anticipate back then that it would, in some large sense, amount to dumping billions of dollars into these models, or did you have a different sense of what it would look like?
Speaker 1 We thought that. And actually,
Speaker 1 I know you've interviewed my colleague Shane and
Speaker 1 he always thought that in terms of
Speaker 1 compute curves and then maybe comparing it roughly to the brain and how many neurons and synapses there are very loosely.
Speaker 1 But we're actually interestingly in that kind of regime now, roughly in the right order of magnitude of number of synapses in the brain and
Speaker 1 the sort of compute that we have. But I think more fundamentally, you know, we always thought that we bet on generality and learning, right?
Speaker 1 So those were always at the core of any technique we would use. That's why we triangulated on reinforcement learning and search
Speaker 1 and deep learning, right? As three types of algorithms that would scale and
Speaker 1 would be very general and not require a lot of handcrafted human priors, which we thought was the sort of failure mode, really, of the efforts to build AI in the 90s, right?
Speaker 1 Places like MIT, where there were very, you know, logic-based systems, expert systems, masses of hand-coded, hand-crafted human information going into them that turned out to be wrong or too rigid.
Speaker 1 So we wanted to move away from that. And I think we spotted that trend early, and obviously we used games as our proving ground, and we did very well with that.
Speaker 1 And I think all of that was very successful.
Speaker 1 And I think maybe inspired others to, you know, things like AlphaGo, I think was a big moment for inspiring many others to think, oh, actually, these systems are ready to scale.
Speaker 1 And then, of course, with the advent of Transformers, invented by our colleagues at Google Research and Brain, that was then
Speaker 1 the type of deep learning that allowed us to ingest masses of information.
Speaker 1 And that, of course, has really turbocharged where we are today. So I think that's all part of the same lineage.
Speaker 1 We couldn't have predicted every twist and turn there, but I think the general direction we were going in
Speaker 1 was the right one.
Speaker 2 Yeah.
Speaker 2 And in fact, it's like fascinating because actually, if you read your old papers or Shane's old papers, Shane's thesis, I think in 2009, he said, well, you know, the way we would test for AI is whether it can compress Wikipedia.
Speaker 2 And that's like literally the loss function of our LLMs. Or like your own paper in like 2016, before Transformers, where you were comparing neuroscience and AI.
Speaker 2 And you said, attention is what is needed.
Speaker 1 Exactly. Yeah, yeah. Exactly. So we had these things called out.
And actually, we had some early attention papers,
Speaker 1 but they weren't as elegant as Transformers in the end, like neural Turing machines and things like this.
Speaker 1 And then Transformers was the nicer and more general architecture of that.
Speaker 2 Yeah, yeah, yeah.
Speaker 2 When you extrapolate all this out forward and you think about superhuman intelligence,
Speaker 2 what does that landscape look like to you? Is it like still controlled by a private company? What should the governance of that look like
Speaker 2 concretely?
Speaker 1 Yeah, look, I would love, you know, I think that this has to be,
Speaker 1 this is so consequential, this technology. I think it's much bigger than any one company
Speaker 1 or even industry in general. I think it has to be a big collaboration with many stakeholders from civil society, academia, government.
Speaker 1 And the good news is I think with the popularity of the recent chatbot systems and so on, I think that has woken up many of these other parts of society that this is coming and what it will be like to interact with these systems.
Speaker 1 And that's great. So it's opened up lots of doors for very good conversations.
Speaker 1 I mean, an example of that was the safety summit hosted in the UK a few months ago, which I thought was a big success in starting to get this international dialogue going.
Speaker 1 And, you know, I think the whole of society needs to be involved in deciding what do we want to deploy these models for, how do we want to use them, what do we not want to use them for.
Speaker 1 I think we've got to try and get some international consensus around that.
Speaker 1 And then also making sure that the benefits of these systems accrue to everyone, for the good of society in general. And that's why I push so hard on things like AI for science.
Speaker 1 And I hope that
Speaker 1 with things like our spin-out Isomorphic, we're going to start curing terrible diseases with AI, accelerating drug discovery, amazing things, and tackling climate change and other things.
Speaker 1 I think big challenges that face us and face humanity,
Speaker 1 massive challenges actually, which I'm optimistic we can solve because we've got this incredibly powerful tool of AI coming along down the line that we can apply to help us solve many of these problems.
Speaker 1 So, you know, ideally, we would have a big
Speaker 1 consensus around that and a big discussion, you know, at sort of almost the UN level, if possible.
Speaker 2 You know, one interesting thing is if you look at these systems, you chat with them and they're immensely powerful and intelligent.
Speaker 2 But it's interesting the extent to which they haven't automated large sections of the economy yet.
Speaker 2 Whereas if five years ago, I showed you Gemini, you'd be like, wow, this is like, you know, totally coming for a lot of things. So, how do you account for that?
Speaker 2 Like, what's going on where it hasn't had the broader impact yet?
Speaker 1 Yeah, I think that just shows we're still at the beginning of this new era.
Speaker 1 And I think that for these systems, I think there are some interesting use cases, you know,
Speaker 1 where you can use these chatbot systems to summarize stuff for you and maybe do some simple writing, maybe more kind of boilerplate-type writing.
Speaker 1 But that's only a small part of what, you know, we all do every day.
Speaker 1 So I think for more general use cases, I think we still need new capabilities, things like planning and search, but also maybe things like personalization and memory, episodic memory.
Speaker 1 So not just long context windows, but actually remembering
Speaker 1 what we spoke about 100 conversations ago.
Speaker 1 And I think once those start becoming in, I mean, I'm really looking forward to things like recommendation systems that help me find better, more enriching material, whether that's books or films or music and so on.
Speaker 1 You know, I would use that type of system every day. So I think we're just scratching the surface of what these AI, say, assistants could actually do for us in our general everyday lives.
Speaker 1 And also in our work context as well. I think they're not yet reliable enough to do things like science with them.
Speaker 1 But I think one day, you know, once we fix factuality and grounding and other things, I think they could end up becoming like, you know, the world's best research assistant for you as a scientist or
Speaker 1 as a clinician.
Speaker 2 I want to ask about memory, by the way. You had this fascinating paper in 2007 where you talked about the links between memory and imagination and how they, in some sense, are very similar.
Speaker 2 People often claim that these models are just memorizing. How do you think about that claim that people make?
Speaker 2 Is memorization all you need? Because in some deep sense, that's compression. What's your intuition here?
Speaker 1 Yeah, I mean, sort of at the limit, one maybe could try and memorize everything, but it wouldn't generalize out of distribution.
Speaker 1 And I think these systems are clearly, I think
Speaker 1 the early
Speaker 1 criticisms of these early systems were that they were just regurgitating and memorizing. But I think clearly in the new era, the Gemini and GPT-4 type era, they are definitely generalizing to new constructs.
Speaker 1 But actually,
Speaker 1 my thesis, and that paper particularly, which started that area of imagination research in neuroscience, showed that, you know, first of all, memory, certainly at least human memory, is a reconstructive process.
Speaker 1 It's not a videotape, right? We sort of put it back together from components that seem familiar to us, as an ensemble.
Speaker 1 And that's what made me think that imagination might be the same thing, except in this case, you're using the same semantic components, but now you're putting it together into a way that your brain thinks is novel, right?
Speaker 1 For a particular purpose, like planning. And
Speaker 1 so I do think that that kind of idea is still probably missing from our current systems, this sort of pulling together different
Speaker 1 parts of your world model to simulate something new that then helps with your planning, which is what I would call imagination.
Speaker 2 Yeah, for sure. So again, now you guys have the best models in the world,
Speaker 2 you know, with the Gemini models.
Speaker 2 Do you plan on putting out some sort of framework like the other two major AI labs have, of, you know, once we see these specific capabilities, unless we have these specific safeguards, we're not going to continue development or we're not going to ship the product?
Speaker 1 Yes, we have, actually. I mean, we already have lots of internal checks and balances, but we're going to start publishing. Sort of watch this space: we're working on a whole bunch of blog posts and technical papers that we'll be putting out in the next few months, along similar lines to things like responsible scaling laws and so on. We have those implicitly internally, and various safety councils and so on, which people like Shane chair. But it's time for us to talk about that more publicly, I think.
Speaker 1 So we'll be doing that throughout the course of the year.
Speaker 2 That's great to hear.
Speaker 2 And another thing I'm curious about is, so it's not only the risk of like, you know, the deployed model being something that people can use to do bad things, but also rogue actors, foreign agents, so forth, being able to steal the weights and then fine-tune them to do crazy things.
Speaker 2 How do you think about securing the weights to make sure something like this doesn't happen? Making sure only a very, like, key group of people have access to them, and so forth.
Speaker 1 Yeah, it's interesting. So first of all, there are sort of two parts to this.
Speaker 1 One is security, one is open source, which maybe we can discuss. But the security, I think, is super key.
Like just a sort of
Speaker 1 normal type of security things. And I think we're lucky at Google DeepMind.
Speaker 1 We're kind of behind Google's firewall and cloud protection, which is, you know, I think best in class in the world corporately. So we already have that protection.
Speaker 1 And then behind that, we have specific
Speaker 1 DeepMind protections within our code base. So it's sort of a double layer of protection.
Speaker 1 So I feel pretty good about that.
Speaker 1 I mean, you know, you can never be complacent on that, but I feel it's already sort of best in the world in terms of cyber defenses. But we've got to carry on improving that, and again, things like the hardened sandboxes could be a way of doing that as well. And maybe there are even, you know, specifically secure data centers or hardware solutions to this too that we're thinking about. I think that maybe in the next three, four, five years, we would also want air gaps and various other things that are known in the security community.
Speaker 1 So I think that's key. And I think all frontier labs should be doing that, because otherwise, you know, rogue nation states and other dangerous actors,
Speaker 1 there would be obviously a lot of incentive for them to steal things like the weights.
Speaker 1 And then, you know, of course, open source is another interesting question, which is we're huge proponents of open source and open science.
Speaker 1 I mean, almost every, you know, we've published thousands of papers and things like AlphaFold and Transformers, of course, and AlphaGo, all of these things we put out there into the world,
Speaker 1 published and open source, many of them, GraphCast most recently, our weather prediction system. But when it comes to
Speaker 1 the core technology, the foundational technology that's very general-purpose, I think the question I would have
Speaker 1 for the
Speaker 1 sort of open source proponents is: how does one
Speaker 1 stop bad actors, from individuals up to rogue states,
Speaker 1 taking those same open source systems and repurposing them, because they're general-purpose, for harmful ends, right? So we have to answer that question.
Speaker 1 And I haven't heard a compelling, I mean, I don't know what the answer is to that, but I haven't heard a compelling, clear answer to that from
Speaker 1 proponents of just sort of open-sourcing everything. So I think there has to be some balance there, but obviously it's a complex question as to what that is.
Speaker 2 Yeah, yeah.
Speaker 2 I feel like tech doesn't get the credit it deserves for, like, funding, you know, hundreds of billions of dollars worth of R&D.
Speaker 2 And, you know, obviously you have DeepMind with systems like AlphaFold and so on.
Speaker 2 But when we talk about securing the weights, as we said, maybe right now it's not something that is going to cause the end of the world or anything.
Speaker 2 But as these systems get better and better, the worry that some foreign agent or something gets access to them, presumably right now, there's dozens to hundreds of researchers who have access to the weights.
Speaker 2 What's the plan for getting the weights into that kind of situation room, so that if you need access to them, it's some extremely strenuous process?
Speaker 2 No individual can really take them out.
Speaker 1 Yeah, yeah. I mean, one has to balance that with allowing for collaboration and speed of progress.
Speaker 1 Actually, another interesting thing is, of course, you want,
Speaker 1 you know, brilliant, independent researchers from academia or things like the UK AI Safety Institute and the US one to be able to
Speaker 1 kind of red team these systems. So one has to expose them to a certain extent, although that's not necessarily the weights.
Speaker 1 And then, you know, we have a lot of processes in place to make sure that only those people who need access to the weights have access.
Speaker 1 And right now, I think we're still in the early days of those kinds of systems being at risk. And as these systems become more powerful and more general and more capable,
Speaker 1 I think one has to look at the access question.
Speaker 2 So, some of these other labs have specialized in different things relative to safety, like Anthropic, for example, with interpretability. And
Speaker 2 do you have some sense of where you guys might have an edge?
Speaker 2 So that, you know, now that you have the frontier model and you're going to scale up safety, where are you guys going to be able to put out the best frontier research on safety?
Speaker 1 I think, you know, well, we helped pioneer RLHF and other things like that, which can also be obviously used for performance, but also for safety.
Speaker 1 I think that,
Speaker 1 you know, a lot of the self-play ideas and these kinds of things could also be used potentially to auto-test a lot of
Speaker 1 the boundary conditions that you have with the new systems.
Speaker 1 I mean, part of the issue is that with these sort of very general systems, there's so much surface area to cover, like about how these systems behave.
Speaker 1 So I think we are going to need some automated testing. And again, with things like simulations and games, very realistic environments,
Speaker 1 virtual environments, I think we have a long history in that and using those kinds of systems and making use of them for building AI algorithms. So I think we can leverage all of that history.
Speaker 1 And then around Google, we're very lucky: we have some of the world's best cybersecurity experts and hardware designers. So I think we can bring that to bear
Speaker 1 for security and safety as well.
Speaker 2
Great, great. Let's talk about Gemini.
Yeah. So
Speaker 2 now you guys have the best model in the world.
Speaker 2 So
Speaker 2 I'm curious, you know, the default way to interact with these systems has been through chat
Speaker 2 so far. Now that we have multimodal and all these new capabilities, how do you anticipate that changing? Or do you think that'll still be the case?
Speaker 1 Yeah, I think we're just at the beginning of actually understanding what a fully multimodal system
Speaker 1 might be like to interact with, and how exciting that might be. And
Speaker 1 it'll be quite different to, I think, what we're used to today with the chatbots. I think
Speaker 1 with the next versions of this over the next year, 18 months, you know, maybe we'll have some contextual understanding of the environment around you, through a camera or whatever it is, a phone.
Speaker 1 You know, I could imagine that on glasses as the next step.
Speaker 1 And then I think that we'll start becoming more fluid in understanding, oh,
Speaker 1 let's sample from a video, let's use voice,
Speaker 1 maybe even eventually things like touch and, if you think about robotics and other things, other types of sensors.
Speaker 1 So I think the world's about to become very exciting, I think, in the next few years as we start getting used to the idea of what true multimodality means.
Speaker 2 On the robotic subject, Ilya said when he was on the podcast that the reason OpenAI gave up on robotics was because they didn't have enough data in that domain, at least at the time they were pursuing it.
Speaker 2 I mean, you guys have put out different things like Robotransformer and other things.
Speaker 2 Do you think that's still a bottleneck for robotics progress, or will we see progress in the world of atoms as well as the world of bits?
Speaker 1 Yeah, well, we're very excited about our progress with things like Gato and
Speaker 1 RT-2, the robotics transformer. And we actually think,
Speaker 1 so we've always liked robotics, and we've had amazing research and we still have that going now, because we like the fact that it's a data-poor regime. That pushes us in very interesting research directions that we think are going to be useful anyway: sample efficiency and data efficiency in general, transfer learning, learning from simulation and transferring that to reality, sim-to-real, all of these very interesting,
Speaker 1 actually general challenges that we would like to solve.
Speaker 1
So the control problem. So we've always pushed hard on that.
And actually, I think,
Speaker 1 so Ilya's right, that that is more challenging because of the data problem.
Speaker 1 But it's also, I think, we're starting to see the beginnings of these large models being transferable to the robotics regime, learning in the general domain, language domain, and other things.
Speaker 1 And then just treating tokens, like Gato does, as any type of token. You know, the token could be an action, it could be a word, it could be part of an image, a pixel, or whatever it is.
Speaker 1 And that's what I think true multimodality is. And to begin with, it's harder to train a system like that than a straightforward text language system.
Speaker 1 But actually,
Speaker 1 going back to our earlier conversation about transfer learning, you start seeing that in a true multimodal system, the other modalities benefit each modality.
Speaker 1 So you get better at language because you now understand a little bit about video. So I do think it's harder to get going, but actually, ultimately, we'll have a more general, more capable system like that.
Speaker 2 Whatever happened to Gato? That was super fascinating, that you could have it play games and also do video and also do...
Speaker 1 Yeah, we're still working on those kinds of systems, but you can imagine we're trying to build those ideas into our future generations of Gemini, you know, to be able to do all of those things. And robotics transformers and things like that, you can think of them as sort of follow-ups to that.
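As a rough illustration of the "everything is a token" idea behind Gato-style models, here is a minimal sketch, assuming made-up vocabulary sizes and helper names (`text_tokens`, `image_tokens`, `action_tokens`) rather than the actual Gato implementation: each modality is discretized into one shared flat vocabulary so a single autoregressive sequence model can predict actions, words, and image patches alike.

```python
# Sketch of mapping text, image patches, and actions into one token space.
# An illustrative simplification, not the actual Gato implementation.
TEXT_VOCAB = 32_000        # assumed subword vocabulary size
IMAGE_BINS = 1_024         # assumed discretization bins for image patches
ACTION_BINS = 1_024        # assumed discretization bins for continuous actions

# Offsets carve one flat vocabulary into per-modality ranges.
TEXT_OFFSET = 0
IMAGE_OFFSET = TEXT_OFFSET + TEXT_VOCAB
ACTION_OFFSET = IMAGE_OFFSET + IMAGE_BINS
VOCAB_SIZE = ACTION_OFFSET + ACTION_BINS

def text_tokens(subword_ids):
    return [TEXT_OFFSET + i for i in subword_ids]

def image_tokens(patch_bins):
    return [IMAGE_OFFSET + b for b in patch_bins]

def action_tokens(action_bins):
    return [ACTION_OFFSET + b for b in action_bins]

# An interleaved episode: observe (image), read an instruction (text), act.
episode = (
    image_tokens([12, 507, 33])       # discretized image patches
    + text_tokens([101, 7, 2045])     # an instruction as subword ids
    + action_tokens([88, 412])        # discretized robot actions
)
# A single autoregressive transformer would then be trained to predict the
# next token of `episode`, whatever modality it happens to belong to.
print(len(episode), "tokens drawn from a vocabulary of", VOCAB_SIZE)
```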
Speaker 2 Will we see asymmetric progress towards the domains in which the self-play kinds of things you're talking about will be especially powerful?
Speaker 2 So, math and code. Obviously, recently you have these papers out
Speaker 2 where you can use these things to do
Speaker 2 really cool, novel things.
Speaker 2 Will they just be superhuman coders, but in other ways still worse than humans? How do you think about that sort of thing?
Speaker 1 Yeah, so look, I think that
Speaker 1 we're making great progress with math and
Speaker 1 things like theorem proving and coding.
Speaker 1 But it's still interesting.
Speaker 1 You know, if one looks at, I mean, creativity in general and scientific endeavor in general, I think we're getting to the stage where our systems could help the best human scientists make their breakthroughs quicker, like almost triage the search space in some ways, or perhaps find a solution like AlphaFold does with a protein structure.
Speaker 1 But they're not at the level where they can create a hypothesis themselves or ask the right question.
Speaker 1 And as any top scientist will tell you, that's the hardest part of science: actually asking the right question, boiling down that space to what's the critical question we should go after, the critical problem, and then formulating that problem in the right way to attack it.
Speaker 1 And that's not something we really have any idea how our systems could do. But they are suitable for searching large combinatorial spaces, if one can specify the problem in that way with a clear objective function.
Speaker 1 So that's very useful for already many of the problems we deal with today, but not the most high-level creative problems.
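To make the "searching large combinatorial spaces with a clear objective function" point concrete, here is a minimal sketch using a toy space, a toy scoring rule, and a plain beam search; everything here is an illustrative assumption, and in real systems like AlphaFold the proposal and scoring steps are learned models rather than hand-written rules.

```python
# Toy example: once a problem is posed as search over a combinatorial space
# with a clear objective, generic search machinery applies.
import itertools

ALPHABET = "ACGU"  # toy space: short RNA-like strings

def objective(candidate: str) -> float:
    """Stub scoring function: rewards G-C content (purely illustrative)."""
    return sum(c in "GC" for c in candidate) / len(candidate)

def combinatorial_search(length: int, beam_width: int = 4) -> str:
    # Beam search: keep only the best `beam_width` partial solutions per step,
    # instead of enumerating all len(ALPHABET) ** length candidates.
    beam = [""]
    for _ in range(length):
        expanded = [prefix + c for prefix, c in itertools.product(beam, ALPHABET)]
        expanded.sort(key=objective, reverse=True)
        beam = expanded[:beam_width]
    return beam[0]

if __name__ == "__main__":
    best = combinatorial_search(length=8)
    print(best, objective(best))
```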
Speaker 2 So DeepMind obviously has published all kinds of interesting stuff on speeding up science in different areas.
Speaker 2 How do you think about that in the context of if you think AGI is going to happen in the next 10, 20 years, why not just wait for the AGI to do it for you?
Speaker 2 Why build these domain-specific solutions?
Speaker 1 Well, I think
Speaker 1 we don't know how far away AGI is going to be. And we always used to say, you know, back even when we started DeepMind, that
Speaker 1 we don't have to wait for AGI in order to bring incredible benefits to the world.
Speaker 1 And
Speaker 1 especially, you know, my personal passion has been AI for science and health.
Speaker 1 And you can see that with things like AlphaFold and all of our various Nature papers in different domains, our materials science work and so on. I think there's lots of exciting directions.
Speaker 1 And also impact in the world through products, too. I think it's very exciting, and a huge, unique opportunity we have as part of Google:
Speaker 1 they've got dozens of billion-user products, right, that we can immediately ship our advances into.
Speaker 1 And then it improves, enriches, and enhances the daily lives of billions of people.
Speaker 1 So I think it's a fantastic opportunity for impact on all those fronts. And I think the other reason from a point of view of AGI specifically is that it battle tests your ideas, right?
Speaker 1 So you don't want to be in a sort of research bunker where you're just pushing things forward theoretically, but then your internal metrics start deviating from
Speaker 1 the real-world things that people would care about, or from real-world impact.
Speaker 1 So you get a lot of direct feedback from these real-world applications, which then tells you whether your systems really are scaling, or whether we need to be more data efficient or sample efficient.
Speaker 1 Because most real-world challenges require that, right? So it kind of keeps you honest, and it keeps nudging and steering your research directions to make sure they're on the right path. So I think it's fantastic. And of course the world benefits from that, society benefits from that, along the way, maybe many, many years before AGI arrives.
Speaker 2 Well, the development of Gemini is super interesting because it comes right on the heels of merging these different organizations, Brain and DeepMind. I'm curious, what have been the challenges there? What have been the synergies? And it's been successful in the sense that you have the best model in the world now.
Speaker 2 What's it been like?
Speaker 1 Well, look, it's been fantastic actually over the last year. Of course, it's been challenging to do that, like any big integration coming together.
Speaker 1 But you're talking about two world-class organizations,
Speaker 1 long-storied histories of inventing many, many important things,
Speaker 1
from deep reinforcement learning to transformers. And so it's very exciting actually pulling all of that together and collaborating much more closely.
We always used to be collaborating, but more
Speaker 1 on a sort of project-by-project basis versus a much deeper, broader collaboration like we have now.
Speaker 1 And Gemini is the first fruit of that collaboration, including the name Gemini, actually, implying twins.
Speaker 1 And of course, a lot of other things are made more efficient, like pooling compute resources, ideas, and engineering, which matters at the stage we're at now, where a huge amount of world-class engineering has to go on to build the frontier systems.
Speaker 1 I think it makes sense to coordinate that more closely. Yeah.
Speaker 2 So, I mean, you and Shane started DeepMind partly because you were concerned about safety.
Speaker 2 You saw AGI coming as like a live possibility.
Speaker 2 Do you think the people who were formerly part of Brain, who are now the other half of Google DeepMind, approach it in the same way?
Speaker 2 Have there been cultural differences there in terms of that question?
Speaker 1 Yeah, no, I think overall, and this is why, you know, I think one of the reasons we joined forces with Google back in 2014 is I think the entirety of Google and Alphabet, not just Brain and DeepMind, take these questions very seriously of responsibility.
Speaker 1 And, you know, our kind of mantra is to try and be bold and responsible with these systems. So, you know,
Speaker 1 I would class myself as obviously a huge techno-optimist, but I want us to be cautious with that, given the transformative power of what we're bringing into the world, you know, collectively.
Speaker 1 And I think it's important. You know, AGI is maybe one of the most important technologies humanity will ever invent.
Speaker 1 So we've got to put, you know, all our efforts into getting this right and to be thoughtful and sort of also humble about what we know and don't know about the systems that are coming and the uncertainties around that.
Speaker 1 And in my view,
Speaker 1 the only sensible approach when you have huge uncertainty is to be sort of cautiously optimistic and use the scientific method to try and have as much foresight and understanding about what's coming down the line and the consequences of that before it happens.
Speaker 1 You know, you don't want to be live A-B testing out in the world with these very consequential systems because unintended consequences may be quite severe. So,
Speaker 1 you know, I want us to move away
Speaker 1 as a field from a sort of move fast and break things attitude, which has, you know, maybe served the valley very well in the past and obviously created
Speaker 1 important innovations.
Speaker 1 But I think in this case, you know, we want to be
Speaker 1 bold with the positive things that it can do, and make sure we realize things like advances in medicine and science, whilst being
Speaker 1 responsible and thoughtful about
Speaker 1 mitigating the risks as far as possible.
Speaker 2
Yeah, yeah. And that's why it seems like responsible scaling policies, or something like that, are a very good empirical way to pre-commit to these kinds of things.
Yes, exactly. Yeah.
Speaker 2 And I'm curious if you have a sense of, for example, when you're doing these evaluations, if it turns out your next model could help a layperson build a pandemic-class bioweapon or something,
Speaker 2 how you would think, first of all, of making sure those weights are secure so that that doesn't get out? And second, what would have to be true for you to be comfortable deploying that system?
Speaker 2 How would you make sure that that latent capability isn't exposed?
Speaker 1 Yeah, well, first, I mean, the securing-the-model part I think we've covered with the cybersecurity, making sure that's handled well and you're monitoring all those things.
Speaker 1 If a capability like that was discovered through red teaming or external testing by, you know,
Speaker 1 government institutes or academia or whatever, independent testers, then we would have to fix that loophole, depending on what it was, right? Whether that required
Speaker 1 a different kind of constitution perhaps, or different guardrails, or more RLHF to avoid it, or removing some training data.
Speaker 1 I mean, depending on what the problem is, I think there could be a number of mitigations. And so the first part is making sure you detect it ahead of time.
Speaker 1
So that's about the right evaluations and right benchmarking and right testing. And then the question is how one would fix that before you deployed it.
Sure, sure.
Speaker 1 But I think it would need to be fixed before it was deployed generally, for sure, if that was an exposed surface.
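As a sketch of how such a pre-deployment check might be wired up, here is a minimal capability-gating loop; the eval names, scores, thresholds, and model identifier are all hypothetical placeholders, not Google DeepMind's actual evaluation framework.

```python
# Minimal sketch of a pre-deployment capability gate: run dangerous-capability
# evals, and block deployment until every flagged result is mitigated.
from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str
    score: float       # e.g. uplift a layperson gets from the model, 0..1
    threshold: float   # maximum acceptable score before mitigation is required

def run_evals(model_id: str) -> list[EvalResult]:
    """Stub: would call internal and external (red-team) eval suites."""
    return [
        EvalResult("bio_uplift", 0.12, 0.20),      # illustrative numbers only
        EvalResult("cyber_offense", 0.31, 0.25),
        EvalResult("autonomy", 0.05, 0.30),
    ]

def deployment_gate(model_id: str) -> bool:
    failures = [r for r in run_evals(model_id) if r.score >= r.threshold]
    for r in failures:
        # In practice: guardrails, RLHF, data removal, then re-run the evals.
        print(f"BLOCKED: {r.name} score {r.score:.2f} >= threshold {r.threshold:.2f}")
    return not failures

if __name__ == "__main__":
    if deployment_gate("example-model"):
        print("All capability evals under threshold; deployment may proceed.")
```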
Speaker 2 Right, right.
Speaker 2 Final question.
Speaker 2 You know, back in 2010 you were thinking in terms of the end goal of AGI, at a time when other people thought that was ridiculous.
Speaker 2 Now that we're seeing this slow takeoff, where we're actually seeing this generalization and intelligence,
Speaker 2 what has that been like psychologically? Has it just been sort of priced into your world model, so it's not new news for you?
Speaker 2 Or, actually seeing it live, are you like, wow,
Speaker 2 something's really changed? What does it feel like?
Speaker 1 Yeah, well, for me,
Speaker 1 yes, it's already priced into my world model of how things were going to go, at least from the technology side. But obviously,
Speaker 1 we didn't necessarily anticipate that the general public would be this interested this early in the sequence of things, right?
Speaker 1 Like, maybe one could think of it this way: if, say, ChatGPT and chatbots hadn't gotten the kind of interest they ended up getting, which I think was quite surprising to everyone, that people were ready to use these things even though they were lacking in certain directions, right?
Speaker 1 Impressive though they are,
Speaker 1 then we would have produced more specialized systems, I think, built off of the main track, like AlphaFold and AlphaGo and so on, and our scientific work. And then
Speaker 1 I think
Speaker 1 the general public maybe
Speaker 1 would have only paid attention later down the road, in a few years' time, when we have more generally useful assistant-type systems. So that's been interesting.
Speaker 1 So that's created a different type of environment that we're now all operating in
Speaker 1 as a field.
Speaker 1 And it's a little bit more chaotic, because there are so many more things going on and there's so much VC money going into it, and everyone's sort of almost losing their minds over it, I think. And
Speaker 1 the only thing I worry about is, I want to make sure that as a field we act responsibly and thoughtfully and scientifically about this, and use the scientific method to approach this in,
Speaker 1 as I said, an optimistic but careful way. I've always believed that's the right approach for something like AI.
And I just hope that doesn't get lost in this huge rush.
Speaker 2
Sure, sure. Well, I think that's a great place to close.
Demis, thank you so much for your time and for coming on the podcast.
Speaker 1 Thanks. It's been a real pleasure.
Speaker 2
Hey, everybody. I hope you enjoyed that episode.
As always, the most helpful thing you can do is to share the podcast. Send it to people you think might enjoy it.
Speaker 2
Put it on Twitter, your group chats, et cetera. Just blitz the world.
I appreciate you listening. I'll see you next time.
Cheers.