National Security Strategy and AI Evals on the Eve of Superintelligence with Dan Hendrycks
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @DanHendrycks
Show Notes:
0:00 Introduction
0:36 Dan’s path to focusing on AI Safety
1:25 Safety efforts in large labs
3:12 Distinguishing alignment and safety
4:48 AI’s impact on national security
9:59 How might AI be weaponized?
14:43 Immigration policies for AI talent
17:50 Mutually assured AI malfunction
22:54 Policy suggestions for current administration
25:34 Compute security
30:37 Current state of evals
Press play and read along
Transcript
Speaker 1 Hi, listeners, and welcome back to No Priors. Today, I'm with Dan Hendrycks, AI researcher and director of the Center for AI Safety.
Speaker 1 He's published papers and widely used evals, such as MMLU, and most recently, Humanity's Last Exam.
Speaker 1 He's also published Superintelligence Strategy, alongside authors including former Google CEO Eric Schmidt and Scale founder Alex Wang.
Speaker 1 We talk about AI safety and geopolitical implications, analogies to nuclear, compute security, and the state of evals. Dan, thanks for doing this.
Speaker 2 Glad to be here.
Speaker 1 How did you end up working on AI safety?
Speaker 2 AI was pretty clearly going to be a big deal if one just thought it through to its conclusion. So early on,
Speaker 2 it seemed like other people were ignoring it because it was weird or not that pleasant to think about.
Speaker 2 It's hard to wrap your head around, but it seemed like the most important thing this century. So I thought that would be a good place to devote my career.
Speaker 2 And so that's why I started on it early on. And then since it'd be such a big deal,
Speaker 2 we need to make sure that we can think about it properly, channel it in a productive direction, and take care of some of the tail risks, which are generally and systematically underaddressed. So
Speaker 2 that's why I got into it. It's a big deal, and people weren't really doing much about it at the time.
Speaker 1 And what do you think of as the center's role versus safety efforts within the large labs?
Speaker 2 Well, there aren't that many safety efforts in the labs even now. I mean, I think the labs can just focus on doing some very basic measures to refuse
Speaker 2 queries related to, like, help me make a virus, and things like that. But I don't think labs have an extremely large role in safety overall or in making this go well.
Speaker 2 They're kind of predetermined to race.
Speaker 2 They can't really choose not to unless they would no longer be a relevant company in the arena. I think they can reduce terrorism risks or some accidents.
Speaker 2 But beyond that, I don't think they can dramatically change the outcomes in too substantial a way,
Speaker 2 because a lot of this is geopolitically determined. Even if companies decide to act very differently,
Speaker 2 there's the prospect of competing with China, and maybe Russia will become relevant later. As that happens, this constrains their behavior substantially.
Speaker 2 So I've been interested in tackling AI at multiple levels. There are things companies can do to have some very basic anti-terrorism safeguards, which are pretty easy to implement.
Speaker 2 There's also the economic effects that will need to be managed well, and companies can't really change how that goes either.
Speaker 2 It's going to cause mass disruptions to labor and automate a lot of digital labor. If they tinker with a design choice or add some different refusal data, it doesn't change that fact.
Speaker 2 Safety is about making AI go well, and that risk management is a much broader problem.
Speaker 2 It's got some technical aspects, but I think that's a small part of it.
Speaker 1 I don't know that the leaders of the labs would say, like, we can do nothing about this, but maybe it's also a question of, you know, everybody also has like equity in this equation, right?
Speaker 1 Maybe it's also a question of semantics. Like, can you describe how you think of the difference between like alignment and safety as you think about it?
Speaker 2 I'm just using safety as a sort of catch-all for like dealing with risks. There are other risks, like if you never get
Speaker 2 really intelligent AI systems, that poses some risks in itself.
Speaker 2 There are other sorts of risks that are not necessarily technical, like concentration of power. So, I view the distinction between alignment and safety
Speaker 2 as alignment being a sort of subset of safety. Obviously, you want the value systems of the AIs to be in keeping with or compatible with,
Speaker 2 say, the U.S. public for U.S. AIs, or with you as an individual. But that doesn't necessarily make it safe.
Speaker 2 If you have an AI that's reliably obedient or aligned to you, this doesn't make everything work totally well. China can have AIs that are totally aligned with them.
Speaker 2 The U.S. can have AIs that are totally aligned with them. You still are going to have a strategic competition between the two.
Speaker 2 They're going to need to integrate it into their militaries, and they're probably going to need to integrate it really quickly.
Speaker 2 This competition is going to force them to have a higher risk tolerance in the process.
Speaker 2 So, even if the AIs are doing their principals' bidding reliably, this doesn't necessarily make the overall situation perfectly fine.
Speaker 2 I think it's not just a question of reliability or whether they do what you want. There are other structural pressures that cause this to be riskier, like the geopolitics.
Speaker 1 At the highest level, it's like a bundle of weights, increasingly capable. Why do we care about AI from a national security perspective?
Speaker 1 Like, what's the most practical way it matters in geopolitics or gets used as a weapon?
Speaker 2 I think that AI isn't that powerful currently in many respects. So, in many ways, it's not actually that relevant for national security currently.
Speaker 2 This could well change within a year's time.
Speaker 2 I think, generally, I've been focused on the trajectory that it's on, as opposed to saying right now it is extremely concerning. That said, there are some specific areas. For instance, cyber.
Speaker 2 I don't think AIs are that relevant for being able to pull off a devastating cyber attack on the grid by a malicious actor currently.
Speaker 2 That said, we should look at cyber and be prepared and think about its strategic implications. There are other capabilities, like virology.
Speaker 2 The AIs are getting very good at STEM, PhD-level types of topics, and that includes virology.
Speaker 2 So I think that they are sort of rounding the corner on being able to provide expert-level capabilities in terms of their knowledge of the literature or even helping in practical wet lab situations.
Speaker 2 So, I do think on the virology aspect,
Speaker 2 they do have already national security implications, but that's only very recently with the reasoning models.
Speaker 2 But in many other respects, they're not as relevant.
Speaker 2 It's more prospective that it could well become the way in which a nation might try and dominate another nation and the backbone for not just war, but also just economic security.
Speaker 2 The amount of chips that the U.S. has versus China might be the determinant of which country is the most prosperous and which one falls behind.
Speaker 2 But this is all prospective.
Speaker 2 I don't think it's just speculative. It's speculative in the same way that NVIDIA's valuation is speculative or the valuations behind AI companies are speculative.
Speaker 2 It's something that I think a lot of people are expecting and expecting fairly soon.
Speaker 1 Yeah, it's quite hard to think about time horizons in AI. We invest in things that I think of as medium-term speculative, but they get pulled in quite quickly.
Speaker 1 You know, and just because you mentioned both cyber and bio,
Speaker 1 we're investors in companies like
Speaker 1 Culminate or Sybil on the defensive cybersecurity side or Chai and Somite on the biotech discovery side, or, you know, modeling different
Speaker 1 systems in biology that will help us with treatments. How do you think about the balance of like competition and benefits and safety?
Speaker 1 Because some of these things I think are, you know, we think they're working effectively in the near term on the positive side as well.
Speaker 2 Yeah, I mean, I don't see this
Speaker 2 big trade-off between safety and the benefits; you're just taking care of a few tail risks. For bio, if you want to expose those capabilities, just talk to sales and get the enterprise account.
Speaker 2 Otherwise, you have the little refusal thing for virology.
Speaker 2 But if you just created an account a second ago and you're asking it how to culture this virus, and here's a picture of your Petri dish, what's the next step you should do?
Speaker 2 Then, yeah, if you want to access those capabilities, you can speak to sales. That's basically
Speaker 2 in xAI's risk management framework. It's just that we're not exposing those expert-level capabilities to people when we don't know who they are.
Speaker 2 But if we do, then sure, have them.
Speaker 2 So I think you can, and likewise with cyber, I think you can just very easily capture the benefits while taking care of some of these pretty avoidable tail risks.
Speaker 2 But then once you have that, you've basically taken care of malicious use for the models behind your API. And that's about the best that you can do as a company.
Speaker 2 You could, you know, try and influence policy by using your voice or something,
Speaker 2 but I don't see a substantial amount that they could do.
Speaker 2 They could do some research for trying to make the models more controllable or try and make policymakers be more aware of the situation more broadly in terms of where we're going.
Speaker 2 Because I don't think policymakers have internalized what's happening in AI at all. They still think it's like the companies are just selling hype, and that the employees don't actually believe this stuff could get us to AGI, so to speak, in the next few years. So I don't know, I don't see really substantial trade-offs there.
Speaker 2 Where I see much more complication is when we're dealing with things like what's the right stringency in export controls, for instance. That's complicated.
Speaker 2 if you turn the pain dial all the way up for China in export controls
Speaker 2 and if AI chips are the currency of economic power in the future then this increases the probability that they want to invade Taiwan. They already want to.
Speaker 2 This would give them all the more reason if AI chips are the main thing and they're not getting any of it and they're not even getting the latest semiconductor manufacturing tools for even making cutting-edge CPUs, let alone GPUs.
Speaker 2 So those are some other types of complicated
Speaker 2 problems that we have to address and think about and calibrate appropriately.
Speaker 2 But in terms of just mitigating virology stuff: just speak to sales if you're Genentech or a bio startup, and then you have access to those capabilities. Problem solved.
Speaker 1 What is a way you actually expect that AI gets used as a weapon, beyond virology and cybersecurity?
Speaker 2 I wouldn't expect a bioweapon from a state actor; from a non-state actor, that would make a lot more sense.
Speaker 2 I think cyber makes sense from both state actors and non-state actors.
Speaker 2 Then there are drone applications. These could disrupt other things.
Speaker 2 AI could help with other types of weapons research, like exploring exotic EMPs,
Speaker 2 could help create better types of drones, and could substantially help with situational awareness,
Speaker 2 so that one might know where all the nuclear submarines are.
Speaker 2 Some advancement in AI might be able to help with that, and that could disrupt our second-strike capabilities and mutual assured destruction. So, those are some geopolitical implications.
Speaker 2 It could potentially bear on nuclear deterrence. And that's not even a weapon.
Speaker 2 The example of just heightened situational awareness, being able to pinpoint where hardened nuclear launchers are or where nuclear submarines are,
Speaker 2 is just informational, but could nonetheless be extremely disruptive or destabilizing. Outside of that, the default conventional AI weapon would be drones,
Speaker 2 which is,
Speaker 2 I don't know, that makes sense
Speaker 2
that countries would compete on that. And I think that it would be a mistake if the U.S.
weren't trying to do more in manufacturing drones.
Speaker 1 Yeah. I started working recently with an electronic warfare company.
Speaker 1 I think there's a massive lack of understanding of just the basic concept that, you know, we have autonomous systems. They all have communication systems.
Speaker 1 Our missile systems have targeting communication systems.
Speaker 1 And from a battlefield awareness and control perspective, a lot of that fight will be won with radio and radar and related systems. Right?
Speaker 1 And so I think there's an area where AI is going to be very relevant and is already very relevant in Ukraine.
Speaker 2 Speaking about AI assisting with command and control, I mean,
Speaker 2 I remember hearing some story about how on Wall Street,
Speaker 2 you always had to have a human in the loop for each decision. So at a later stage, before they removed that requirement on Wall Street, you just had
Speaker 2 rows of people just clicking the accept, accept, accept button.
Speaker 2 And we're kind of getting to a similar state in some contexts with AI. It wouldn't surprise me if we end up automating some more of that decision making.
Speaker 2 But this just turns into questions of reliability, and doing some reliability research seems useful. To return to that larger question of
Speaker 2 where the safety trade-offs are, I think people are largely thinking that the push for risk management is to do some sort of pausing or something like that.
Speaker 2 An issue is you need teeth behind an agreement. If you do it voluntarily, you just make yourself less powerful and you let the worst actors get ahead of you.
Speaker 2 You could say, well, we'll sign a treaty.
Speaker 2 But we should not just assume that the treaty will be followed; that would be very imprudent.
Speaker 2 You would actually need some sort of threat of force to back it up, some verification mechanism.
Speaker 2 But absent that, if it's entirely voluntary, then this doesn't seem like a useful thing at all. So I think people's conflation of safety with
Speaker 2 "what we must do is voluntarily slow it down" just doesn't make as much geopolitical sense unless you have
Speaker 2 some threat of force to back it up or some very strong verification mechanism.
Speaker 1 As a proxy, there's clearly been very little compliance with either treaties or norms around cyber attacks and around corporate espionage, right?
Speaker 2 Yeah. I mean, corporate espionage, for instance.
Speaker 2 That was one strategy: the sort of voluntary pause strategy, where we think that equals safety.
Speaker 2 And then maybe last year there was that paper, Situational Awareness, written by Leopold Aschenbrenner, and he's sort of a safety person.
Speaker 2 So, his idea was, let's instead try and beat China to superintelligence as much as possible.
Speaker 2 But that has some weaknesses, because it assumes that corporate espionage will not be a thing at all,
Speaker 2 which is very difficult to ensure. I mean, in some places, 30% plus of the employees at these top AI companies are Chinese nationals. This is not feasible.
Speaker 2 If you're going to get rid of them, they're going to go to China and then they're probably going to beat you, because they're extremely important for the U.S.'s success.
Speaker 2 So you're going to want to keep them here. But that's going to expose you to some information security types of issues, and that's just too bad.
Speaker 1 Do you have a point of view on how we should change immigration policy, if at all, given these risks?
Speaker 2 So I would, of course, want the policy on this to be totally separate from southern border policy and broader immigration policy.
Speaker 2 But if we're talking about AI researchers, if they're very talented, then I think you'd want to make it easier. And I think that it's probably too difficult for many of them to stay currently.
Speaker 2 And I think that that discussion should be kept totally separate from southern border policy.
Speaker 1 Just in terms of broad strokes, what are things that you think won't work? Voluntary compliance and assuming that'll happen, or just a straight race?
Speaker 2 So we want to be competitive. And I think racing in other sorts of spheres, say drones or AI chips, seems fine.
Speaker 2 If you're saying, let's race to a superintelligence and try to turn that into a weapon to crush them, and they're not going to do the same, or they're not going to have access to it, or they're not going to prevent that from happening, that seems like quite a tall claim.
Speaker 2 I mean, if we did have a substantially better AI, they could just co-opt it, they could just steal it, unless you had really, really strong information security, like you move the AI researchers out to the desert. But then you're reducing your probability of actually beating them, because a lot of your best scientists end up going back to China. Even then, if there were signs that the U.S. was really pulling ahead and going to be able to get some powerful AI that would enable it to crush China, China would try to deter them from doing something like that. They're not going to sit idly by and say, you know what, go ahead, develop your superintelligence or whatever, and then you can boss us around and we'll just accept your dictates until the end of time. So I think there is kind of a failure of second-order reasoning going on there, which is: how would China respond to this sort of maneuver, if we're building a trillion-dollar compute cluster in the desert,
Speaker 2 totally visible from space?
Speaker 2 And basically the only plausible read on this is that it's a bid for dominance or a sort of monopoly on superintelligence.
Speaker 2 It reminds me of how,
Speaker 2 in the nuclear era, there was a brief period where some people were saying, you know what, we've got to just preemptively destroy, or preventively destroy, the USSR. We've got to nuke them.
Speaker 2 Even pacifists or people who are normally pacifists like Bertrand Russell were advocating for this.
Speaker 2 The opportunity window for that maybe didn't ever exist, but there was a prospect of it for some time. But I don't think that the opportunity window really exists here, because of the complex interdependence
Speaker 2 and the multinational talent dependence in the United States.
Speaker 2 And I don't think you can have China be totally severed from any awareness or any ability to gain insight into or imitate what we're doing here.
Speaker 1 We're clearly nowhere close to that as an environment right now, right?
Speaker 2 No, it would take years. It would take years to do well.
Speaker 2 And given the timelines for some very powerful AI systems, there might not even be enough time to do that securitization anyway.
Speaker 1 So, okay, in reaction, you propose, along with some other esteemed authors and friends, Eric Schmidt and Alex Wang, a new deterrence regime, mutually assured AI malfunction.
Speaker 1 I think that's the right name. MAIM, a bit of a scary acronym, and also a nod to mutually assured destruction.
Speaker 1 Can you explain MAIM in plain language?
Speaker 2 Let's think of what happened in nuclear strategy. Basically,
Speaker 2 a lot of states deterred each other from doing a first strike because they could then retaliate. So they had a shared vulnerability.
Speaker 2 So the stance was: we're not going to take this really aggressive action of making a bid to wipe you out, because that will end up causing us to be damaged.
Speaker 2 And we'll have a somewhat similar situation later on, when AI is more salient, when it is viewed as pivotal to the future of a nation.
Speaker 2 When people are on the verge of making a superintelligence, or when they can, say, automate pretty much all AI research, I think states would try to deter each other from trying to leverage that to
Speaker 2 develop it into something like a superweapon that would allow other countries to be crushed, or from using those AIs to do
Speaker 2 some really rapid, automated AI research and development loop that could bootstrap them from their current levels to something that's superintelligent, vastly more capable than any other system out there.
Speaker 2 I think that later on, it becomes so destabilizing that China just says, we're going to do something preemptive, like a cyber attack on your data center. And the U.S. might do that to China.
Speaker 2 And Russia, coming out of Ukraine, will reassess the situation, get situational awareness, and think, oh, what's going on with the U.S. and China?
Speaker 2 Oh my goodness, they're so far ahead on AI; AI is looking like a big deal. Let's say it's later in the year, when a big chunk of software engineering is starting to be impacted by AI.
Speaker 2 Oh, wow, this is looking pretty relevant.
Speaker 2 Hey, if you try and use this to crush us, we will prevent that by doing a cyber attack on you, and we will keep tabs on your projects, because it's pretty easy for them to do that espionage.
Speaker 2 All they need to do is find a zero-day in Slack, and then they can know what DeepMind is up to in very high fidelity, and OpenAI, and xAI, and others.
Speaker 2 So it's pretty easy for them to do espionage and sabotage. Right now,
Speaker 2 they wouldn't be threatening that, because it's not at that level of severity. It's not actually that potentially destabilizing yet.
Speaker 2 The capabilities are still too distant.
Speaker 2 A lot of decision makers still aren't taking this AI stuff that seriously, relatively speaking. But I think that'll change as it gets more powerful.
Speaker 2 And then I think that this is how they would end up responding.
Speaker 2 And this makes us not wind up in a situation where we are doing something extremely destabilizing, like trying to create some weapon that enables one country to totally wipe out the other,
Speaker 2 as was proposed by people like Leo.
Speaker 1 What are the parallels here that you think make sense to nuclear and don't?
Speaker 2 I think that, more broadly, it's just a dual-use technology: it has civilian applications, it has military applications.
Speaker 2 Its economic applications are still, you know, in some ways limited, and likewise, its military applications are still
Speaker 2 limited. But I think that will keep changing rapidly.
Speaker 2 Like chemicals: they were important for the economy.
Speaker 2 They had some military use, but
Speaker 2 countries kind of coordinated not to go down the chemical weapons route. And bio as well can be used as a weapon and has enormous economic applications.
Speaker 2 And likewise with nuclear, too.
Speaker 2 So I think it has some of the properties of each of those technologies. Countries did eventually coordinate to
Speaker 2 make sure those didn't wind up in the hands of rogue actors like terrorists.
Speaker 2 There have been a lot of efforts to make sure that rogue actors don't get access to them and use them against them, because it's in neither of their interests.
Speaker 2 Basically, bioweapons and chemical weapons, for instance, are a poor man's atom bomb. And this is why we have the Chemical Weapons Convention and Bioweapons Convention.
Speaker 2 That's where there's some shared interests. So they might be rivals in other senses, in the way that the U.S.
Speaker 2 and the Soviet Union were rivals, but there's still coordination on that because it was incentive compatible.
Speaker 2 It doesn't benefit them in any way if terrorists have access to these sorts of things.
Speaker 2 It's just inherently destabilizing. So I think that's an opportunity for
Speaker 2 coordination.
Speaker 2 That isn't to say that they have an incentive to both pause all forms of AI development, but it may mean that they would be deterred from some particular forms of AI development, in particular ones that have a very plausible prospect of enabling one country to get a decisive edge over another and crush them.
Speaker 2 So, no superweapon-type stuff. But for more conventional types of warfare, like drones and things like that, I expect that they'll continue to race and probably not even coordinate on anything like that.
Speaker 2
But that's just how things will go. That's just, you know, bows and arrows and nuclear.
It just made sense for them to develop those sorts of weapons and threaten each other with them.
Speaker 1 If you all could propose one policy or action for magical adoption, tactically, by the current administration, what is the first step here?
Speaker 1 Is it the, you know, we will not build a superweapon, and we're going to be watching for other people building them, too?
Speaker 2 As I've sort of been alluding to throughout this whole conversation, what would the companies do? Not that much.
Speaker 2 I mean, add some basic anti-terrorism safeguards, but I think this is pretty technically easy. This is unlike refusal for other things.
Speaker 2 Refusal robustness for other things is harder.
Speaker 2 Like, if you're trying to get at crimes and torts,
Speaker 2 that's harder because it's a lot messier. It overlaps with typical everyday interaction.
Speaker 2 I think, likewise, here, the asks for states are not that challenging either.
Speaker 2 It's just a matter of them doing it. So, one would be the CIA has a cell that's doing more espionage of other states' AI programs.
Speaker 2 So that way, they have a better sense of what's going on and aren't caught by surprise.
Speaker 2 And then, secondly, maybe some part of government, like let's say CyberCom, which has a lot of cyber offensive capabilities,
Speaker 2 gets some cyber attacks ready to disable other data centers in other countries if they're looking like they're doing something, running a or creating a destabilizing AI project.
Speaker 2 That's it for the deterrence, for non-proliferation of AI chips to rogue actors in particular. I think there'd be
Speaker 2 some adjustments to export controls, in particular, just knowing where the AI chips are at.
Speaker 2 reliably.
Speaker 2 We want to know where the AI chips are for the same reason we want to know where our fissile material is, and for the same reason that we want Russia to know where its fissile material is.
Speaker 2 That's just generally a good bit of information to collect. And that can be done with some very basic statecraft of having a licensing regime.
Speaker 2 And for allies, they just notify you whenever chips are being shipped to a different location, and they get a license exemption on that basis.
Speaker 2 And then you have enforcement officers prioritize doing some basic
Speaker 2 inspections for AI chips and end-use checks. And so I think all of these are
Speaker 2 a few texts away
Speaker 2
or a basic document away. And I think that that kind of like 80-20 is a lot of it.
Of course, this is always a changing situation.
Speaker 2 Safety isn't, as I've been trying to reinforce, not really that much of a technical problem. This is more of a complex
Speaker 2
geopolitical problem with technical aspects. Later on, maybe we'll need to do more.
Maybe we will,
Speaker 2 there might be some new risk sources that we need to take care of and adjust. But I think, like, right now, I think that espionage through CIA
Speaker 2 sabotage with CyberCom, building up those capabilities, buying those options seems like that takes care of a lot of the risk.
Speaker 1 Let's talk about compute security.
Speaker 1 If we're talking about a 100,000 networked state-of-the-art chips, you can tell where that is.
Speaker 1 How does DeepSeek and the recent releases they've had factor into your view of compute security, given expert controls have clearly led to innovation toward highly compute-efficient pre-training that works on chips that China can import at what one might consider like an irrelevant scale, a much smaller scale today.
Speaker 1 It's hard for me to see directionally that training becoming less efficient, even if we, even if people want to scale it up. And so, like, does that change your view at all?
Speaker 2 No, I think it just sort of undermines other types of strategies, like this
Speaker 2 Manhattan Project type of strategy of let's move people out to the desert and do a big cluster there.
Speaker 2 And what it shows is that you can't rely as much on restricting another superpower's capabilities, their ability to make models.
Speaker 2 So you can restrict their intent, which is what deterrence does, but I don't think you can reliably or robustly restrict their capabilities.
Speaker 2 You can restrict the capabilities of rogue actors, and that's what I would want things like compute security and export controls to facilitate: making sure this stuff doesn't, you know, wind up in the hands of Iran or something.
Speaker 2 China will probably keep getting some fraction of these chips, but we should basically just try and know where they're at more and we can tighten things up.
Speaker 2 You can even coordinate with China to make sure that the chips aren't winding up in rogue actors' hands.
Speaker 2 I should also say that the export controls weren't actually a substantial priority among leadership at BIS, to my understanding. The AI chips were a priority for some people, but for the enforcement officers, like,
Speaker 2 did any of them go to Singapore to see where these 10% of NVIDIA's chips were going?
Speaker 2 I think
Speaker 2 they would have very quickly found, oh, they were going to China. So some basic end-use checks would have taken care of that.
Speaker 2 I don't think this means that export controls don't work.
Speaker 2 We've done non-proliferation of lots of other things like chemical agents and fissile material. So it can be done if people care.
Speaker 2 But even so, I still think that if you really tightened the export controls, made it so that China can't get any of those chips at all,
Speaker 2 and made this one of your biggest priorities, they're just going to steal the weights anyway.
Speaker 2 I think it'll be too difficult to totally restrict their capabilities, but I think you can restrict their intent through deterrence.
Speaker 1 It also seems like either this stuff is powerful or it's not. It seems infeasible to me, given the economic opportunity, that China will say, we don't need the capability.
Speaker 1 I fail to see a version of the world where leadership in another great power that believes there is value here says, we don't need that, from an economic value perspective.
Speaker 2 Yeah, that's right.
Speaker 2 For a lot of these, maybe it would be nicer if everything went, you know, 3x slower, and maybe there'd be fewer mess-ups if there were some magic button that would do that. I don't know whether that's true or not, actually; I don't have a position on that. But given the structural constraints and the competitive pressures between these companies and between these states, a lot of these things are just infeasible.
Speaker 2 A lot of these other gestures that could be useful for risk mitigation, when you consider them, or when you think about the structural realities of it, just become a lot less tractable.
Speaker 2 That said, there still could be, in some way, some pausing or halting of development of particular projects that you could potentially lose control of, or
Speaker 2 that, if controlled, would be very destabilizing because they would enable one country to crush the other. I think people's conception of what risk management looks like is that
Speaker 2 it's a peacenik thing or something like that,
Speaker 2 it's all kumbaya, and we just have to ignore structural realities in operating in this space.
Speaker 2 I think instead the right approach toward this is that it's sort of like nuclear strategy: it is an evolving situation. It depends.
Speaker 2 There's some basic things you can do, like you're probably going to need to stockpile nuclear weapons. You're going to need to secure a second strike.
Speaker 2 You're going to need to keep an eye on what they're doing. You're going to need to make sure that there isn't proliferation of rogue actors when the capabilities are extremely hazardous.
Speaker 2 And this is a continual battle. It's
Speaker 2 not clearly going to be an extremely positive thing no matter what, and it's not going to be doomsday no matter what, just as with nuclear strategy.
Speaker 2 That was obviously risky business. With the Cuban Missile Crisis, we came pretty close to an all-out nuclear war.
Speaker 2 It depends on what we do.
Speaker 2 And I think some basic interventions and some very basic statecraft can take care of a lot of these sorts of risks and make it manageable.
Speaker 2 I imagine then we're left with more domestic type of problems, like what to do about automation and things like that. But I think maybe we'll be able to get a handle on some of the geopolitics here.
Speaker 1 I want to change tack for our last couple of minutes and talk about evals.
Speaker 1 And it's obviously very related to safety and understanding where we are in terms of capability. Can you just contextualize where you think we are?
Speaker 1 You came out with the triggeringly named Humanity's Last Exam eval, and then also Enigma. Like, why are these relevant, and where are we in evals?
Speaker 2 Yeah, yeah. So for context, I've been making evaluations to try and understand where we're at in this, in AI, for, I don't know, about as long as I've been doing AI research.
Speaker 2 So, previously, I've done some datasets like MMLU and the MATH dataset. Before that, before ChatGPT, there were things like ImageNet-C and other sorts of things.
Speaker 2 So, Humanity's Last Exam was basically an attempt at getting at
Speaker 2 what would be the end of the road for the evaluations and benchmarks that are based on exam-like questions, ones that test some sort of academic knowledge.
Speaker 2 So for this, we asked professors and researchers around the world to submit a really challenging question, and then we would add that to the data set.
Speaker 2 So it's a big collection of what professors, for instance, would encounter as challenging problems
Speaker 2 in their research that have a definitive, closed-ended, objective answer. With that, I think the genre of closed-ended questions, where it's multiple choice or a simple short answer,
Speaker 2 will roughly be expired when performance on this dataset is near the ceiling.
Speaker 2 And when performance is near the ceiling, I think that'd basically be an indication that
Speaker 2 you have something like a superhuman mathematician or a superhuman STEM scientist, in many ways, for the domains where closed-ended questions are very useful, such as math.
Speaker 2 But it doesn't get at other things to measure, such as its ability to perform open-ended tasks. That's more agent-type evaluations.
Speaker 2 And I think those will take more time.
Speaker 2 So we'll try to measure directly its ability to automate various digital tasks: collect various digital tasks, have it work on them for a few hours, and see if it successfully completed them. Something like that is coming up soon.
Speaker 2 We have tests for closed-ended questions, things that test knowledge in the academy, like mathematics. But the models are still very bad at agent stuff.
Speaker 2 This could possibly change overnight, but it's still near the floor. I think they're still extremely defective as agents.
Speaker 2 So there'll need to be more evaluations for that. But the overall approach is just to try and understand what's going on,
Speaker 2 what's the rate of development,
Speaker 2 so that the public can at least understand what's happening. Because if all the evaluations are saturated,
Speaker 2 it's difficult to even have a conversation about the state of AI. Nobody really knows exactly where it's at, where it's going, or what the rate of improvement is.
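To make the closed-ended benchmark idea concrete, here is a minimal sketch of the exact-match grading that exam-style evals of this kind reduce to. It is not the actual Humanity's Last Exam harness; the item format and the query_model callable are hypothetical stand-ins for whatever model API is being evaluated.

```python
# A minimal sketch of closed-ended, exact-match grading, assuming each item
# has a single objective short answer. Not the actual Humanity's Last Exam
# harness; `query_model` is a hypothetical stand-in for a model API call.

def normalize(answer: str) -> str:
    """Lightly normalize a short answer before comparison."""
    return answer.strip().lower()

def exact_match_accuracy(items, query_model):
    """Return the fraction of items the model answers exactly correctly."""
    correct = 0
    for item in items:
        prediction = query_model(item["question"])
        if normalize(prediction) == normalize(item["answer"]):
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    demo = [{"question": "What is 17 * 3?", "answer": "51"}]
    # Toy "model" that always answers 51, just to exercise the scorer.
    print(exact_match_accuracy(demo, lambda q: "51"))  # prints 1.0
```

When a model is near the ceiling on a dataset scored this way, the closed-ended genre has little headroom left, which is the saturation point described above.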
Speaker 1 So is there anything that qualitatively changes when,
Speaker 1 let's say, these models and model systems are just better than humans, right? Like, exceeding human capability, and how we do evals? Does it change our ability to evaluate them?
Speaker 2
So I think the intelligence frontier is just so jagged. What things they can do and can't do is often surprising.
They still can't fold clothes.
Speaker 2 They can answer a lot of tough physics problems, though. Why that is, you know, there are complicated reasons.
Speaker 2 So it's not all uniform. And so in some ways, they'll be better than humans.
Speaker 2 It seems totally plausible that they'll be better than humans at mathematics not too long from now,
Speaker 2 but still not able to book a flight. The implication of that is that when you have them being better, they just might be better in some limited ways.
Speaker 2 And that just might
Speaker 2 have a kind of limited influence on its domain, but not necessarily generalize to other sorts of things. But I do think it's possible that they'll be better at reasoning skills than us. We still could have humans checking, because they can still verify: if an AI mathematician is better than a human, humans can still run the proof through a proof checker and then confirm that it was correct. So in that way, humans can still understand what's going on in some ways. But in other ways, like if they're getting better taste in things, if that makes any sense, and maybe it doesn't make any philosophical sense, that would be pretty difficult for people to confirm. I think we're on track overall to have AIs that have really good oracle-like skills: you can ask them things and, wow, it just totally said something insightful or very non-trivial or pushed the bounds of knowledge in some particular way, but they're not necessarily able to carry out tasks on behalf of people for some while. So I think this is why we don't take the AIs that seriously, because they still can't do a lot of very trivial stuff.
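A toy illustration of the proof-checking point, in Lean: however the proof term was produced, human or AI, the human only needs the proof checker's kernel to accept it. The specific theorem is a hypothetical example, not drawn from any benchmark.

```lean
-- A toy illustration of verification by proof checker: whoever (or whatever)
-- wrote this proof term, a human only needs the kernel to accept it to
-- trust the result, without following the reasoning that produced it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```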
Speaker 2 But when they get some of the agent skills, then I don't think that there are many barriers to their economic impacts,
Speaker 2 or to people going from thinking this is kind of an interesting thing to this being the most important thing.
Speaker 2 I think that's an emergent property with agent skills that the vibes really shift, and it's pretty clear that this is
Speaker 2 much bigger than some prior technology like
Speaker 2 the App Store or social media.
Speaker 2 It's in a category of its own.
Speaker 1 Well, Dan, thanks for doing this. It's a great conversation.
Speaker 2 Yeah, thank you for having me.
Speaker 1
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
Speaker 1 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.