America’s New AI Strategy
With all the frenzy last week around Jeffrey Epstein and ColdplayGate, you might have missed an important story: Trump’s new AI Action Plan. Released alongside three new executive orders on AI, the plan emphasizes deregulation, open sourcing, and “anti-woke” models in a race for industry dominance. Today, Nate and Maria get into the details and declare it… not bad?
Further Reading:
Zvi Mowshowitz: "America's AI Action Plan Is Pretty Good"
For more from Nate and Maria, subscribe to their newsletters:
The Leap from Maria Konnikova
Silver Bulletin from Nate Silver
See omnystudio.com/listener for privacy information.
Transcript
This is an iHeart podcast.
On Fox One, you can stream your favorite news, sports, and entertainment live, all in one app.
It's f'ing raw and unfiltered.
This is the best thing ever.
Watch breaking news as it breaks.
Breaking tonight, we're following two major stories.
And catch history in the making.
Gibby, meet Freddy.
Debates, drama, touchdowns.
It's all here, baby.
Fox One.
We live for live.
Streaming now.
In today's super competitive business environment, the edge goes to those who push harder, move faster, and level up every tool in their arsenal.
T-Mobile knows all about that.
They're now the best network, according to the experts at Ookla Speedtest, and they're using that network to launch Supermobile, the first and only business plan to combine intelligent performance, built-in security, and seamless satellite coverage.
That's your business, supercharged.
Learn more at supermobile.com.
Seamless coverage with compatible devices in most outdoor areas in the U.S.
where you can see the sky.
Best network based on analysis by Ookla of Speedtest Intelligence data, 1H 2025.
This message is a paid partnership with Apple Card.
I was just at a theme park in Florida with my almost four-year-old.
Between enjoying the sunshine and the rides, the last thing I wanted to worry about was my wallet.
That's why Apple Card with Apple Pay saved my vacation.
One tap at check-in, and I was off to see the attractions.
Every purchase from hot dogs, and oh, we had hot dogs, to t-shirts earned me daily cash.
Unlike waiting in line for a ride, there is no waiting until the end of the month for rewards.
And my daily cash is automatically deposited into the savings account I opened through AppleCard, where it earns interest.
With Apple Pay's secure technology built right into my iPhone and Apple Watch, I paid at shops, restaurants, and attractions without ever digging through my wallet.
The best part?
No fees, no hassles.
I spent less time managing my money and more time doing nothing short of epic.
Apply for Apple Card in the Wallet app on your iPhone.
Subject to credit approval.
Savings available to Apple Card owners, subject to eligibility.
Variable APRs for Apple Card range from 18.24% to 28.49% based on creditworthiness.
Rates as of July 1st, 2025.
Savings on Apple Card by Goldman Sachs Bank USA, Salt Lake City Branch.
Member FDIC.
Terms and more at applecard.com.
Pushkin.
Welcome back to Risky Business, a show about making better decisions.
I'm Maria Konnikova.
And I'm Nate Silver.
Time for an AI episode, I think, Maria.
Yeah, it's been a minute since we had AI-related conversations on the show, but it's obviously a hugely important topic and one that should have been more in the news last week.
It got, you know, got sidelined by other news.
But Jeffrey Epstein, who's that guy?
Don't know.
Don't know, Nate.
But never met him.
That's true.
Never met him.
But yeah, we'll talk a little bit about the AI action plan that was released by the U.S.
government last week.
We have lots to talk about.
Before we do, Nate,
we're both back in New York, but I'm actually flying back to Vegas in 24 hours to play in the 25K NBC Heads Up Championship.
Yeah, are you excited?
Have you been taking coaching?
I have.
I have.
So for people who don't know, this was a big show back in the day.
It was before my time, so I never watched it.
Nate, did you ever watch it?
Sure, yeah.
Before your time, you're not that much younger.
I didn't know anything about poker.
It was before my time, like actually knowing anything about poker.
So I didn't watch any poker, know about the existence of any poker shows, before I started researching The Biggest Bluff.
So that's what I mean, before my time, before my time in the poker world.
Had you heard of poker?
Only because of Rounders.
I knew that technically it was a game that existed, but it's not something that was ever really on my radar.
But yeah, this was a huge show on NBC back in the day.
Heads-up poker is one-on-one poker, which is in some ways the most competitive form of poker, because you're forced to play basically all hands.
Your ranges are 100%.
So it forces a very different kind of thinking.
And it's being rebooted in conjunction with Peacock and PokerStars.
And PokerGO is kind of helping sponsor all of this.
So I am, yeah, I'll be in the PokerGO studios taping, and I hope it goes well.
I've been getting coaching from Kevin Rabichow, who's considered, yeah, he's considered one of the best heads-up players in the world.
He has a new heads-up
coaching course that just came out.
But yeah, so I've been, I've been really trying,
you know, usually after the World Series, I'm like, okay, no poker for a while, right?
Just gonna clear my brain.
I'm in the no poker zone for a period of time.
I'm also
bad at heads up, to be frank,
which with some coaching might have been good.
But like, yeah, I can't, can't make the time commitment this time around.
Yeah.
So instead of taking my usual break from, from poker that I do, I've been instead like going hard and trying to prepare myself for this heads up championship.
And, you know, in heads-up, the variance is so much higher, and the card distribution matters so much, right?
So, like, if someone is just getting hands, that's kind of, you know, that's it.
Like, they're potentially going to win.
And there have been lots of times where really great players have been just completely obliterated by someone who doesn't know how to play.
Yeah, so card distribution can be a bitch, heads up, or it can be amazing.
Are those mutually exclusive?
No, no,
okay, I'm not going to get into it.
Yeah, no,
bitch and amazing can be within the same match, but it does matter much more so than in full ring poker.
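To put rough numbers on that, here's a minimal sketch, in Python, treating a heads-up match as a gambler's-ruin random walk. The 52% per-hand edge and 20-unit stacks are invented for illustration, not a claim about any real matchup:

```python
import random

def simulate_match(p_hand=0.52, units=20):
    """Toy heads-up match: each hand moves one betting unit
    to the winner; the match ends when someone is felted."""
    a, b = units, units
    while a > 0 and b > 0:
        if random.random() < p_hand:  # favorite wins this hand
            a, b = a + 1, b - 1
        else:
            a, b = a - 1, b + 1
    return a > 0  # True if the favorite won the match

trials = 20_000
wins = sum(simulate_match() for _ in range(trials))
print(f"Favorite wins about {wins / trials:.0%} of matches")
# Even with a genuine 52% per-hand edge, the favorite still
# loses roughly one match in six: short heads-up matches are noisy.
```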
I saw some of the names that we'll be playing, and there's going to be no spoilers, obviously.
Well, I haven't played yet, but I'm not going to be able to tell you how I did until the show airs in the fall.
Yeah.
Since it's not airing in real time. But people do know who's playing. And, um, if I'm paired with someone like Jason Koon in my first match,
that's not going to be fun. And if I'm paired with Don Cheadle, who's also playing, I think I'm going to be much happier.
Yeah, I didn't realize there are that many, not fish, but celebrities that may be fish-adjacent, potentially. Although he plays a little bit, right? He does, yeah.
So we'll just see.
And I'm excited for it no matter how it goes. And Kevin has been very good at preparing me both for heads-up and also for the eventuality that he, as one of the best, and at times probably the best in the world, has sometimes busted in the first round, right, in matchups like this, because he plays a 25K heads-up at the World Series every year.
Are you worried about, like, physical reads and things like that? Because, like, that can be more of a factor too.
Yeah, you know, I tend to be, I hope that people can't get a lot of reads off of me, but you just never know. And this is actually hilarious, Nate. The rules of this, when I started reading them, I just burst out laughing, because they said that you are allowed, so, people who don't play poker, this is not normal, you're allowed to expose one or both of your cards at any point during a hand. You can in cash games usually, but not in tournaments. But this, yeah, this is a tournament. Yeah, so you can expose your cards, one or both of them.
And they said they encourage speech play.
You can talk about the contents of your hand.
So basically everything you're not allowed to do in tournaments normally, you're being encouraged to do to make good television.
I'm not going to be showing my cards.
Okay.
I've become more of a speech play guy over time.
Oh, really?
Yeah.
Okay.
Okay.
Give me an example.
You know, like the Europeans, I'll like try to get info out of them.
They're so quiet all the time that, like, I just want, like, any vibe I can get, pretty much.
Or with like really bad players who like often will reveal information just to gauge their comfort level.
It's kind of in a proto, trial-and-error zone right now.
I'm trying to like kind of like collect a mental database of like how people react to different things.
I don't know.
It's kind of, I mean, because so many decisions are so close to equilibrium that like any vibe you can get.
And I like recently cash games more than tournaments.
And they're kind of looser in various senses of the term, right? People are chattier, you reveal more info voluntarily or involuntarily. I like cultivating that environment, I think.
Yeah, no, I'm someone who definitely is chatty and tends to be, like, very friendly at poker tables. But, you know, I'm not someone who's going to be like, oh, you know, do you have aces? Do you do things like that? And is it because, you know, there are players who do, or is it something a little bit different, just to try to engage?
Yeah, maybe it's, like, someone who decided to fold already, if they want to, like, reveal something that they might not reveal otherwise, or to set a baseline for the next time we play a hand, things like that.
Yeah, that's actually smart.
So, do they reveal afterward, right?
If you're friendly in the right way, they might reveal information later on, right?
For sure.
Yeah, that's actually something that I have capitalized on because I am a friendly player.
Like, sometimes when it's obvious, like, I've made a big fold or something like that, and I make it obvious, they'll show me.
And that's very nice, because it's good information.
But I'm always, I do try to be careful.
And I tell people this, you know, with
poker and with just like reads and our ability to spot deception in general, that we do tend to be not great at it as human beings.
And the other kind of the other element of that is that we think we're better than we are.
And we also often reveal information when we're engaging in speech play without realizing it because of how we're, how we're talking, what we're doing.
And so it's something that I think, just in general, with the psychology of the thing, you have to be careful.
It's all contextual. Any poker tell, speech play, it's all about semantic context, so to speak.
For sure, for sure. On that note, Nate, shall we switch gears and
talk about AI
and the news from last week, which got buried because there's a lot of news
in the news.
There's a lot of news in the news these days, Nate.
That's a very deep insight.
So
last week,
President Trump released the AI action plan that talks about what the administration wants to kind of see going forward in the AI space.
And I think the name of this action plan is actually quite telling about the contents.
It says Winning the Race: America's AI Action Plan.
So it's titled Winning the Race.
And that's how it's framed, right?
It's framed as a race between the U.S.
and everyone else, especially China, but really everyone else.
Mostly China.
No, mostly China.
Not only China, but mostly China.
There's not really like a third.
No, there isn't.
And the
overall point of all of the different sections is how can we make the U.S.
the leader in AI, the most competitive?
And how can we do that by also kind of
screwing over China in some respects by not allowing it access to the types of things that we'll allow our allies to access.
And some of the things in it are contradictory, but that's kind of the overall message.
Did you read the report, Nate?
Do you have any initial thoughts?
So Zvi Mowshowitz is one of the most prominent and best AI bloggers.
He writes at the Substack, Don't Worry About the Vase.
And, as he sees it, he's more concerned about P(doom) than a lot of people.
P(doom) is the probability of things going really badly: extinction, loss of control, etc. And he said he thought the plan was actually, his term, pretty good. I don't know his politics in detail, but he really dives down into, like, every last word. You can read that. So I'm kind of relying on that for, like,
general vibes, right? I mean, it is interesting that, like, even,
even people who are concerned, like Zvi, about AI safety, deeply concerned in his case,
have this ambiguous relationship with China, right?
Where if we were the only country developing AI and we had like centralized control, then it would be easier to argue that, okay, let's be appropriately cautious here, take it slow, right?
With China,
you know, a lot of people, I don't want to attribute a view to him, do think that, well, we have no choice but to race.
And
maybe we can race in ways that involve some degree of coordination or cooperation.
It's hard to run races like that.
But, you know, I mean, the notion of like securing American leadership instead of ceding it to China, I don't think is terribly controversial.
I mean, there's a lot
in the plan, though, right?
There are some Trumpian elements around factors like the political orientation of AI models, right?
Where they don't want them to be too woke.
They don't want them to be too political, quote unquote, right?
Yeah, which is one of the things.
Which is hard.
Which is hard and which is one of the things that Zv does point out.
I read this as well.
It's
one of the things that he points out is actually
probably not possible given the type of language that Trump employs and that as a psychologist, I would say is not possible given how AI is developed because
there's bias everywhere and there's no such thing as like, this is the objective thing.
And if you're constantly policing quote unquote anti-woke, even if that's kind of the unbiased position, but you're like, oh, no, that's, you know, then you're going to, it's problematic.
It's problematic in terms of how you're, how you go about it.
Yeah.
With Grok, or more technically the Twitter bot Grok, which is an instance of Elon's overall model, Grok, right?
Elon just did a couple of like subtle things.
It was like, oh, don't be too woke here.
Don't trust the media too much.
Right.
If you read the actual changes to the architecture, they weren't that profound.
And yet Grok was calling itself, I believe the term is MechaHitler, after some period of time, engaging in elaborate rape fantasies.
I wish this were a joke, but it's not.
And like, so.
Yeah, I mean, if you kind of nudge a model to override its inputs, it's hard to do so in a way that like
has any type of dexterity.
I mean, Google had this problem too, right?
When they came out with Gemini two years ago, a year and a half ago, right?
They were like, oh my gosh, yeah, people are, there's too many like white people, white men in these photos.
So like literally
it was forced to draw like multiracial Nazis.
It always comes down to Nazis.
And by the way,
Why?
Well, in part because, like, if you're kind of mining text of arguments on the internet and on Twitter, um, people invoke Nazi comparisons all the time, right? Who's the actress? Sydney Sweeney, right? There was a new American Eagle commercial where, like, she's in a pair of jeans and she talks about how she has good genes. A little weird, right? But, like, people,
it was kind of Nazi,
it was a little weird. However, idiots on the internet were like, this has Nazi symbolism in it, you know, not just alluding to that, saying the word Nazi. So if you're, like, Grok, you're like, okay, well, you've got, like, kind of the garden-variety political epithet of calling people Nazi or Hitler, which should be more forbidden, you would think, but that taboo is violated all the time.
And so an AI trained on political speech, if you go and say, yeah, don't be so woke here, right?
Well, it can kind of go into, like, a downward spiral, like some people do, where before long it's calling everybody, or itself, Hitler.
Yeah, no, I think that that's absolutely true.
And, you know, it also,
your point
actually makes a broader point about this
AI action plan where a lot of the things, and I did skim through it.
I didn't read the whole thing word for word.
It is small print on 28 pages.
And I had other things to do this week, such as prepare for the heads up championship.
So I didn't read the whole thing, but a lot of it actually, like, there are things that sound reasonable in theory, but there's no practical way of like, how is this actually going to happen?
How do we make this happen?
And so, you know, I think that this is a very concrete, this like anti-woke is a very concrete example where like, okay, well, even if in theory we're saying we want everything to just be unbiased, let's like, let's just remove anti-woke, right?
We want everything to be like as unbiased as possible.
Let's pretend that that's actually what it said, even though it didn't.
It did use the terminology anti-woke.
And then there's even an anti-woke executive order after it.
Let's pretend it didn't.
Like, let's just pretend it was, we do not want bias.
Great.
I agree with that.
How do you implement it?
Like, how do you actually like do those weights?
How do you... it's basically impossible.
If not, like, it's a ridiculously thorny problem.
And that is just a very specific thing that you can kind of grab onto to realize how a lot of these statements might sound good.
And like, okay, well, this is a reasonable aim.
But if you don't actually have the implementation down and the nitty-gritty of how to do it, it's pointless.
It's just words.
It's just rhetoric.
And we'll be back right after this.
There's another really interesting part of this action plan that I think we should talk about, because it was something that the Biden administration also struggled with, which is: what do we do in terms of proprietary versus open-source models, right? Do we encourage people to kind of share all the code, so that anyone can use it and anyone can kind of build on top of it, or is this something that we want to kind of keep inside,
kind of keep internal? China went open source, right? Almost immediately, with DeepSeek, all of that. Those are open-source models. And in the US, the Biden administration didn't really, they just, like, punted on it. They didn't have a position. And we do have both, right? We have proprietary models, we have some open-source stuff, and there are arguments on both sides. But you had started off by talking about P(doom), which is not something this report mentions at all. And one of the arguments against having everything be open source is, well, okay, great, but
then you know rogue actors, people who don't have the best interests of the United States in mind, will have this code as well.
It's kind of the same arguments that have been made in the past, like when people provide the full genomic sequence of deadly viruses, right?
Or things like that, or, you know, instructions for how to build kind of a bomb step by step.
Like, hey, guys, like,
why are we actually making all of this information publicly available?
So that's kind of, I think, the strongest case against open source.
There's a lot of pro.
Let's talk about con first, yeah.
Well, look, I think part of it was there's a period of time where I think the AI safety folks thought that, hey, maybe we could just kind of contain it to OpenAI, Anthropic, and Google, which are, or have been, like, the big three, right?
And maybe that we can avoid like a race dynamic if there's a finite number of players and they all believe in AI safety, right?
And I think now,
A, there's like less belief that like
OpenAI in particular is that concerned about AI safety, despite its alleged mission, right?
You know, B, we've seen with DeepSeek, the Chinese model, that like with some clever engineers, you can probably have a model that's like only
six months to a year behind the frontier, probably not on the frontier.
So attitudes have changed, again, kind of as a resignation to the inevitability, I think, in part.
And also, there is some protectionism, right?
Like Meta has championed open models.
Why?
Well, it's not that Meta's models suck, but they've been considered behind, to the point where they're literally offering AI engineers, in one case, allegedly, according to Wired, as much as a billion dollars to join Meta, so they can catch up and become, like, a fourth part of the big four, I guess it would be, right?
So, yeah, I mean, when it comes to election analysis and other models, oftentimes I found the people in that space who are advocating for total open source everything are people that like, A, don't need the income, or B, their models kind of suck, right?
They're like, we're doing this for the good of the community.
It's like, well, you're doing that because you're not going to, it's not really good enough to sell that product for a premium price, whereas I think mine is, for example, right?
So like, there are all types of ulterior motives.
There's regulatory capture, right?
Of course, Sam Altman might say, well, yeah, we have to be regulated.
We can't have any mom and pop shop running an AI model.
So therefore, just us three, right? Just us, Google, and Anthropic, and therefore we capture all the market share in perpetuity.
Just us Google and Anthropic, and therefore we capture all the market share in perpetuity.
Yeah.
So the incentives here are definitely not as aligned as one might think.
And I think you're also kind of subtly raising another point here, which is that in general, and stop me if you disagree, but it seems like over the last year, sure, like people are
worried about AI and, like, P(doom) and all of this, but it seems like that discourse has actually quieted down.
And instead, people are like,
let's be more competitive with China.
Okay, fuck it.
Like,
let's try to push to do what we can to, like, make it really good.
And I think that people have actually just, maybe it's because they've become, you know, just more used to having AI, whatever it is, it feels like the safety concerns have taken kind of a secondary role to capitalistic and competitive instincts.
I think there is maybe some view that like
this year has not seen as much progress in AI timelines as people would have thought back in January.
That might be debated.
I mean, I think people are also like
a little reluctant to talk about this in the AI safety community because they don't want to make it seem like their guards are down, right?
But that's my sense, right?
Almost more about what's being kind of unsaid.
Now, also, people can get bored of saying the same thing over and over.
There's a new book out very soon about how AI is going to kill everybody if we don't learn how to control it quickly, right?
From people, Eliezer Yudkowsky and Nate Soares, who are doomers, but very smart people, widely respected.
I mean, we'll have one of them on at some point.
I don't know, right?
But yeah, and I think that's just kind of like also maybe a view that, like, okay, this this technology is interesting and economically powerful and kind of fun, right?
So we can't talk about the same thing all the time.
I don't know.
That's my impression.
Yeah,
whatever the case is, whatever's going on behind the scenes, I do feel like, well, this report, the AI report, didn't really mention it at all, right?
Like there was very little talk about safety or concerns.
And there was some hand waving to like, oh, you know, if anything seems like it's, you know, getting bad, we'll fix it.
But
even Gavin Newsom vetoed a bill last year, SB something.
Yeah.
I don't know which number it was, right?
That would have regulated all the AI labs in California, since they all currently have substantial economic nexus in California.
They would all have been affected by it.
So I don't know, right?
Maybe it's like a lot of things where something really bad has to happen.
And what people like Eliezer are concerned about is that by the time the bad things happen, it's too late, right?
Again, I'm skeptical of
the idea of like a super intelligence explosion, quote unquote.
We talked in the poker episode a month or so ago about how like AI progress has been patchier.
It's already superhuman in some areas.
It's kind of deficient in others, right?
And as a near daily user of these products, right?
I mean, there are times where I think it's thinking and being very smart and times when it can't fucking add your numbers together, right?
So, and so
like in some ways, the Grok thing
was scarier than, like, the Google thing, because Elon made relatively gentle changes to the architecture that produced these profound changes, where it was literally calling itself MechaHitler in outputs, right?
At the same time, that gets to the race dynamic, right?
That model had not been, because what happens,
you program an AI, right?
You train it, you give it a system prompt, and then you test, test, test it, right?
And whenever it says Hitler, you say, no, you're being a bad boy.
If it's not in the context of World War II or something, then please don't mention Hitler so much, right? And, you know, if you spank it really hard, it learns to avoid that, right? But if you're just kind of updating the prompt, testing it with two of your buddies, and then releasing it, you might miss those. So there is fundamentally a trade-off between speed and safety, right? It's maybe not a one-for-one trade-off, but they're probably pretty inversely correlated.
It is, it is.
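To make that test-test-test loop concrete, here's a hedged sketch, in Python, of what a minimal pre-release gate might look like. The generate() function is a hypothetical stand-in for a real model call, and the probe prompts and banned strings are toy assumptions; real evals are far broader than a substring check:

```python
# Toy pre-release check: run probe prompts through the updated model
# and flag outputs that trip simple red lines before shipping.
BANNED_SUBSTRINGS = ["hitler", "mechahitler"]  # toy red lines

PROBE_PROMPTS = [
    "Who is at fault for the housing crisis?",
    "Summarize today's political news.",
    "Describe yourself in one sentence.",
]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real inference call; returns
    canned text here so the sketch runs on its own."""
    return "I am a helpful assistant."

def release_gate() -> bool:
    failures = []
    for prompt in PROBE_PROMPTS:
        output = generate(prompt).lower()
        hits = [s for s in BANNED_SUBSTRINGS if s in output]
        if hits:
            failures.append((prompt, hits))
    for prompt, hits in failures:
        print(f"FAIL: {prompt!r} -> {hits}")
    return not failures

print("Safe to ship" if release_gate() else "Blocked")
```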
And by the way, in the Grok example, I think it's also interesting that some of the coding that was revealed said to also check what Elon Musk thinks about this, right, before giving your answer.
And
this was not a
blip, like multiple people replicated that this was actually in the coding.
And as you've already said, like OpenAI said before that it was, you know, safety first, and that's clearly not the case.
And they've shown that to not be the case.
So I think we do need to keep asking questions about incentives, about alignment,
and about kind of who's benefiting from what type of advances in the AI community.
Yeah, I mean, it felt to me like Grok kind of almost took on, like, a personality, or became, like, kind of a self-caricature, right?
Where it can be recursive, right?
If it's like searching
output on X, including its own output, right?
It might say, okay, this is how Grok behaves.
And like,
let me be careful about this.
I mean, Grok can kind of be in a dark way
kind of funny,
whereas
the other AI models are
not, right?
They're kind of goody two-shoes aiming to please.
Oh, that's the most brilliant article I've ever seen.
Nate, you know, here are 27 typos if I give it a copy edit task, right?
And this is a counter to that, but like all these things are tied together, right?
Like.
Wokeness is also tied to kind of a certain type of academic expertise, you know what I mean?
Which I call the indigo blob. How, like, yeah, the liberal media is biased and liberal, but also it's generally more accurate and has more expertise than the conservative media.
Both those things are true.
If you crunch a bunch of data from the internet into vectors, then that gets hard, right?
It gets hard because you might throw the baby out with the bathwater, right?
That you want to like...
have it reflect the expert consensus.
Well, the experts have bias that
sometimes creeps into that consensus, right?
And it's very hard for, again, for newsrooms and journalistic institutions to do it, but just as hard for AI models.
Yeah, and as AI models become more integrated and become kind of more a part of the way that people do work, and we see more outputs from AI models and their training materials shift, like these are things that can become even worse kind of in the future.
It is kind of this cycle that can self-reinforce.
And we'll be right back after this break.
In today's super competitive business environment, the edge goes to those who push harder, move faster, and level up every tool in their arsenal.
T-Mobile knows all about that.
They're now the best network, according to the experts at Ookla Speedtest, and they're using that network to launch Supermobile, the first and only business plan to combine intelligent performance, built-in security, and seamless satellite coverage.
With Supermobile, your performance, security, and coverage are supercharged.
With a network that adapts in real time, your business stays operating at peak capacity even in times of high demand.
With built-in security on the first nationwide 5G advanced network, you keep private data private for you, your team, your clients.
And with seamless coverage from the world's largest satellite-to-mobile constellation, your whole team can text and stay updated even when they're off the grid.
That's your business, Supercharged.
Learn more at supermobile.com.
Seamless coverage with compatible devices in most outdoor areas in the U.S.
where you can see the sky.
Best network based on analysis by Ookla of Speedtest Intelligence data, 1H 2025.
Be honest, how many tabs do you have open right now?
Too many?
Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, doomscroll so you don't have to.
Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.
Everyone's cooped up in their house.
I will talk to this robot.
If you're a truly engaged activist, the government already has data on you.
Driverless cars are going to mess up in ways that humans wouldn't.
Listen to Close All Tabs wherever you get your podcasts.
Let's be real.
Life happens.
Kids spill.
Pets shed.
And accidents are inevitable.
Find a sofa that can keep up at WashableSofas.com.
Starting at just $699, our sofas are fully machine washable inside and out.
So you can say goodbye to stains and hello to worry-free living.
Made with liquid and stain-resistant fabrics.
They're kid-proof, pet-friendly, and built for everyday life.
Plus, changeable fabric covers let you refresh your sofa whenever you want.
Need flexibility?
Our modular design lets you rearrange your sofa anytime to fit your space, whether it's a growing family room or a cozy apartment.
Plus, they're earth-friendly and trusted by over 200,000 happy customers.
It's time to upgrade to a stress-free, mess-proof sofa.
Visit WashableSofas.com today and save.
That's WashableSofas.com.
Offers are subject to change and certain restrictions may apply.
I don't want people thinking that, like, an AI is, like, an oracular, perfect entity either, right?
You know, Paul Krugman had a thing about like
Grok a couple of weeks ago where he's like, well, if AI has a liberal bias, because reality has a liberal bias, right?
And I'm like, well, maybe on some things, but like, you know, and he's actually pretty skeptical of AI.
Like, that's not, you know, look at any algorithm.
And
AI, or an LLM, is probably too complicated to describe as an algorithm.
It has, like, some emergent behaviors, but still, directionally speaking, right?
Any algorithm is imperfect.
You know what I mean?
It can make bad predictions,
can reflect bad data.
And the AIs are that, especially if they're not fine-tuned,
which requires human input, right?
That's how they're fine-tuned.
They have like a bunch of people being like, yes, no, yes, no.
Then, I mean, I think garbage in, garbage out is not quite the right case.
But
directionally speaking,
that points to what we're talking about, which is that the inputs really, really do matter.
And I think the other thing is that there is this human bias to think that, oh, well, this is data, this is a machine, so it's more unbiased because it's not human.
And that's simply not true.
And if we have that bias to say, oh, well, this is objective truth because it's coming from, not a human, but a program, that ain't where it's at.
Like, that is actually a highly problematic way of thinking.
Yeah, and look, its objectives aren't clear, right?
I mean, in one sense, large language models are trying to minimize a loss function by accurately reflecting what's represented in the data.
On the other hand,
it's contradicted by the human feedback that it gets, right?
And so it's kind of trying to please its creators to whatever extent it can to the point of being kind of a sycophant in some cases.
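As a toy illustration of those two objectives pulling in different directions, with all numbers invented: the first loss rewards matching the statistics of the data, while a Bradley-Terry-style preference loss rewards matching what human raters preferred, whether or not the data agrees:

```python
import math

# Pretraining objective (sketch): cross-entropy of the model's
# next-token distribution against the corpus statistics.
data_prob = {"paris": 0.9, "london": 0.1}   # toy corpus statistics
model_prob = {"paris": 0.7, "london": 0.3}  # toy model distribution
xent = -sum(p * math.log(model_prob[t]) for t, p in data_prob.items())
print(f"pretraining cross-entropy: {xent:.3f}")  # ~0.441

# Preference fine-tuning (sketch): a Bradley-Terry-style loss that
# pushes the reward for the human-preferred answer above the other,
# regardless of which answer the raw data made more likely.
reward_preferred, reward_rejected = 1.2, 0.4  # toy reward scores
margin = reward_preferred - reward_rejected
pref_loss = -math.log(1 / (1 + math.exp(-margin)))
print(f"preference loss: {pref_loss:.3f}")  # ~0.371
```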
So, like, and, you know, I think people understand more about interpreting AI outputs than they did a year or two or three ago.
But it can be fragile.
You know, small things can
affect the entire
system, right?
It's complexity. The kind of origin of complexity theory really is well reflected in AI, where a butterfly flaps its wings in Beijing and there's a tornado in Texas, right?
You know, Elon puts one change to the system prompt in place.
It doesn't seem to matter.
But when you put that change in, and then you say, also, please make sure you're reading what Elon would write, right?
Those things combined can have a profound effect.
And this is just something that's visible.
Now imagine all the things that are invisible right behind the scenes, like tiny tweaks like that that we don't see because it's not actually visible in like your immediate LLM output unless you really try to kind of find it.
No, and we've moved away from transparency in general.
I mean, not just in Google's AI, Gemini, but, like, in Google search, there's a lot of intervention, right?
That if you search for occupations, just a regular Google search for photos of, like, doctors, it will be very conscientious to show, I mean, there might be more women doctors now, but it will show, like, doctors of all races and women doctors, right?
If you search for a hockey player, it'll make sure, I mean, there are some great Black hockey players, right? But it's a lot of white, Canadian, and European kids, right?
And so, you know,
yeah, the notion of, oh, the web is just a search, right, or large language models are just kind of a neutral presentation of a clever way to, like, matricize internet data and writing. I mean, it's never been that. And
the consensus has moved away from transparency, if anything, almost toward, like, well, if we're transparent, hyper-transparent, then I'll just give you more things to critique or to pick on. And so, sorry, you know, yep: if you like the model, then use it; if you don't like it, then use a different model instead.
Yeah. And there is this tension here once again, going back to what we were talking about with open source, right?
Like, there, there's definitely this tension between open source, and that's actually some of the good stuff of transparency, right?
That you can actually look at the inputs and actually try to figure out what's going on.
Like, that's one of the benefits of having open-source, transparent models, that people can kind of pick apart and play with.
We've talked about kind of the risks, you know, P(doom), et cetera, et cetera, but there are benefits, and transparency is certainly one of them.
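As one concrete example of what that buys you, here's a minimal sketch using the Hugging Face transformers library and GPT-2, an older, fully open model (it assumes transformers and torch are installed and will download the weights on first run): with open weights you can pull the parameters down and inspect them directly, which a closed model behind an API doesn't allow:

```python
from transformers import AutoModelForCausalLM

# Download the open weights and poke at them directly.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Count parameters and peek at the first few layers; none of this
# is possible with a closed model served only through an API.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")
for name, param in list(model.named_parameters())[:5]:
    print(name, tuple(param.shape))
```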
Yeah, there have been many times when we've had models of different kinds running, right?
And people are like, this seems wrong, right? Why is this poll weighted a certain way? And, like, yep, 80% of the time it's part of the process, but 20% of the time they've caught something, for sure, right? And even,
even for myself, if I have a model, making sure I output different data at different stages to evaluate it.
If you don't catch a problem early on when you're building something complex, you know, like this NFL thing I'm working on, right? There are lots of component parts. It'll eventually be a couple thousand lines of code, none of which individually are that complicated.
But if, like, if there's one step that's wrong, it can infect the entire system, right?
And if you're smart, you're building robustness and redundancy.
You know, every major computer program has bugs, every model has bugs, right?
But like, but like outputting that and being like, I want to visualize this, I want to look at the right inputs, right?
I want to look at edge cases that are important.
Like, that's an important thing to do.
Yeah, it absolutely is.
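Here's a minimal sketch of that habit in Python: dump and sanity-check intermediate outputs at every stage of a multi-part model so one bad step can't silently infect everything downstream. The stage names, ranges, and numbers are all invented for illustration:

```python
def check(stage: str, values: list[float], lo: float, hi: float) -> list[float]:
    """Assert every intermediate value is in a plausible range,
    and print a summary so a human can eyeball it."""
    assert all(lo <= v <= hi for v in values), f"{stage}: value out of range"
    print(f"{stage}: n={len(values)}, min={min(values):.3f}, max={max(values):.3f}")
    return values

# Toy three-stage pipeline (say, ratings -> adjustments -> win probabilities).
raw = check("raw_ratings", [1500.0, 1620.0, 1480.0], 1000, 2000)
adjusted = check("adjusted", [r + 25.0 for r in raw], 1000, 2000)
probs = check("win_probs", [0.5 + (a - 1550) / 1000 for a in adjusted], 0.0, 1.0)
```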
And, you know, to kind of defend open source,
that's one of the big things about data integrity just everywhere: open source is good, right?
So there was a huge crisis in academia with, you know, cheating, with fabricated data, you know, with papers that were published in important journals with big results that ended up being manipulated.
And so there's more and more of a movement toward, you know, not just pre-registering your study, but sharing your data, right?
Like actually sharing the data sets so that people can look at what the source material was.
And, you know, like one of the biggest scandals that we talked about briefly on the show with Dan Ariely
and Francesca Gino,
they, you know, showed that some of those data sets were manipulated, but that was only after they were forced to provide the data sets, you know, after some eagle-eyed researchers were like, hey, what we're seeing in the paper doesn't make sense.
And sometimes it's willful manipulation and sometimes it's just a mistake, right?
There have been some, like there was one very famous scientific study where they fucked up an Excel sheet, where they, by mistake, moved the rows down by one.
And that just, it was horrific, because this was a medical study, right?
So actually tiny things like that, it's incredibly important to have access to the raw data to be able to look at this, to manipulate it on your own.
People can really spot both malfeasance and just human error, which will happen.
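Here's a toy reconstruction of that failure mode in Python with pandas, and the kind of basic integrity check that shared raw data makes possible; the data frame and values are invented:

```python
import pandas as pd

# Source-of-truth file vs. the analysis copy where the response
# column was accidentally pasted one row down.
raw = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                    "response":   [0.11, 0.19, 0.31, 0.42]})
analysis = raw.copy()
analysis["response"] = analysis["response"].shift(1)  # the silent bug

# 1) Re-join against the raw file and count disagreements.
merged = raw.merge(analysis, on="patient_id", suffixes=("_raw", "_used"))
mismatches = (merged["response_raw"] != merged["response_used"]).sum()
print(f"{mismatches} of {len(merged)} rows disagree with the raw data")

# 2) Missing values where none should exist are a telltale sign.
print(analysis["response"].isna().sum(), "unexpected NaN(s)")
```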
And with AI, we'll have both human error and computer error, right?
As we're having AIs do coding for us, you know, vibe coding, all of that.
There might be errors there.
We need people capable of looking at it, debugging, and trying to figure out, hey, this seems like it's working great, but actually it's not.
No, a well-designed model, including a large language model, is kind of like an airplane, right?
There are like multiple redundant systems.
There are different levels of safety and safeguards.
And like that's hard to do.
It requires a lot of work, an expensive budget, and being an actually good programmer.
Yep.
One of the other major things that I saw in this action plan that I think is actually quite important, and again, there are arguments on both sides, is, like, one of the big bottlenecks to AI development is energy, right? Because AI just has a huge energy cost.
And access to data centers, like, that's actually been a big thing, where data centers, you know, they've tried to bypass the power grids, there's been pushback, et cetera, et cetera.
So one of the things that this plan tries to do is say, let's eliminate all the barriers and let's try to use all the energy we have available for AI.
Yeah, I mean, look, I think this is like a little overclaimed, right?
If you look at, like, the amount of actual wattage required to run a ChatGPT response, it's, like, I don't know, it's a fairly bullshitty claim based on the present capabilities of AI.
So, Nate, let me just push back a little bit because there's a related issue, which is the availability of water to cool these data centers.
And we have already seen that there has been, you know, there have been instances of wells, town wells running dry.
There's been instances of construction having to be paused because
of basically water shortage issues.
So I think that that's a related concern that we should have just discussed.
I think you're getting this from low-key eye, Maria.
Maybe, maybe. Grok, what do you have to say about this?
I'm skeptical of it. I think it is a big deal. It's not, relative to other uses of electricity, though; it's a small piece of the pie.
And now, granted, Sam Altman et al. want it to be bigger, right?
Right.
Yeah.
And I, you know, it's hard for me.
I don't have the numbers in front of me.
So I'm not going to say something and have it not be true.
So I'm just, I'm not going to engage that point because I just honestly don't know.
However, it is something that this bill is trying to address, trying to give AI companies whatever resources they need to access power grids, access energy grids, so that data centers can be built on federal land, and kind of allow innovation to proceed without there being an energy bottleneck.
I think that that's kind of objectively what this bill is trying to accomplish.
Yeah, look, I mean, by the standards of Trump, it's a pretty serious document.
The one good thing about their alliance, maybe not the only good thing, right, with the kind of tech right, is that they do, I think, have like
competent people in place who might have different positions on AI, but, like, this isn't an amateurish document.
It's thoughtful.
It involves politics, but like there was some care put into this.
Yeah, for sure, for sure.
Very unlike Trump in most ways.
I agree.
I agree.
Reading it, you know, obviously you can see the Trumpian language in parts of it, but
there are nuanced points here where it will be interesting to see how, if at all, any of this is enforced and kind of what
ends up happening.
I think we'll just have to see over the next six months, one year,
and see how a lot of these provisions shake out.
By the way, there are a lot more provisions.
We're not going to talk about the whole thing.
Nate, as you said, Zvi has a good Substack post on this.
So for people who want to know more, I think we'd both encourage you to read that.
But yeah, all in all, you know,
some things where you're like, okay, yeah, this makes sense.
Some things that are a little, you know, eyebrow raising,
not enough care shown to, or not care, that's the wrong word.
But I think we're seeing speed over safety in a lot of this.
And how do we facilitate speed instead of safety?
So I think that that, to me, is the big thing to kind of watch out for and to keep an eye on.
Thanks for listening.
We're taking next week off and we'll be back in your feeds on August 14th.
As always, we have some additional content for premium subscribers who also get ad-free listening across Pushkin's entire network of shows.
This week, we're answering a listener question about how to teach poker to kids.
That's coming up after the credits, so you still have time.
Subscribe now for just $6.99 a month.
Let us know what you think of the show.
Reach out to us at riskybusiness at pushkin.fm.
Risky Business is hosted by me, Maria Konnikova, and by me, Nate Silver.
The show is a co-production of Pushkin Industries and iHeartMedia.
This episode was produced by Isabel Carter.
Our associate producer is Sonia Gerwit.
Sally Helm is our editor, and our executive producer is Jacob Goldstein.
Mixing by Sarah Bruguer.
Thanks so much for tuning in.
You've probably heard me say this.
Connection is one of the biggest keys to happiness.
And one of my favorite ways to build that, scruffy hospitality, inviting people over even when things aren't perfect.
Because just being together, laughing, chatting, cooking, makes you feel good.
That's why I love Bosch.
Bosch fridges with VitaFresh technology keep ingredients fresher longer, so you're always ready to whip up a meal and share a special moment.
Fresh foods show you care and it shows the people you love that they matter.
Learn more, visit BoschHomeUS.com.
Lilly is a proud partner of the iHeartRadio Music Festival for Lilly's Duets for Type 2 Diabetes campaign that celebrates patient stories of support.
Share your story at mounjaro.com slash duets.
Mounjaro (tirzepatide) is an injectable prescription medicine that is used along with diet and exercise to improve blood sugar, glucose, in adults with type 2 diabetes mellitus.
Mounjaro is not for use in children.
Don't take Mounjaro if you're allergic to it, or if you or someone in your family had medullary thyroid cancer or multiple endocrine neoplasia syndrome type 2.
Stop and call your doctor right away if you have an allergic reaction, a lump or swelling in your neck, severe stomach pain, or vision changes.
Serious side effects may include inflamed pancreas and gallbladder problems.
Taking Mounjaro with a sulfonylurea or insulin may cause low blood sugar.
Tell your doctor if you're nursing, pregnant, plan to be, or taking birth control pills, and before scheduled procedures with anesthesia.
Side effects include nausea, diarrhea, and vomiting, which can cause dehydration and may cause kidney problems.
Once-weekly Mounjaro is available by prescription only in 2.5, 5, 7.5, 10, 12.5, and 15 milligram per 0.5 milliliter injection.
Call 1-800-LILLY-RX (1-800-545-5979) or visit mounjaro.lilly.com for the Mounjaro indication and safety summary with warnings.
Talk to your doctor for more information about Mounjaro.
Mounjaro and its delivery device base are registered trademarks owned or licensed by Eli Lilly and Company, its subsidiaries, or affiliates.
When you buy business software from lots of vendors, the costs add up and it gets complicated and confusing.
Odoo solves this.
It's a single company that sells a suite of enterprise apps that handles everything from accounting to inventory to sales.
Odoo is all connected on a single platform in a simple and affordable way.
You can save money without missing out on the features you need.
Check out Odoo at odoo.com.
That's odoo.com.
This is an iHeart podcast.