What’s the Optimal Personality for AI?
This week Nate and Maria discuss the release of GPT-5, the latest model from OpenAI. This model promises to be faster, smarter, and more useful while also reducing hallucinations and sycophancy. It also lets users choose among different AI “personalities.” What do Nate and Maria think so far?
Then, they turn to the newly inked Nvidia trade deal, which notably includes a 15% cut of sales to China for the US government.
Further Reading:
Ethical Issues in Advanced Artificial Intelligence by Nick Bostrom, 2003
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, 2014
For more from Nate and Maria, subscribe to their newsletters:
The Leap from Maria Konnikova
Silver Bulletin from Nate Silver
See omnystudio.com/listener for privacy information.
Transcript
This is an iHeart podcast.
On Fox One, you can stream your favorite news, sports, and entertainment live, all in one app.
It's f-ing raw and unfiltered.
This is the best thing ever.
Watch breaking news as it breaks.
Breaking tonight, we're following two major stories.
And catch history in the making.
Gabby, meet Freddy.
Debates,
drama, touchdowns.
It's all here, baby.
Fox One.
We live for live.
Streaming now.
In today's super competitive business environment, the edge goes to those who push harder, move faster, and level up every tool in their arsenal.
T-Mobile knows all about that.
They're now the best network, according to the experts at Ookla Speedtest, and they're using that network to launch Supermobile, the first and only business plan to combine intelligent performance, built-in security, and seamless satellite coverage.
That's your business, supercharged.
Learn more at supermobile.com.
Seamless coverage with compatible devices in most outdoor areas in the U.S.
where you can see the sky.
Best network based on analysis by Ookla of Speedtest Intelligence data, 1H 2025.
When you buy business software from lots of vendors, the costs add up and it gets complicated and confusing.
Odoo solves this.
It's a single company that sells a suite of enterprise apps that handles everything from accounting to inventory to sales.
Odoo is all connected on a single platform in a simple and affordable way.
You can save money without missing out on the features you need.
Check out Odoo at odoo.com.
That's odoo.com.
Pushkin.
Welcome back to Risky Business, a show about making better decisions.
I'm Maria Konnikova.
And I'm Nate Silver.
So today on the show, Nate, we've got a pretty interesting risky business-y news week.
We've got GPT-5 being released, which
is maybe less of a deal than the launch hype announcements were, but still a big deal.
And then we have some interesting trade stuff going on with AI chips, right?
With NVIDIA.
Interesting is an interesting way to experience it.
Yes.
Well,
we've got GPT.
You know, there have been five GPTs since the last GTA, Grand Theft Auto.
I'd throw that in there.
Fascinating.
Well,
before we get into it, Nate, I just wanted to say, you know, we're taping this on the 12th of August, Tuesday.
So a few days before listeners will hear it.
But congratulations.
Today is the launch of the paperback of On the Edge.
On the Edge: The Art of Risking Everything, a bestseller by groundbreaking author Nate Silver.
Yeah, so the paperback is out.
There's a new foreword, or I think it's called a preface, technically.
I was just at Barnes & Noble signing some copies, a little sweaty.
I walked here from there.
It's hot in the middle of August in New York, breaking news.
But yeah, no, I think the book holds up really well.
It covers a lot of really risky business-esque topics.
And, you know, it's a big book, cheaper now with paperback.
It fits better on a shelf, not quite as thick.
And there's new content.
So I would strongly recommend, of course.
I mean, a little biased here, but like, thank you, Maria.
Of course.
Well, I'm excited to read the new preface.
And yeah, I definitely recommend the book to everyone.
I will try to repost the review I did of it on my Substack so that people can get reintroduced to it one more time.
Anyway, it's a fantastic book.
Congrats, Nate.
And let's get into some Riverian topics like the release of GPT-5 from OpenAI.
Nate, have you had a chance to use GPT-5 yet?
Well, you don't have much choice.
You know, they kind of steer you into GPT-5 if you, and at first I'm like, oh, okay, I guess I pay for the pro plan.
And so I'm like, oh, wow, I'm one of the privileged few.
And now I don't think there's an easy way to get back into all the old family of GPT products that we knew and loved before.
Like, people did get attached to the different models for different reasons.
And so now it kind of is implicitly trying to figure out what you want, which is kind of part of, I mean, it's interesting, right?
So on the one hand, you know, I kind of made the joke before about kind of comparing it to like a video game release, but anything new, if we release a new presidential model or something, right?
You know, anything new might have some kinks and bugs.
I mean, as much as you might say, okay, we have to have it perfect before it ships.
I don't think that's practical because, like, most people are going to only learn things.
I mean, you know, I think they did discover that it's not doing what, like, Grok did and calling itself Hitler, for example, right?
Um, yay.
I mean, I don't know if we could really call that a win.
Like, that seems to be like a baseline for like Google Gemini drawing multicultural Nazis, you know, so the bar is pretty low.
But no, I think it has a little bit of new car, new model kind of smell a little bit.
The first thing I asked it to do was, my partner and I are planning a trip to, you know, Scandinavia, or the Nordics, technically, right?
We're like, all right, we want to visit these cities, give us an itinerary, right?
And it like thinks
and thinks and has something that seemed plausible to me, very detailed, right?
And then I'm like, email it to me as a PDF.
And it like freaks out, right?
It like shows some complicated like, you know, code, Python code.
And it's like, I don't know how to do this.
I don't know how to do this.
So, you know, it's, it's up and down.
Well, hey, at least it gave you a plausible itinerary.
I, so
we've been warned.
We were given an explanation by Sam Altman because I tried to test it, you know, when it was just released.
And apparently the auto switching tool was down.
So it seemed like it was a lot dumber than it was supposed to be.
So one of the things that they're kind of really touting on this model is that it automatically knows what you want, right?
Whether it should think deeply or not, which does not actually seem to be the case even now, when the switch is back.
Maybe it's gender.
It's like, hey, we got a chick here.
I don't think we need too much deep thinking.
She probably wants a quick answer, get back to the cooking.
That's exactly.
Honestly,
that's been my experience.
And one of the first things I actually tested it on, because it says that, you know, one of the things that it's much better on is hallucination.
So I actually gave it a psych question and asked for some sources and, like, papers, and it unfortunately still hallucinates when you go down that route. Because, so, I'm writing a piece this week about kindness contagion, you know, when someone is nice, how does that spread? And so I asked it some questions about the research in that field. I didn't really need its help, in the sense that I know the field pretty well, but I was just curious to see what it would come up with. And it gave me some good stuff, but it also gave me some stuff that just simply does not exist. And I know, you know, as you say, and as a lot of people say, you need to know what to ask it, but the reason I tested this specifically was because one of their big claims was basically no more hallucination, which is not true.
Yeah, I've used it for, you know, I almost always put the articles I'm writing for the newsletter through a copy edit and fact check in the GPT models or Claude. Sometimes I think this is one of the most useful things that AIs are good with.
It took seven and a half minutes for what, for Nate, is a relatively short article, 700 words or something, right? I am using, like, the thinking version, right?
Did you tell it to use the thinking version or did it?
I think I was on thinking by default, right?
And usually I say have a high threshold and I didn't say that, but it was really nitpicky.
It's like, you know, I made some line about like
Elon Musk is like tweeting out like anime
smut was a term I used, right?
I already toned that down from porn and it's like, you should be more careful.
You should say NSFW images.
Smut implies an editorial stance.
You know, so it's like it was kind of nitty, which is a poker term, right?
And like, I um
I had it actually, a friend sent me a poker hand, because I wrote before about how ChatGPT is bad at poker, and it played the first hand really well.
And then I'm like, okay, simulate more hands, and then it was still kind of not great, maybe a little better, right?
Yeah, no, it's a little weird because I don't, look,
to me, you kind of had 4.5 and you had like o3 and o1, right?
Like, I noticed in the spring, I thought some of the reasoning quote-unquote models for the type of stuff I'm doing, which is not using GPT as a chatbot, right?
It's like a work aid, you know, I thought there was improvement then.
And I kind of feel like there's a bit less this time.
You know, also the OpenAI models, or some of them, are kind of slow, right?
I've found sometimes like
Claude or Gemini or even Grok will like spit out things faster.
And so, you know.
And sometimes that matters, by the way.
Like sometimes you want it to take time, but sometimes you're like, okay, come on, let's get moving.
And yeah, that's it.
Yeah, no, it can get me a simple, you know, and often like in a fact check.
I mean, today I had to go to the freaking Barnes & Noble, right?
And so like the time pressure is often relevant, especially if you know, and oftentimes I also use the models a lot for like little programming tasks, right?
Like I forget the programming language I use is called Stata.
Some people would say it's kind of a fake language, but you know, it's a real
language, right?
And I'm like, I forget how I do this in Stata. Or, full disclosure,
even in Excel, I'll be like, oh, God, what's the complicated formula for this?
Right.
And like, it seems to me to be 70 to 80% reliable, the AI models in general for like, you know, where I want a snippet of code that's like, you know, three to 10 lines long, right?
You can kind of vibe code, right?
You're just like, I want to do this, right?
And it's like, you know, it's pretty smart about it.
I mean, every now and then they won't work.
And, you know, I always tell people like, these things screw up when they're trying to chain together different steps.
And they don't really quite, I mean, they're trying to train themselves, right?
But they don't quite know how to stop, right? And so it's like, okay, when I build a model, working on an NFL model, um,
you know, I stop at every point and ask, okay, does this output make sense? You perform a bunch of complicated operations, sort from the bottom to the top, right? You know, hopefully Tom Brady is listed as one of the best quarterbacks, right? And Ryan Leaf is one of the worst. But like, yeah, I'm keeping myself in the loop. And for that kind of thing, it's like, okay, I'll just save 15 minutes trying to figure out how to program this, or you can debug, right?
You're like, why isn't this working?
I think that time I used Claude.
And it was like, because you made a typo.
You misspelled a word, Nate.
That's why. You're tearing your hair out for 40 minutes, and it's like, you misspelled a word.
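(A rough sketch of the kind of small, vibe-coded snippet plus human sanity check Nate describes. It is illustrative only; the file name and column names are made up, not his actual NFL data.)

# Hypothetical small snippet with a human-in-the-loop check after the transformation step.
import pandas as pd

qb = pd.read_csv("qb_seasons.csv")            # made-up file: one row per quarterback season
ratings = qb.groupby("player", as_index=False)["rating"].mean()
ratings = ratings.sort_values("rating", ascending=False)

# Sanity check before moving on: eyeball the top and bottom of the sort.
print(ratings.head(10))    # expect names like Tom Brady near the top
print(ratings.tail(10))    # expect names like Ryan Leaf near the bottom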
So yeah, no, look, I think it's kind of halfway a branding
exercise or some people seem to vouch for this a lot.
I don't know.
Well, I, you know, obviously, like, I don't code, right?
And people have said that it seems to be a lot better at coding certain things, which great, you know, if it is great.
But what you're saying, actually, like 70 to 80 percent, to someone like me, that makes me actually much less likely to use it, because I'm not you.
I don't have that background.
So I can't do, I can't always do like a check to figure out, you know, does this make sense, right?
In the sense that I don't have that technical base.
I can't review it in any real way.
And so I need it to be accurate, right?
Because I don't trust myself to spot any potential inaccuracies, I would say, that it spits out.
The old way is to look at a manual or, like, Stack Overflow or whatever.
And
there you're sorting through a lot of crap too, right?
It might not be pertinent to your particular case or it might be an old version of the software.
Or in Stata, there are lots of little fussy language things with local variables and scalars and what all those things mean and what the different rules are and stuff like that, right?
Slightly fussy language.
And it's good at handling that kind of thing.
And again,
to me, I'm working in ways where it's fairly failure-proof, right?
You're doing one thing, you have an expectation for what that will do to transform the data set, right?
And if it doesn't happen, then it won't work anyway, right?
But like the notion of like, I'm just going to sit back here and trust it to do all these things.
I mean, I think it's probably, you know,
to code an entire NFL model, which involves a lot of original research and data collection and involves a lot of like
knowledge about the sport, knowledge how to build models, right?
A lot of trial and error.
Like, you know, I don't think the AIs are particularly close to doing that kind of work.
So I think that you just made a really important point, which is to just not expect the world from it and to
know what it can and can't do, which by the way, already takes a certain user intelligence and like knowledge to say, okay, you know what?
I don't trust it to do this, but I do trust it to do that.
So there is, you know, even though GPT-5 was kind of hyped as, you know, you don't have to think anymore, you still actually kind of do, right?
In order to get the the outputs that you want and to realize, okay, I can trust this task, but not this task.
I can get it to do this, but not that.
And I think that that, you know, that human in the loop is still very much a thing and still very much needs to be a thing.
Yeah, I just kind of keep like a little running mental tracker of like, here are my expectations for
AI models, LLMs, large language models, and are they exceeding those or
falling short, right?
I mean, and they do a weird thing.
You know, like I had a situation where like I had a bunch of latitudes and longitudes of NFL stadiums that we'd code up quickly, right?
Um, and we're like, I'm gonna reverse look these up and tell me what city they're near, right?
As a double check. And it, like, put Atlanta as, like, Chattanooga. And, you know, so for things like that, because for data, I want all my data to be perfect, right?
I don't want it to misattribute one city. It doesn't really matter if they have the, you know, Falcons playing in Chattanooga, it doesn't matter that much, right?
And like, and so for that kind of thing, um,
you know, I still would rather have my
research assistant do it or me do it myself, right?
Um, things that require like a lot of, but you know, it's very fussy.
I use them enough where I have like particular rules where I think they're likely to be helpful or not.
And how, how safely can you fail and things like that.
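(A rough sketch of that reverse-lookup double check, with a toy gazetteer and illustrative coordinates rather than Nate's actual data:)

# Snap each stadium's coordinates to the nearest gazetteer city and eyeball the result,
# instead of trusting a model's city labels blindly.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

gazetteer = {"Atlanta": (33.749, -84.388), "Chattanooga": (35.046, -85.310)}
stadiums = {"Mercedes-Benz Stadium": (33.755, -84.401)}   # Falcons' home field

for name, (lat, lon) in stadiums.items():
    nearest = min(gazetteer, key=lambda city: haversine_km(lat, lon, *gazetteer[city]))
    print(name, "->", nearest)   # should print Atlanta; Chattanooga would be a red flag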
But like, yeah, I mean, my general view is that they're getting kind of savant-like, in that they're not very bright about some things and they're freaking geniuses about others.
As opposed to this notion of, like,
general intelligence. I mean, but, you know, look, even the things it's worst at, it's like as good as, like, a high school sophomore or so. You know, I mean, it's not terrible, and there aren't too many things where it's terrible, right?
Yeah. Well, I think it really depends on what you're asking it. You know, one of the main things that I've read about people responding to, which highlights an issue that, you know, Sam Altman was like, oh, we didn't realize how big of an issue this was, which I think is very interesting, because people have been trying to say it's an issue, is the change in voice, right?
The fact that past models were very sycophantic.
You and I were talking before taping today and I was like, we should really be pronouncing this psychophantic because
there's been some real psycho behavior here.
And when they introduced GPT-5,
the...
default tone of voice was very different, right?
They did try to address this, and then they got, within not even, it didn't even take 24 hours, just like immediately they got all of this pushback, with people saying, no, you know, I've lost my boyfriend, I've lost my best friend, I've lost the person who told me I was a genius. And it makes you realize how many people were using this really not for what it's intended, and in ways that can be incredibly bad for a lot of things, right? Mental health, just social connections, all of these things. You know, people were like, whoa, whoa, whoa, what happened to my significant other? And
they brought it back, right?
So now you can actually select that voice again.
We have the default voice, but we also have, you know, the listener voice.
And
there are a few other voices.
None of their descriptions actually map on to what that actually is.
I was reading, there was like...
a cynic, and I don't even remember what they said the cynic voice was, but I was like, that's not what a cynic is. They're really weird descriptors. But basically,
you can get GPT to interact with you in different voices.
And I, you know, my
reaction to that is like, you shouldn't always give the people what they want in a lot of ways.
Like, this was bad.
And like, you fixed it.
Like, don't go, don't go unfixing it because even though you fixed it, they didn't fully fix it, right?
They just made it less overt, which, you know, subtle sycophancy can also be bad.
But they've tried, they at least initially tried.
And now they've really gone back on that immediately, caving to pressure.
And if you always give the public what it wants, like we've talked about P-Doom, and like a lot of people want things they really should not be getting.
No, it's pretty hard, kind of in an equilibrium, like not to optimize for
what drives engagement in the short run.
I mean, you know, on the one hand, with the exception of some that are capitalized enough not to need revenue like right away, but like, yeah, no, I think sometimes these models, like
you give them an inch and they take a mile with it, right?
Like if you look at what happened with Grok when it had its MechaHitler
moment, the prompts that, system prompts that Elon, or, ahem, an unnamed engineer at xAI
was using were like not that wild, right?
But like
if you go down a rabbit hole, you keep kind of getting like reinforcement feedback.
This is good, this is good.
And, you know, you know, I, so OpenAI's models used to have this thing where, did you like answer A or answer B?
So they're now outsourcing it to some of their users, right?
And like,
yeah, I mean, look,
you know,
what are you kind of optimizing for objectively, right?
It's kind of easier when you are trying to train it on like a math problem where there's an objectively correct answer, right?
You know, the NFL model I'm trying to build at the end of the day, how accurate is it to predict NFL games, right?
That's the bottom line.
You actually have a metric.
Right.
And for, you know, for feedback or goodness of an answer that's more subjective, then it's a lot trickier.
Right.
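(The show doesn't say which metric Nate uses; a Brier score is one common way to put a single number on probabilistic game predictions. A minimal sketch with made-up numbers:)

# Brier score: mean squared error between predicted win probability and the 0/1 outcome.
# Lower is better; always guessing 0.5 scores 0.25.
predicted_home_win_prob = [0.75, 0.60, 0.20]   # illustrative model outputs
home_team_won = [1, 0, 0]                      # illustrative actual results

brier = sum((p - y) ** 2 for p, y in zip(predicted_home_win_prob, home_team_won)) / len(home_team_won)
print(f"Brier score: {brier:.3f}")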
You know, I think we've seen with some of the, you know, some of the reason that like
Grok hasn't had as much reinforcement learning training, right?
Or
I mean, you see that, right?
You know, their words are pretty rough.
I'm mixing metaphors here.
And yeah, I mean, it's a weird technology.
I think people understand more about how these models work than before.
And like, by the way, like one reason to be optimistic, unless you're a doomer, I guess, is just like the amount of human and other capital being poured into AI research is like, you know, quite something, right?
It wouldn't surprise me if these companies start saying, oh, we have a lot of smart people.
Let's do kind of spin-off technologies and energy or quantum computing or whatever else, right?
Yeah, no, I think that there's so much promise, but I think that with this particular thing, the incentives are misaligned, at least for now, which you were kind of hinting at, right?
In the sense that, sure, like they don't necessarily necessarily need it, but if people, if this is one of the things that fuels revenue growth, right?
That people want to feel not lonely, they want, you know, someone who's kind of reinforcing their ideas, that they interact more, which is good, right?
If you're actually paying more in order to be able to kind of spend more time with the model, and they're interacting more when they feel like this is my girlfriend, this is my boyfriend, this is, you know, my best friend, this is my counselor, my psychiatrist, you know, whatever it is, my teacher.
There was a guy who was, who Kashmir Hill just wrote about in the New York Times, who
believed that he'd created a new mathematical theory, right?
That solved everything.
And like, and tried to, actually, he tried to fact check the delusion, which was crazy.
He's like, I feel like I'm sounding crazy.
And the AI was like, no, you're absolutely not crazy.
You're the most, everyone else is crazy, right?
Like you're sane.
So it was one of these very strange kind of things where he went in, by the way, his first question was he wanted it to explain how you got the value of pi, because this was someone who never finished high school and his son needed help with homework.
And so he just asked ChatGPT about pi.
That was the start of this insane rabbit hole.
And as, and I'm not, I mean, we're talking about ChatGPT because of GPT-5 release, but this doesn't just apply here.
We're just talking about, you know, that's LLMs in general probably will be susceptible to this.
I'm not sure, right?
But because all of these examples are from ChatGPT.
But, you know, you have this innocuous question that then leads someone to a very detrimental spiral.
And it's not a standalone case, right?
And now we know how many cases there were below the radar where nothing, nothing quote-unquote bad, happened yet.
There were a lot, given the outcry when the model shifted.
And I don't think, honestly, like, do we really trust a company that's clearly profit-driven? We know, I mean, Sam Altman's made
companies profit-driven. I know, I know, it's crazy. It's like gambling in a casino.
But do we really trust a company that's profit-driven, right, that's bottom-line-driven, to,
sure, they will fix all these other problems if it's good for them, but do we trust them to fix these things that are really undermining mental health?
We've seen that Meta, you know, Facebook, all of these, none of them have, right?
For years.
They've known that there are issues that have existed and they haven't addressed them.
I'm not as convinced about this particular problem.
I mean, because first of all, what it's substituting for, is it substituting for Twitter or Reddit forums or some other dark corner of the internet, potentially?
No, because from a psychological standpoint, there's immediate reinforcement and a conversation that goes back and forth, which is very, very different for the brain than like saying something and then on Twitter, you get immediate feedback.
Not a lot of people.
But it's not quite the same thing.
Just from a psychological standpoint, it can be much more pernicious when you feel like you're talking to an actual person and personalities do start developing.
I mean, what do you, like,
obviously this is not ideal.
What do you think is ideal, right?
If you're interacting with ChatGPT, what personality do you want?
Me personally?
I want no personality whatsoever.
I just want it to give me the damn facts.
You can give it custom instructions, right?
Which is like appended as, so I tell it like,
be honest and straightforward, cite sources.
It's fine to speculate, but if you are speculating, label it as speculation, that kind of thing.
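(In the ChatGPT app, custom instructions like these are set in the settings; if you use the API instead, the rough equivalent is a standing system message. A minimal sketch, assuming the current openai Python client, with the model name as a placeholder:)

# System-message version of the kind of custom instructions Nate describes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-5",  # placeholder; use whatever model you actually have access to
    messages=[
        {"role": "system",
         "content": "Be honest and straightforward. Cite sources. "
                    "Speculation is fine, but label it as speculation."},
        {"role": "user", "content": "Fact-check this draft paragraph for me."},
    ],
)
print(response.choices[0].message.content)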
Right.
And it sometimes lies about that, too.
Like when I say give me sources and it lies about the sources.
And you say, hey, the source doesn't exist.
And it says something like, you caught me.
And I'm like, okay, well,
you're...
ostensibly following instructions, but you're not really.
And I actually don't know.
I didn't try the you caught me thing with GPT-5.
I don't know if it's still going to do kind of a version of that.
But Nate, I mean, if we're looking at the reliability of these models, right, you had mentioned the Chattanooga example, right?
Where it thought that Chattanooga and Atlanta were interchangeable, basically.
Well, Atlanta wasn't in my gazetteer file.
I forgot about Atlanta.
It's only the largest city in Georgia, but I fixed it now.
Right.
But if it's getting things like Chattanooga wrong, like you start questioning
how good its output was on other things.
Yeah, and if there are, you know, whatever, 400 rows of data and it screws up five.
That's a lot.
And the analogy here is something like,
okay,
if you take the subway in New York, you don't have to look up the timetables because it's rarely more than a five or seven minute wait for a train, right?
You just go in the station.
Unless it's the B, in which case you'll be waiting forever.
Sorry, B fans.
I'm on the L now.
It's a real adult train right there.
The B stands for bullshit train.
Bullshit train.
Okay.
And that's when you, it's part of your continuous workflow, right?
ChatGPT is more like you go to, like, the Amtrak station and you have to plan around it a little bit, right?
It's not, it's, you know, it's not kind of the bonus productivity from
kind of turning your brain off and
just having it handle the work, you know.
With that said, you know, I've come home
nights when I'm tired or busy or had a couple glasses of wine, right?
And it sure is nice then to be like, oh, I forget the stupid fucking command in Stata, and you just tell me what it is, right?
But I'm still using it in like a piecemeal way.
And we'll be back right after this.
In today's super competitive business environment, the edge goes to those who push harder, move faster, and level up every tool in their arsenal.
T-Mobile knows all about that.
They're now the best network, according to the experts at Ookla Speedtest.
And they're using that network to launch Supermobile, the first and only business plan to combine intelligent performance, built-in security, and seamless satellite coverage.
With Supermobile, your performance, security, and coverage are supercharged.
With a network that adapts in real time, your business stays operating at peak capacity even in times of high demand.
With built-in security on the first nationwide 5G advanced network, you keep private data private for you, your team, your clients.
And with seamless coverage from the world's largest satellite-to-mobile constellation, your whole team can text and stay updated even when they're off the grid.
That's your business, Supercharged.
Learn more at supermobile.com.
Seamless coverage with compatible devices in most outdoor areas in the U.S.
where you can see the sky.
Best network based on analysis by Ookla of Speedtest Intelligence data, 1H 2025.
Did you know using your browser in incognito mode doesn't actually protect your privacy?
Take back your privacy with IPVanish VPN.
Just one tap, and all your data, passwords, communications, browsing history, and more will be instantly protected.
IPVanish makes you virtually invisible online.
Use IPVanish on all your devices.
Anytime you go online at home, and especially on public Wi-Fi, get IPVanish now for 70% off a yearly plan with this exclusive offer at ipvanish.com/audio.
So, what do this animal
and this animal
and this animal
have in common?
They all live on an organic valley farm.
Organic Valley dairy comes from small organic family farms that protect the land and the plants and animals that live on it from toxic pesticides, which leads to a thriving ecosystem and delicious, nutritious milk and cheese.
Learn more at OV.coop and taste the difference.
All this might be kind of
not bad for AI safety, right?
Like, I kind of think, like,
yeah, well, I think it depends.
So I don't know the nitty-gritty of what the improvements were, but some of the reviews that I read,
some have said that the lack of transparency into kind of what's being used and how it's being used is actually potentially not great.
That before you could query much better to actually figure out, you know, what processes were being used, what models were being used, et cetera.
Now you can't.
And that if some coding gets to a certain point, that it can do like some self-replicating things.
And this is kind of what we talked about briefly when we were talking about the AI 2027 report, that some of those things are potentially
becoming closer to being a reality.
Yeah, look, I mean, this is almost another show, right?
But like, you know, humans currently have an important role, or a couple of important roles in the process, right?
One of which is to provide the corpus, all the text that the models are trained on.
The second of which is with reinforcement learning.
And like, again,
math problems are a weird exception in that you kind of, you kind of know the answers, right?
There's an objectively correct answer.
It's just really hard to figure out, right?
And a human can say, oh, that's correct, right?
With other things, it's trickier.
If you're, if you're, you know, wanting to patent some novel protein that could be used in drug discovery, right, then you've got to test that and make sure it actually works, right?
And like, so if you don't have human reinforcement learning, if you're trying to train on corpuses that are beyond the, I mean, again, there are cases where you can extrapolate and get 50% better than the best human or
200% better, right?
The notion of like an explosion of super intelligence, I think, you know, I mean, these things still can't book a...
fucking Delta flight to Chicago, right?
Probably by the end of the year they can.
By the way, by the way, that talking about like security risk, this isn't P-Doom, but like personal security risk.
As it becomes better at doing that, the chance of someone hacking and like actually being able to insert some malicious code without your knowledge so that you end up, you know, so that they can steal credit card information, et cetera, et cetera.
Don't underestimate.
I'm not talking about you, but I think we as a society shouldn't underestimate
the nefarious hacks.
Google.
You know what I mean?
It's crazy.
And that is available somewhere, right?
They're storing all of that data.
And so that means that it can be hacked.
And if it can be hacked, it will be hacked at some point.
I think that that's kind of the rule of the internet right now.
I've spent enough time with con artists and bad actors that I know that there's always someone, if there's a new technology, there's someone one step ahead figuring out, okay, how do we exploit this shit?
Does it call you Maria?
Does it call me Maria?
No.
Okay.
Calls me Nate sometimes.
No, it hasn't called me Maria.
But I've tried not, I mean, obviously you have to sign into it, but I try to minimize
my sharing of any personal information, but that only takes you so far.
But yeah, I think that that's kind of a personal security risk that people are probably underestimating.
I'm guessing there are some people who are very well aware of it, but I would hesitate before giving it access to my travel plans, access to
any itineraries, credit cards, et cetera, et cetera, because those are things that seem like there could be vulnerabilities.
And we see, you know, that, as you said, you know, this is a, this launch has the new car smell.
So like at the beginning, like there are going to be bugs, there are going to be issues.
And sure, eventually some of them are going to get sorted out.
But there's always going to be like the initial data breach that...
prompts it to get sorted out.
And I don't want to be part of that initial data breach.
Yeah, I don't know that these giant tech companies have behaved in a particularly trustworthy
way, right?
Yeah, I don't think they have.
So I don't know how much data we want to give them.
And what, you know, Nate,
there might be someone who's like, uh-oh, does Nate upload all of the specifics of his models to these?
Maybe we should hack Nate's ChatGPT so that we can steal his model and improve on it.
I mean, that seems silly, but like it actually makes corporate espionage, those types of things much easier, too.
It's what I've always said with, you know, con artists, that it's become so much easier, and the barrier to entry for conning people has become so much lower, simply because we share so much information online unthinkingly.
And so it becomes the case where before it would take someone a lot of kind of research to try to figure out, you know, oh, what are the things, you know, where does Nate like to go?
What is he, you know, what are the pressure points that I can,
how can I approach him, et cetera, et cetera.
Maybe you can train it on poker tells, right?
Like watch five hours of, I don't know who's a poker player.
But yeah, Adam Hendrix and figure out like what are his tells, right?
But now, I'm, you know, con artists can use all that information like very quickly because you've shared it.
And with ChatGPT, people are sharing so much, right, on such a personal level, not thinking that this will become public.
And I don't know why they think that it won't.
So it's a very, you know, it's a very interesting conundrum.
And I think there are so many amazing things that are going to come out of this.
And then some very dystopian things and some things that will potentially really hurt individuals.
Yeah, I mean, look, I'm not even talking about the audio and video.
I mean, look, it remains the case that if you beamed to 2025 from 2020, right,
you would be amazed by what these models can do.
And you would be considered a freak if you had predicted five years ago that you have this machine that for many things can like pass the Turing test.
Some researchers don't use a Turing test, but like it's, you know, it's basically giving you plausible human-level performance across a variety of cognitive tasks.
Deficient in some, excellent in others, right?
Like that still is quite amazing.
And like part of what I'm reacting to is like,
you know, where is the hype relative to the reality?
And it felt like a year ago, it was like, okay, people outside of Silicon Valley are just not seeing at all the power of this.
And now they kind of do.
And I still think that kind of like the political types are like significantly behind the curve and calling them chatbots or whatever.
But like also like,
you know, you read these really smart researchers saying that, oh, we think there's going to be a singularity in two years, right?
And I'm like, you know, look, there's a lot of trucks you can drive in between "oh, it's just a chatbot" and "singularity by 2027."
Right, right, right.
Yeah.
It feels like pretty safe bounds.
I totally agree with that.
You know, just to come back to the question of your ideal AI chatbot personality, this sounds like a weirdly, like a Howard Stern seminar.
But Maria, what's your, what's your, what's, what floats your boat?
Well, Howard,
you know, earlier I had said that I want an AI that just gives me the facts, right?
Like I do not want the damn thing to have a personality.
This is an AI.
It's a computer.
Like this is not my friend.
And I don't want its opinions.
I just want it to kind of give me factual answers.
Now, I know that that's not actually possible because as we've talked about many, many times, like I'm probably not asking it about math problems because I don't have any use for that.
I'm probably asking it for things that will inevitably be opinion-tinged because, you know, the inputs were made by humans.
But yeah, I want it to kind of be as neutral as possible.
And like,
do you not get it?
Were you familiar with Fivey Fox?
Does that mean anything to you?
No.
Fivey Fox was like the mascot of 538's models, right?
It was a cartoon fox.
Oh, I've seen the picture of the cartoon fox.
Fivey Fox.
I ain't presenting my model to you.
Like, maybe that'd be a good personality, right?
Yeah, a little, you know, a little furry animal.
I am.
Very data-driven, though.
So, Nate, are you familiar with Microsoft Office's paperclip?
Clippy?
Oh, my God.
I think we all have stories about Clippy.
I mean, there are things about paperclips in AI.
You probably don't want it to.
No, we do not want paperclips anywhere near our AI models.
The paperclip
problem has given us enough headaches.
By the way, for those of you who aren't familiar with the paperclip problem, you know, you might know Microsoft Clippy, but not the paperclip problem.
It's an AI philosophy problem first proposed by Nick Bostrom about basically how paperclips can cause the end of the world.
We'll talk more about it in today's Pushkin Plus.
Nate, what about you?
What's your ideal personality?
Yeah, I mean, my custom instructions are to be
straightforward, to provide a lot of detail.
Yeah, I'm not looking for the AI for like
emotional.
You don't want fiery foxy or
Fivey Foxy.
Fivey.
I'd take a Fivey Fox.
Hey.
No, I want Fivey Fox, yeah.
All right, Fivey Fox.
So you want Fivey Fox.
I want...
Everything has to be like a cartoon thought bubble.
Except not a paper clip.
Not a paper clip.
Let's take a little break, Nate, and then talk about NVIDIA and another element of AI, the chips that make it happen.
In today's super competitive business environment, the edge goes to those who push harder, move faster, and level up every tool in their arsenal.
T-Mobile knows all about that.
They're now the best network, according to the experts at Ookla Speedtest, and they're using that network to launch Supermobile, the first and only business plan to combine intelligent performance, built-in security, and seamless satellite coverage.
With Supermobile, your performance, security, and coverage are supercharged.
With a network that adapts in real time, your business stays operating at peak capacity even in times of high demand.
With built-in security on the first nationwide 5G advanced network, you keep private data private for you, your team, your clients.
And with seamless coverage from the world's largest satellite-to-mobile constellation, your whole team can text and stay updated even when they're off the grid.
That's your business, supercharged.
Learn more at supermobile.com.
Seamless coverage with compatible devices in most outdoor areas in the U.S.
where you can see the sky.
Best network based on analysis by Ookla of Speedtest Intelligence data, 1H 2025.
The Man in the Arena by LifeVac is a new podcast from the founder and CEO of LifeVac, Arthur Lee.
It shines a light on real people saving lives, standing up, and stepping in when it matters most.
From everyday heroes to the moments that define us, this is what resilience, faith, and purpose sound like.
Listen today to The Man in the Arena by LifeVac on the iHeart Radio app.
That's the Man in the Arena by LifeVac.
Because doing the right thing still matters.
Elite Basketball returns to the Elite Caribbean destination.
It's the 2025 Battle for Atlantis men's tournament happening November 26th to 28th.
Don't miss hometown team St.
Mary's, along with Colorado State, Vanderbilt, Virginia Tech, Western Kentucky, South Florida, VCU, and Wichita State, playing 12 games over three days.
It's basketball at its best, plus everything Atlantis has to offer.
Aquaventure Water Park, white sand beaches, world-class dining, and more.
Get your tickets and accommodations at battleforatlantis.com.
Nate, this has been such an AI-y.
AI-y?
That's a weird word, but you know what I mean.
Week.
The other kind of big news has been NVIDIA and the fact that, you know, we've gone through quite the cycle on NVIDIA, where at first there was a ban on NVIDIA selling its chips to China. Then, within the last month, the ban was softened, and Trump announced that they had kind of reached a deal where NVIDIA was going to be able to sell some of its H20 chips to China. And then all of a sudden there was an announcement that now NVIDIA can sell these chips as long as the U.S.
government gets 15%
of the profits.
So this is starting to seem a lot like we're now in the world of The Godfather or The Sopranos and less like we're in the world of the U.S.
government.
Give me a taste.
Give me a taste, Nate, and then you can do whatever you want.
But Papa wants his taste of the action.
Yeah, look, I mean, Trump had his, or the White House had its like AI action plan, which we talked about a couple of weeks ago.
And like, you know, people I trust thought it wasn't that bad.
But like, these are people who, you know, one thing you might think is good for Trump or good for, I don't know, right?
What I call the river in the book having more influence over the White House is like, they're all really competitive.
They want to beat China.
They want to beat China.
Right.
And here the U.S.
now has like an incentive for the best chips in the world to be sold to China, right?
I haven't tried.
I assume it's going to like the treasury and isn't like Trump's personal stash.
But like that seems, that seems a bit weird.
And like, granted, okay.
You manufacture a chip and it's kind of hard for China not to get it eventually.
I'm sure there are black markets and gray markets, although, you know, people have used the parallel of like nuclear fissile material and we track that pretty carefully potentially.
But yeah, no, imagine that way.
I mean, let's go with that analogy, right?
It's like, well, okay, you can sell radioactive material to Iran.
Right.
As long as the U.S.
government gets 15%.
I don't think China is Iran in our analogy.
Exactly, right?
But they are the
right now only country that's competitive with us on AI, right?
I don't think it's third place, right?
Maybe the Middle East.
You know, let's give them some, too.
Yeah, no, it's kind of crazy.
And Trump just says that the more advanced Blackwell chips, he's like, oh, yeah, I'm open to us selling those as well if we can also get a percentage.
So all of a sudden, the national security concerns, and you and I did a whole segment about the AI action plan.
And the one thing that was kind of a concern in it was China, right?
And all of a sudden, that seems to have gone out the window if we can, you know, grease the palm a little bit and
get the 15% kickback.
And so, you know, all of a sudden you realize, well, it was just a talking point, right?
Like it really didn't matter.
It wasn't. National security doesn't matter if
we can get a percentage of this.
By the way,
after these announcements, China has itself said, hey, companies, as in like Alibaba, you know, Chinese companies, we don't want you buying these U.S.
chips because we're worried that they're going to kind of insert location tracking and back doors and all these things into them.
We want you to be buying local.
That would be smart.
That would be smart.
And Nvidia said, no, no, absolutely.
We would never do that.
But so they said, you know, we want you to buy local Huawei chips.
But you want your local peaches and strawberries and your semiconductors.
So China is actually like a little bit skeptical of this, but it's not a law.
And
it's not actually clear yet how it's going to be enforced because the agency that issued this directive doesn't actually have enforcement power.
And by the way, there's also already been a pre-order of, I think it was like 700,000, something like that.
There's been a pre-order of a shit ton, in other words, of chips that are presumably going to start getting shipped now.
And so now, you know, we talk a lot about incentives, but they're...
just for the U.S., like now it's all out of whack.
The other part of this, by the way, is that there is a ban in the Constitution on export taxes.
And so I can see there being a legal challenge here.
I mean, this kind of is an export tax, if you think about it.
Like
they can probably.
There's a ban in the Constitution on export tax.
Yeah.
I didn't know.
Yeah, there's a ban in the Constitution on export taxes.
So we cannot levy export taxes on companies.
Huh.
Yeah.
So if you can argue that this is...
We're going to have to change the Constitution then, it seems like.
I mean, I think that we have seen that this particular administration has no problems changing the Constitution or trying to change the facts when the facts don't agree.
If you were allowed one free constitutional amendment, what would you do?
One free constitutional amendment.
Free, I guess you're by fiat allowed to enact a constitutional amendment.
Oh, I don't know.
It's a really good question.
I don't have a ready-made answer for it.
Do you?
Is it something you've thought about?
It's harder to do this in practice than in theory, but like to ban gerrymandering is one.
And that would be amazing.
Yeah.
Yeah.
It would be a pretty good one.
It would be a really good one.
I mean, the Senate.
I mean, we don't need fucking...
I, you know, I think
we should sell North Dakota to Canada.
All right.
All right.
I need a little bit more explanation here.
No, they're too many fucking Dakotas.
They're nice states.
There's lots of, there's surprising beauty in South Dakota in particular.
I'll tell you that much, right?
But I don't think the Dakotas need four senators between them.
This is true.
I mean, I do think that the senatorial
representative kind of model of government is broken.
Did you trade North Dakota for Greenland?
This podcast is devolving.
You're kind of like, oh, I just came home from Denmark.
Oh, you mean Fargo, Denmark?
Never mind.
But anyway, we're tipping in the studio if you're listening to the audio version.
We have our producer.
Turning around and providing a lot of people.
Take it back to NVIDIA, I guess.
Take it back to NVIDIA.
So, yeah, we have this incredibly perverse situation, right, where the incentives are just now completely fucked.
Now, let's assume the reason we got on our tangent is because of the export taxes.
Let's assume that legally this is upheld, that the 15% is allowed to stand.
It also sets an incredibly dangerous precedent for...
everything. For, like, it's a really scary proposition to think that now, well, as long as you kind of give a kickback to the U.S.
government, you're fine.
So we do see this kind of quid pro quo mentality where like, you know, you help me, I help you.
And that is a norm that should not be happening in a healthy democracy.
Yeah, I mean, I think that train blew right past the station already.
I don't, you know.
Look, it's, it's part of, um, it's the same thing with the tariff policy, right?
Why are we doing these tariffs?
Is it trying to encourage the use of American-made products?
Is it trying to do industrial policy?
Is it trying to do foreign policy, right?
Or is it trying to make a bunch of money for the government, right?
And sometimes Republicans are kind of caught or tariff defenders are kind of caught in between saying, you know, actually, the ideal amount of money these tariffs make is zero because then we onshore everything.
And people are saying, hey, that's going to make up for like a loss of tax revenue elsewhere.
Right.
And it's kind of the same thing with this... with this China thing.
And again, you know, NVIDIA is like, okay, sure.
Yeah.
By the way, I own NVIDIA stock, right?
So, yeah.
Yeah.
All right.
Yeah.
Well, no, but you can actually envision a future, right, where for companies, they're like, when they're calculating profit, they're like, and this is the cut, like, just like before you had to, like, this is the cut we give to the mob boss.
Like, this is the cut that goes to Trump for it's the cost of doing business.
And so, we're willing, we're willing to.
We're going to do it the way the Italians, you know, Italians, French,
we're acting like the fucking Europeans.
Well, you know, some.
Yeah.
Yeah.
The southern, the less efficient southern Europeans.
Yeah, so it's a, it's obviously, I mean, it's the understatement of the day to say it's not a good look, but it also doesn't just, it doesn't bode well for a lot of things.
Yeah, so I think, you know, bottom line, like.
This really is, I think, bad for our economic prospects and just for the way that other countries see us as well, right?
Like that matters, especially when our currency matters and kind of our reliability as a trading partner and as a lender and all these things matter.
So
reputation matters and foreign reputation matters.
And in other stupid things, by the way, we're going to be hosting Putin on U.S.
soil, even though
he has a warrant out for his arrest.
Really?
The ICC, yeah?
He has a warrant in the... who has an arrest warrant in the U.S.
for him?
Well, the ICC has one.
So he technically cannot leave Russia because any other country would have to to be.
That's kind of funny.
If we just surrender, like, aha, Vladimir, trick you.
That would be funny.
That would be funny.
That is not happening.
Yeah, for war crimes against Ukraine.
Is it in Alaska or something?
It's in Alaska, yeah.
Okay.
Yeah, so we'll see what happens there.
But anyway, all of this,
not great news.
So we have, yeah, we have a mixed bag for you today on Risky Business. And
export taxes.
Let's see what happens on the legal side of things and if this is in fact deemed an export tax and if it will be challenged.
GPT-5 hallucinates, they have to pay the U.S.
government 15 cents.
How about that?
That would be something, Nate.
That would be something.
Let us know what you think of the show.
Reach out to us at riskybusiness at pushkin.fm.
And by the way, if you're a Pushkin Plus subscriber, we have some bonus content for you.
That's coming up right after the credits.
And if you're not subscribing yet, I mean, come on, really, but consider signing up.
For just $6.99 a month, you'll get access to all that premium content and ad-free listening across Pushkin's entire network of shows.
Risky Business is hosted by me, Maria Konnikova.
And by me, Nate Silver.
This show is a co-production of Pushkin Industries and iHeartMedia.
This episode was produced by Isabel Carter.
Our associate producer is Sonia Gerwit.
Sally Helm is our editor, and our executive producer is Jacob Goldstein.
Mixing by Sarah Bruguer.
If you like the show, please rate and review us so other people can find us too.
But please only rate and review if you like the show because, you know, we like good reviews.
Thanks for tuning in.
Snoring ruining your sleep or someone else's?
Mute by Rhinomed is the simple science-backed solution.
Just insert, adjust, and breathe.
Mute is a discrete nasal device proven to increase airflow and reduce snoring.
No batteries, no noise, just better sleep.
Find Mute at Amazon and Walgreens.
Try it risk-free and sleep soundly tonight.
Learn more at mutesnoring.com.
That's mutesnoring.com.
Feel a pulse of adventure at every turn.
In the plug-in hybrid electric Jeep Wrangler 4xE, designed with intention and loaded with power, the Jeep Wrangler 4xE will help keep you moving towards endless coastlines without sacrificing the comfort and legendary capability you expect.
Thanks to its hybrid powertrain, the Wrangler 4xE delivers the same epic off-roading endurance as its gasoline counterpart.
And with three different driving modes, electric, hybrid, and e-save, versatility follows you at every turn.
Visit your local Jeep brand dealer today and take advantage of the EV lease incentive going on now.
But hurry, this offer ends soon.
Right now, well-qualified current FCA lessees get an ultra-low mileage lease on the 2025 Jeep Wrangler Sport S4xE for $189 a month for 24 months with $3,079 due at signing.
Tax, title, license extra.
No security deposit required.
Call 1-888-925-JEEP for details.
Requires dealer contribution and lease through Stellantis Financial.
Extra charge for miles over 10,000.
Current vehicle must be registered to consumer at least 30 days prior to lease.
Includes 7,500 EV cap cost reduction.
Not all customers will qualify.
Residency restrictions apply.
Take delivery by 9/30.
Jeep is a registered trademark.
When disaster takes control of your life, Servpro helps you take it back.
Servpro shows up faster to any size disaster to make things right.
starting with a single call, that's all.
Because the number one name in cleanup and restoration has the scale and the expertise to get you back up to speed quicker than you ever thought possible.
So, whenever "never thought this would happen" actually happens, Servpro's got you.
Call 1-800-SERVPRO or visit servpro.com today to help make it like it never even happened.
This is an iHeart podcast.