Should You Learn Poker from ChatGPT? And Other AI Questions
In the wake of DeepSeek’s explosive entrance into the AI chatbot world, Nate and Maria talk AI strategy. At the national level and the personal level, what benefits does this technology actually have – and is it worth it?
Plus, we tackle a question from listener Hugh about how to win big at his friendly, amateur poker night. Good luck, Hugh!
For more from Nate and Maria, subscribe to their newsletters:
The Leap from Maria Konnikova
Silver Bulletin from Nate Silver
See omnystudio.com/listener for privacy information.
Transcript
Pushkin.
This is an iHeart podcast.
In today's super competitive business environment, the edge goes to those who push harder, move faster, and level up every tool in their arsenal.
T-Mobile knows all about that.
They're now the best network, according to the experts at Ookla Speedtest, and they're using that network to launch Supermobile, the first and only business plan to combine intelligent performance, built-in security, and seamless satellite coverage.
That's your business, Supercharged.
Learn more at supermobile.com.
Seamless coverage with compatible devices in most outdoor areas in the U.S.
where you can see the sky.
Best network based on analysis by Ookla of Speedtest Intelligence data, 1H 2025.
The wait is over.
Streaming live from the legendary iHeartRadio Theater in LA.
The top 12 artists hit the stage for one career-defining performance.
The judges will crown the winner, and you'll help choose the People's Choice Award.
Don't miss it.
September 26th, 7 to 9 p.m.
Pacific.
Follow at TikTok Live underscore US and watch it all go down.
Only on TikTok Live.
When you buy business software from lots of vendors, the costs add up and it gets complicated and confusing.
Odoo solves this.
It's a single company that sells a suite of enterprise apps that handles everything from accounting to inventory to sales.
Odoo is all connected on a single platform in a simple and affordable way.
You can save money without missing out on the features you need.
Check out Odoo at odoo.com.
That's odoo.com.
Welcome back to Risky Business, our show about making better decisions.
I'm Maria Konnikova.
And I'm Nate Silver.
Today on the show, we're going to be getting into it on AI.
There has been a lot of news on the AI front in the last few weeks, some of it coming from the Trump administration and some coming from overseas with China and DeepSeek.
Maria thinks AI is overrated.
I think AI is properly rated, as you'll see.
And then we're going to get into a listener question about poker
and how to beat your local cash game.
Let's start with our friend, artificial intelligence.
Yeah, so I think we're talking about a few different things, right?
So we have the Trump initiative on AI.
So we have a few things that he did,
including
things that he
took away, right?
The executive order that took away certain restrictions on AI that had been put in place by Biden.
But then also we have, you know, this big funding initiative into AI by the US government, Stargate.
And then we also have AI coming out of China, DeepSeek, which has freaked everyone the fuck out.
I think that's the scientific way of putting it and has made markets crash, NVIDIA stocks, a lot of other stocks, NASDAQ.
people have not been happy to see the success of DeepSeek.
So there's a lot to talk about today.
Where do you want to start, Maria?
Well,
do we want to start with, I think it might make sense to start with DeepSeek because it actually,
you know, I think that the two are very related, right?
Because we have what the US government is and is not doing.
And one of the reasons that right now things are freaking out is because of DeepSeek.
So I think we can start with that, see what the implications are, what the responses can be, and then kind of see what the U.S.
has done so far to see if it's in line with what we think the correct strategy should be.
Yeah, look, until about
a week and a half ago, when people started to notice DeepSeek,
their model R1 in particular, the conventional wisdom was that like America is way ahead in the AI race.
I should say with respect to large language models, machine learning transformers, right?
You know, driverless cars is a different enterprise than drones and things like that.
But in terms of LLMs, large language models, ChatGPT-like things, then, you know, U.S.
probably one, two, three, four in the rankings.
And so, this, yeah, this has interesting geopolitical implications.
Yeah, and
one of the reasons, just to step back, that the U.S.
was assumed to be ahead was because the United States had made it more difficult for foreign governments, like China especially, to acquire the chips that are necessary to build these large AI models, the resources, et cetera, et cetera.
And so, one of the disconcerting things to the United States was that when DeepSeek announced its results, it also announced that it basically was able to do this at one-tenth of the cost and resources that other models had used.
So this was like an, oh shit, you know, even if we restrict access to all of these other things, they're still able to do this.
Now, I will say, and other people have pointed this out, we don't actually know how much we can trust the numbers and figures, right?
This is just a claim.
We don't know what the training materials were.
We don't know what the development costs actually were.
We just know what they claim that they were.
So I think that this is something that we should put an asterisk next to because it is important to realize, right, that
if you can't actually trace the information and verify it and you have to take it on faith,
that's never a good way to take information, right?
That's one of the things we say over and over on Risky Business.
When you make decisions, try not to take things on faith.
Try to verify.
right?
That's much better.
Yeah.
So this is basically built by a
Chinese hedge fund, right?
Which is well, which is well capitalized.
And, you know, there are a lot of smart computer engineers in China, and I think they were all working on this project.
So
kind of the last step, the training, I mean, it's a little bit like,
do you know who Rosie Ruiz is?
I do know who Rosie Ruiz is.
Yeah, if you've covered like fraudsters, right?
And I don't mean to say it's a fraud, but it's like, so she runs, what is it, New York Marathon, right?
Who like,
by the way.
No, so Rosie Ruiz actually figures in my next book on cheating.
So Rosie Ruiz
won, quote unquote, I'm putting this in quotes, the Boston Marathon, women's
time
with this incredible story.
It was amazing, wonderful.
You know, people loved it.
And then it turned out that she took the subway for a huge portion of the race.
And, but she almost got away with it, which is the really fucked up thing.
The reason she got caught was because the subway car that she happened to get on had a reporter who was a photographer who had been covering the marathon and was like, wait,
what's going on?
So, it actually took them multiple days to figure out that Rosie Ruiz did not actually win, did not actually run the time that she said she ran.
By the way, this was back in 1980, so, right?
The technology was different.
People were not getting tracked as closely as they are right now. But spoiler alert, or big fun thing that I get into in my book: it's possible even today to pull a Rosie Ruiz. So that's for later. But Nate, why are we talking about Rosie?
Because it's a little bit like, and I don't mean to say there's any actual fraud.
I really don't trust anything coming out of mainland China.
Yeah.
But like,
but yeah, so it's a little bit like if you enter at the 24th mile and then run two really good closing miles, it's still not the same.
And it's not an accurate representation to cite just the cost of this training run when you have all these resources behind it and it's kind of the last step.
But clearly it's more efficient in terms of, you know, when you run a
request, put in a query, how many computer cycles is it burning through?
I'm using very non-technical language here, right?
You can actually host DeepSeek, which you keep wanting to call DeepStack?
There are a lot of deep-stack poker references.
You can host DeepSeek on a desktop, right?
If you want to actually have it say things about Tiananmen Square, for example, then you need to run your native instance because the official Chinese hosted web version doesn't like to, you know, doesn't like to talk about certain things, I would say, Maria.
You don't say that.
You don't say.
So then there are a lot of debates, though, about what's it mean if it turns out that, I mean, so one category of debate is like, does the U.S.
lose its lead versus China?
That's one kind of whole bucket, right?
Another bucket is like, what's it mean if you now like run an AI lab on a desktop and they're only going to get faster?
How do we regulate this?
And then there's a whole bunch of economic stuff about like,
you know, what's this mean for the price of different assets, right?
So NVIDIA,
for example, is the leading designer of AI semiconductor chips.
Its chips are manufactured in Taiwan by TSMC.
You know, so if compute is cheaper, is that good for them or bad for them?
It's not necessarily straightforward, right?
Yeah, it's absolutely non-straightforward.
And also,
you know, one of the
other potential things that happened with DeepSeek, we don't know, is a process of training known as distillation.
So that's a slightly more technical term.
But what it actually means is that you are training your model on the outputs of other models, right?
So you can actually pass the benchmarks and be able to train up more quickly because you are using that.
Okay, so maybe now we are having more of a Rosie Ruiz situation.
And if that happens, then it is more of a Rosie Ruiz situation.
And just as a flag, that's not legal, right?
Technically, you're not supposed to do that.
But if you do do it and you're able to then
give these outputs,
the respect for intellectual property varies by country.
So I don't want to be, I don't want to bad mouth a culture for not respecting intellectual property rights.
But yeah, if you're, if you're, you know, if you can just copy off ChatGPT or infer what its weights are, then that, you know, I mean, but that's that's a, that's an issue.
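The distillation process described above, training on another model's outputs rather than on raw data, can be sketched in a toy form. Everything below is illustrative: a four-way next-token distribution stands in for a real teacher model, and the "training" is plain gradient descent on the student's logits.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill(teacher_probs, steps=2000, lr=0.5):
    # The "student" starts from uniform logits and is trained to match
    # the teacher's soft output distribution -- the core of distillation:
    # it never sees the original training data, only the teacher's outputs.
    student_logits = [0.0] * len(teacher_probs)
    for _ in range(steps):
        q = softmax(student_logits)
        # Gradient of KL(teacher || student) w.r.t. the logits is (q - p)
        student_logits = [z - lr * (qi - pi)
                          for z, qi, pi in zip(student_logits, q, teacher_probs)]
    return softmax(student_logits)

# Teacher's soft distribution over four hypothetical next-token candidates
teacher = [0.7, 0.2, 0.05, 0.05]
student = distill(teacher)
```

After enough steps the student's distribution sits within rounding distance of the teacher's, which is why benchmark performance can be inherited far more cheaply than training from scratch.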
Yeah, it absolutely is.
But let's, let's flip that actually around a little bit to talk about some of the potential positives of this, which, you know, there, we're obviously seeing, you know, potential red flags and negatives.
But what about, I mean, I actually think that it's not a bad thing that DeepSeek is open source, right?
So people can try to figure out, okay, what is it actually doing?
How is it actually doing it?
Isn't that good?
And I know that one of the things that, you know, people don't often like is open source, but, like, open source can go either way, right?
Open source, bad if it's US and China's getting its secrets, but you know, open source, good in other respects.
And I think open source is one of the few ways that we can actually peek inside the black box.
And I would be much more concerned if it wasn't even open source, right?
If we had no idea how it was trained, if we didn't know about the funding, if we didn't know any of this, and it wasn't open source, right?
If all of those things were compiled, I'd be more worried.
Since it is open source, I actually think that could potentially be a good thing in terms of knowledge sharing and trying to figure out how do we make these processes more efficient.
And Meta has actually created a workforce that is studying the DeepSeek processes and figuring out, okay, how can we use these?
And do we want to change the way that we're developing some of our LLMs, some of our chat models to mimic their processes so that we become more efficient?
And if this process actually means that we use fewer resources, that's obviously a net positive for the environment.
However, if it means proliferation of AIs everywhere and it's cheaper, then it could actually be a net negative.
Which is all to say that this is really complicated, right?
This is not black and white.
And when you're making these sorts of decisions, you have to assign weights to all of these different outcomes, all of these different probabilities.
And frankly, I don't have the, I don't think anyone has the expertise to do that because it's such a new world.
I certainly don't have the expertise, but I don't think anyone can see the future clearly enough to figure out, you know, how do we weight this?
Because there's a lot of uncertainty around this.
So the hardcore AI concern people, the doomers,
tend to be anti-open source.
I mean, OpenAI was founded as Open AI,
but no longer kind of abides by that mission.
And
I think their view, let me try to give you the summary slash kind of steelman version of it, right?
These are people who think, okay, this period where AI is being birthed, ranging from artificial general intelligence up to artificial superintelligence, where the first means it can do most things at a human level or human-level-plus, right?
And super intelligence means it achieves breakthroughs that no human can across a broad range of fields, basically, right?
They think this process of giving birth to these models is going to be very dangerous.
And therefore, you want to have it in the hands of as few, and as trusted, people as possible.
I think at one point, Sam Altman was trusted within this community.
You know, Anthropic
is a bunch of people who left OpenAI because they thought OpenAI was moving too fast.
Right.
And then...
And then Google Gemini, you know, Google, again, had been fairly conservative about how it moved on AI, didn't want to, you know, it's a huge enterprise, doesn't want to risk that and other things and kind of a different culture, I would say, at Google.
Then Facebook or Meta.
The reason why their models were open source is because like they weren't competitive.
This is what people would say in the AI expert community, right?
It's because they weren't as good as, you know, clearly
Anthropic, which has Claude, or OpenAI, clearly the two leading ones.
And then and then, you know, Google, Gemini had some problem with like drawing like woke Nazis and stuff like that, which
I, you know, I think it's not as good as the other two by a fair bit, but like, but that was, that was considered in the pecking order, right?
That Microsoft was, excuse me, that Meta, too many M's, was fourth.
And therefore they open source it to kind of say, okay, well, we're going to like, in some sense, it's, it's
not quite like sabotage, right?
But you're like, well, fuck you.
You're not going to get these same returns as us, and we'll have applications people would want to pay for, for free, or whatever, right? So the fact that it's open source is interesting. I mean, you know, China clearly
is willing to do things
to undermine
the American economy. I mean, the other big one in the news is, of course, TikTok, which is owned by ByteDance, right? The U.S. Congress passes a law, upheld by the courts, that says you have to sell
to an American company, or at least to a country not listed on, and there's literally, like, an enemies list of countries, and China is on it, right?
And ByteDance says, we have this really valuable enterprise, but we'll just turn it off, right? If you make us sell it, then we'll just turn it off. And clearly, I mean, obviously it would be a forced sale and you wouldn't get quite the market-clearing price, but, you know, the value is a lot more than zero, and they're willing to. So, you know, there's a little bit of suspicion that this is just meant to undermine
America's lead in AI and not make a whole lot of profit for China.
That's kind of like addition by subtraction, right?
You know, and people say, hey, these people are idealistic.
It's not the same capitalist system there.
And even in the U.S.,
sometimes the founders aren't that greedy.
They just want to make a really cool product.
So there might be some of that too, but it's in line with previous Chinese strategy, I suppose.
Yeah, it's actually interesting that you mentioned TikTok, because this is another kind of strategic element of this. That, you know, we, we being the U.S.,
have moved to ban TikTok right now.
Sorry, what?
I'm the USA.
You are rooting for the USA.
I got it.
Nice, nice.
All right.
Team USA
was moving to ban TikTok.
We have no idea what's going to happen with that now.
But in some ways, you know, and DeepSeek is just like, la, la, la, la, la, right?
There's no movement in that direction.
And instead, Trump is talking about tariffs and other things and
trying to kind of get at it that way.
And I think that it's a very interesting dichotomy where, like, if you're worried about TikTok, shouldn't you be worried about the strategic and security risks of, you know, a company
whose AI models
are all run in China, right?
Like, that's Chinese-owned, Chinese-developed.
Like, do you, if you're going to be consistent, like that, that seems to be a much greater risk than TikTok, to be perfectly honest.
And we'll be right back after this break.
As many of you know, I spent a lot of time studying what really makes people happy.
What works, what doesn't, and why.
And here's the truth.
It's not about having the perfect home or perfectly plated food.
It's about connection.
One of my favorite ideas is something I call scruffy hospitality.
inviting people over even if things aren't spotless or fancy.
Because science shows that just just gathering, laughing, chatting, maybe even cooking together gives our well-being a real boost.
That's why I love what Bosch is doing.
Their quality refrigerators use VitaFresh technology to keep fruits and veggies fresher longer, so you always have something on hand to pull together a meal.
And when you cook with fresh ingredients, you're not just making a meal, you're showing people they matter.
Plus, meals made with real fresh foods actually promote more energized and joyful interactions.
Bosch appliances are designed to keep things running smoothly, so you can stress less and focus more on what really counts, the people you're with.
To learn more, visit BoschHomeUS.com.
So what do we, like, what can we take from this and from what's happened in the last week?
How does that mesh with the types of endeavors that
Trump and team have already put forward?
Now, one of the things, so I started off by saying that they've rescinded the executive order.
The executive order that Biden had on AI did have to do with open source, right?
And it was actually very skeptical of open source as well, because it wanted, you know,
full reporting, knowing everything that was going on, but let's keep that information from foreign governments.
Now that's out, right?
So that's been rescinded.
So what's,
and that's one of the main issues that we're seeing with DeepSeek.
So what do we think about that?
What do we think about the government approach?
Is it misguided?
Is it on the right track?
And if we want to, I think all of us, no matter if you're yea-AI or nay-AI, I think everyone wants to minimize p-doom.
Like I would hope that no one wants the world to be destroyed.
So I don't think anybody thought that like
this Biden executive order is going to stop
p-doom by itself.
The California law that was vetoed by Gavin Newsom probably is considered a bigger deal.
It was a state law, but they're all based there. If they want to operate in California, then they were subject to it, right?
If they want to operate in California, then they were subject to it, right?
Like that might have been a bigger deal.
But like the general pattern here, with the California law failing, with this thing being rescinded,
with Sam Altman being
less and less, shall we say, concerned about safety, right?
We're going to get this AI race and this idea that you could
stop it by having
only, you know, three operators until we reach some point where safety was achieved is not going to happen
clearly, right?
You know, like, if this were any other technology, you would say, because one concern I have about AI, I wrote about this in the newsletter this week,
you know, when you have these very big, powerful companies that have this lead in computing power and engineering talent, right?
Usually that's not how it works, right?
You have the next big thing and the next big thing is created by new companies because the old companies are
stodgy and bloated, and it's not their mission in the first place, and they're not cool, so they don't attract young talent, right? So, to some extent, the fact that,
you know, it can be disrupted might lessen the worry about hegemonic concerns over AI, and the kind of paternalistic slash people-get-very-rich-off-it concerns about AI. But yeah, I mean, look, I think we're a long way from achieving
artificial super intelligence, which is the superhuman capabilities, right?
But there are ways that AIs can be dangerous, far short of this, right?
Teaching you how to like mix chemical compounds or build a pipe bomb or things like that, right?
Or it can aid and abet, like, suicidal thoughts. And then those use cases are going to be, like,
they're already happening.
They're already happening.
We know that the guy who blew up the cyber truck in front of the Trump Hotel used ChatGPT to figure out how to do it, which was actually probably one of the reasons it wasn't more destructive because the instructions were not very good.
So I think
we should be grateful for the limitations of AI models for now.
But yeah, no, there are,
I think that there,
to me,
that's actually one of the more interesting points is that
we're worried about p-doom, about, you know, super intelligence, all of these things.
But I think that in the shorter term and potentially in the longer term, we need to be more worried about stupidity, right?
About the fact that
they aren't super intelligent, right?
And
people can misuse them and they can give flawed outputs.
And as they become more and more dominant, mainstream, used in searches,
used in day-to-day stuff, but also used higher up for people who
want something to summarize research for them, et cetera, et cetera, that the problem is going to be kind of much more mundane, right?
That
it gives you bad information, it gives you bad instructions, it
elides over something, it doesn't quite synthesize something correctly.
I think that that is actually
the more pressing problem. And it's not p-doom, boom, we're going to blow up, but kind of smaller p-dooms.
It's already lowercase, but, like, subscript p-doom
on a day-to-day basis,
depending on who uses it and for what.
I've kind of flipped on this a little bit where
I think the hallucinations are an overrated problem. I think what the models are doing is
very impressive. They have fewer hallucinations than before.
And, you know, I use them a lot, just for, like, research and problem solving, a little bit of programming, and things like that. And I think, you know, the one I've used the most is o1, which is the latest public model from OpenAI.
it just can do higher level shit pretty well you know It also can catch itself in midstream.
They put some routine in where it like, well,
typically a large language model, the reason why it seems like it's printing one word at a time is because like that's actually kind of like
how it works, right?
It literally
it literally kind of goes sequentially.
It compresses its whole text string, puts it through the transformer, right?
But then, you know, then kind of it is recursive how it puts it out.
It doesn't think of it all at once, right?
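The one-word-at-a-time behavior described here is autoregressive decoding: the entire output so far is fed back in to produce each next token. A toy sketch of the loop (the lookup table below is a made-up stand-in for a real model, not how any actual LLM stores its knowledge):

```python
def toy_next_token(context):
    # Stand-in for a trained model: in a real LLM, the full context is
    # re-encoded through the transformer and one next token comes out.
    table = {
        (): "the",
        ("the",): "model",
        ("the", "model"): "thinks",
        ("the", "model", "thinks"): "sequentially",
    }
    return table.get(tuple(context), "<eos>")

def generate(max_tokens=10):
    # The autoregressive loop: every step conditions on ALL previous
    # output, which is why the text appears to stream word by word.
    out = []
    for _ in range(max_tokens):
        token = toy_next_token(out)
        if token == "<eos>":  # end-of-sequence marker stops generation
            break
        out.append(token)
    return out

print(" ".join(generate()))  # the model thinks sequentially
```

The self-check Nate goes on to mention is then a separate pass over this completed output, not part of the loop itself.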
But now they've trained it to actually go back and check its output to check for hallucinations.
And it catches them a lot of the time.
But not all of the time, right?
So I actually just did
some test runs prior to taping this so that I could see what was going on right now.
And when it's in my area of expertise, I catch a lot of inaccuracies, not just hallucinations, but things that are almost right, but kind of misunderstood the point, right?
Which, which is actually, which could be even more problematic.
So, I had it do some test runs in psychology where, first of all, it did hallucinate.
It still is making up studies, making up data that does not actually exist.
When I try to, I'm like, oh, this is really cool.
I want to look it up.
No, it doesn't exist.
But it also, like, it actually misinterprets findings and doesn't always get it right.
This is my expertise, right?
I have a PhD in it.
So I can figure this out and be like, you know what?
This is, it's pretty good, but not actually like, you don't want to rely on this.
And this is just outright wrong.
But because it's overall, it seems pretty good, I don't catch it when it's not my area of expertise.
And I just assume that it's pretty good.
And that "pretty" is doing a lot of heavy lifting.
What about if you read a New York Times or Washington Post article
on poker or sports betting, right?
Or some field that you know a lot about that's medical.
But it doesn't actually hallucinate.
It's not going to really, it's not going to tell me about findings that don't exist to prove its point.
It can tell a lot of white lies, though, and misrepresent and be fundamentally dishonest.
Right.
Well, sure, you have a problem of reporting bias always, but there's a problem when you think something is factual information and it's not, right?
When I'm reading an op-ed, I know it's an op-ed.
When I'm reading an article, but when I want this for facts,
when I want to know.
That's not the use case, though.
You can't put everything in the microwave oven.
Of course.
If I want to learn about a field, so let me give you another example.
I didn't just use psychology.
I asked about,
I wanted to know about a specific type of company.
I'm not going to give you the exact search, but it was companies making a specific type of crypto investment.
So I asked, what's a list of companies that's done it?
Gave me a list.
I was like, great.
Now will you tell me what the specific investments are?
And it's like, oh, sorry, we don't actually know.
Like, we don't know that these companies have made any investments.
We just gave you a list.
And I was like, okay.
Yeah, you're using it wrong.
There are a lot of situations in life where precision isn't that important,
right?
You know, I mean, for example,
I was, as you may know, listeners, I was in Korea and Japan recently.
And like something I've been lazy about, I never taught myself how to, like, distinguish Japanese,
Chinese, and Korean characters.
And so I like told ChatGPT, I'm on the flight to Tokyo.
I'm like, hey, give me a little very quick summary and give me a pop quiz.
And like, you learn it in like 10 or 15 minutes, right?
And it's like, you know, I don't have to distinguish those characters at 100% accuracy, but it's basically pretty good.
And like, I can tailor exactly how
that resource is geared toward me.
No, and I'm not saying this is useless.
I'm just saying that it's right now, it is being used for the cases that I've told you about because it's at the top of your search.
That's why people don't know how to use it.
When you do a Google search, like that's the first thing that comes up is the
very bad branding decision by
Google, right?
I do think it's a bad, because like, and Google is, I'm sorry, it's not as good as OpenAI or Anthropic.
It's not, it's not, sorry, Google.
Um, and like, it undermines Google's kind of lead in search.
And I think Google has handled this, a lot of things in this space very badly.
Although, ironically, I mean, you know, it was Google engineers who came up with the Transformer paper, and they, um,
and they, like, you know, still hire lots of great talent, but they've kind of become, like, this feeder system to the hipper, cooler AI companies, I think. But, like, I think people are way too hipster about this. Like, this is the most quickly adopted technology in the history of the world by some metrics. Absolutely. And I'm not, like, I actually, like, I think that AI has a lot of potential. I want it to do better, right? Like, I want them to fix this shit, right?
Look, we are poker players, Maria.
We should be used to being able to accept information as shifting your prior or shifting your view, but like not being definitive, right?
And that's a journalist.
We're both journalists too.
Like, you know, when a source tells you something, you vet it.
Absolutely.
But it actually adds more work for me because I have to go through it and try to figure out what can I rely on?
What can't I rely on?
Now, one listener who has shared his experience on
social media as well.
And I know you wrote, you kind of referenced it in your newsletter this week, Kevin Roose, had emailed me about poker training and said that he was using ChatGPT and AIs to help him with poker.
And I said, don't do that because it's going to tell you the wrong thing.
And he did it.
He still did it.
And he said, oh, I won a tournament.
It was good.
It was helpful.
So I actually had ChatGPT do some poker training for me.
It's not good.
It gives you incorrect advice.
If you don't know, if you're a novice and if you're using this, you might get lucky, right?
And it all works out. But let's go to poker. Like, try to use ChatGPT to teach you poker strategy. It is not going to teach you strategy. Did you, did you, did you? And the better you are, the worse it is.
Uh, yeah, yeah, I just
did the same, and I thought it was pretty good. I mean, it misses things that, like, so I actually did this last night. I mean, it gets, you know, I
So what do you actually think it's doing?
But here's the thing, though.
Are you missing this? Do you know this? You're able to distinguish what it's missing and what it's getting pretty well. But if you're using this as the tool to train yourself and you don't have any background knowledge, that is the problem, right? You need it to teach you correctly, which is much more difficult. As someone who started poker from zero as an adult, let me tell you, one of the most important things I learned was that it's much easier to teach someone from zero, because I didn't have any bad habits, right?
If I had instead learned from ChatGPT, and those were kind of the habits and the thought processes that I acquired, and some of them were just wrong or didn't teach me how to think correctly through things, I'd be a really shitty poker player.
Yeah.
So let me give you some examples of how I use ChatGPT, right?
You know, one is kind of as a research assistant, but like once you already know something about a topic, right?
Like I'm not getting a first brief, but I'm like querying it where I'm saying, okay, I talked to this person.
Here's a description of
how an AI thing works or a crypto thing works or a concept in finance works, right?
Will you vet this for me?
What critiques might you have, right?
It's maybe not quite as good as talking to like an expert, but I find like you often get a lot of value from that.
And again, it's not the last step in the process, right?
You can also use it for creative inspiration.
Give me 10 potential headlines from this.
You can use it to fill in missing words because it thinks in terms of a big matrix, right?
So, like, what's an analogy that I can think of?
What's this word or concept that I'm missing?
Invent a name for this thing or that thing.
You can use it to kind of squeeze quantitative data out of qualitative information.
Like I asked it, for example, and again, this wasn't a first draft, right? But to vet my estimate of how liberal or conservative different eras in American history were on a negative 10 to positive 10 scale.
It can make ranking lists of different kinds.
I mean, there are just so many use cases for it. And people just want to use it to cheat on papers or as a substitute for Wikipedia or something, which are not the best use cases for it.
And if you ask ChatGPT, it will tell you that those are not the best use cases for it, right?
The queryable nature of it, and the fact that it reorganizes this information in a way that, for many purposes but not all purposes, is much more approachable and accessible... I don't know, I think it's a miraculous technology.
I mean, you know, it has a lot of potential.
If you had woken up... if I had fallen into a coma years ago, say in 2015,
and woken up.
I'm so...
I missed the whole first Trump administration. I'm like, oh, Trump's president? Oh, he was already president? Surprise.
Then you would be fucking blown away by this shit, right? You'd be like, oh my fucking god, right? Just like passing the Turing test. I mean, there are debates about definitions, whether it's a good test, whether it actually has passed it, but it basically is like human-esque intelligence,
in some ways inferior, in some cases superior over a large domain of fields.
And just the way it can parse this very open-ended, fuzzy logic of text strings, I mean, I just think it's kind of amazing.
It's amazingly robust in some ways, right?
Sure.
That you can misspell things.
I mean, it's amazingly robust in a way that like it's hard to think of other technologies that compare to it exactly.
And, you know, and solved using very simple underlying math, right?
I mean, the code for DeepSeek is something like a few hundred lines of code long.
My fucking election model is longer than that, right?
It's kind of a miracle.
Absolutely.
Absolutely.
I agree with all of that.
But now I think to push it a little bit further, obviously this miracle, as we know, is coming at a big cost, right?
Environmental cost.
There's also p(doom), right?
We've talked about that potential risk.
Is the miracle of, you know, giving you a good analogy worth it at this cost? That's... I think those are kind of...
Don't give me this environmental...
I'm not... right, I'm not talking... I'm talking about p(doom) and the environment.
It's not crap, Nate. We talked about this before. Like, a lot of the energy...
Okay, look at this. This is something that's supposed to scare me, from foreignpress.org: the energy consumption for training ChatGPT, the leading model, is even more staggering, equating to that of an American household for more than 700 years. So basically, training this leading model only took 700 households' worth, like one subdivision of some fucking neighborhood in Tulsa, right?
It's not very much.
Like, don't... you know, you undermine the argument for p(doom), where we all die. I mean, obviously this changes if the models get hungrier and hungrier. But by the way, the DeepSeek thing should be good news for the environment.
Well, that's why I said we don't know, because on the one hand, maybe, depending on what the actual resources are.
On the other hand, if it means that every single person is now running these smaller things, or not every single person, but if it makes it more likely that more of these are running, what's the net impact?
Right.
If it's one-tenth the energy, but you actually have a hundred times more people using it as a result, then it's obviously a net negative impact instead of net positive.
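The arithmetic behind Maria's worry here is essentially the Jevons paradox: per-use efficiency gains can be swamped by induced demand. A minimal sketch using the hypothetical one-tenth and hundred-fold figures from the conversation (illustrative numbers, not measurements):

```python
# Jevons-paradox arithmetic with the hypothetical figures from the discussion.
old_energy_per_use = 10.0                      # arbitrary units per query, old model
new_energy_per_use = old_energy_per_use / 10   # new model needs one-tenth the energy

old_users = 1     # baseline usage
new_users = 100   # assume a hundred times more people use it as a result

old_total = old_energy_per_use * old_users
new_total = new_energy_per_use * new_users

print(new_total / old_total)  # 10.0 -> total energy use still rises tenfold
```

So under these assumed ratios the efficiency gain is a net negative for total consumption, which is exactly the open question raised above.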
These are all open questions, right?
And like I said, I am not an AI skeptic.
I think it's really cool.
I think there are lots of really interesting things here.
I just think that there are other... you know, you can't also be a rah-rah cheerleader, like none of this matters.
Of course, it does matter.
I think all of these things matter.
I just think people are over-indexing to like,
I don't know.
I mean, have you taken Waymo's?
A Waymo, Maria?
And by the way, if you're not aware, this is a self-driving car company, which is available in San Francisco, Phoenix, and maybe one or two other places, LA, I think.
Have you taken a Waymo in any of those places, Maria?
I have not.
It's fucking Blade Runner.
I'm telling you.
It's a very good experience.
It's a much smoother ride than I'd say 95%
of Ubers.
They have like space-age music, and you feel like you're in the fucking future.
And I would almost guarantee you that driverless cars are going to be a very popular technology.
I don't know.
I don't know.
I watched that episode of Silicon Valley where he's in the driverless car and ends up on a
on a boat somewhere in the middle of the ocean.
So obviously that's a TV show, a comedy, but
you never know how the experience will end up.
But I'm going to San Francisco next week, Nate.
So, you know, maybe I'll take my first Waymo.
Take a Waymo.
It's like a 95th percentile
Uber driver.
All right.
Well, on that positive note, shall we talk a little bit more poker and switch to a listener question?
Okay, fine.
We'll be back right after this.
As many of you know, I've spent a lot of time studying what really makes people happy.
What works, what doesn't, and why.
And here's the truth.
It's not about having the perfect home or perfectly plated food.
It's about connection.
One of my favorite ideas is something I call scruffy hospitality, inviting people over even if things aren't spotless or fancy.
Because science shows that just gathering, laughing, chatting, maybe even cooking together gives our well-being a real boost.
That's why I love what Bosch is doing.
Their quality refrigerators use VitaFresh technology to keep fruits and veggies fresher longer, so you always have something on hand to pull together a meal.
And when you cook with fresh ingredients, you're not just making a meal, you're showing people they matter.
Plus, meals made with real fresh foods actually promote more energized and joyful interactions.
Bosch appliances are designed to keep things running smoothly, so you can stress less and focus more on what really counts: the people you're with.
To learn more, visit BoschHomeUS.com.
All right.
So
we had a poker-related listener question that I think, Nate, you are
probably
more equipped to answer in the sense that you play cash home games and I don't.
By the way, I'm really sorry if you can hear some
knock knock noises in the background.
Apparently, the apartment above mine has just started construction.
It actually just happened as we started taping this podcast.
This is the first hammering I have heard, but of course you get to experience it alongside me because I love our listeners and I want to share all of my experiences with them.
So Nate, here's the listener question.
I have a neighborhood poker night with my friends.
Everyone plays really loose and passive, lots of calling, not much raising.
How do I win against real amateurs like that?
What are the most common and easy to detect tells by amateurs like this?
Part of this I can answer too, right?
Because this happens in tournaments as well.
But let's start with what you think since you play in home games.
You know, this is something that you find fun
and not an experience that I often have.
So it depends.
This comes from a listener, Hugh, H-U-G-H.
I guess that's the way it's usually spelled.
Hugh.
Hugh.
Hugh does sound British.
I'm sorry.
It's just a name that I automatically associate with being English.
Okay, so what are basics for loose home?
I mean, it depends on if you're talking about like really bad players or just he seems to be.
He seems to be.
So, you know, the basics are you actually don't want to play like everyone else, meaning, you know, don't be so loose and passive, right?
You need, particularly out of position, hands that can make the nuts, right?
So
suited hands, particularly, you know, ace-x suited, Broadway suited hands, ten-nine suited and above, right?
So number one, hand selection becomes more oriented toward strong hands that can make big straights and flushes and better.
That's part one, right?
Number two, you're going to want to like increase your bet sizing maybe quite a bit.
Theory says in a cash game that you're supposed to raise to maybe 2.5x the big blind. Here you can go to four or five, and if there are already limpers, you can raise even more than that, right?
There are some games where the standard open might be to like 10x or things like that.
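The sizing talk above can be written down as a tiny helper. A minimal sketch, with the caveat that the add-one-big-blind-per-limper adjustment is a common live-game convention I'm assuming here, not something spelled out in the episode:

```python
def open_raise_size(big_blind: float, num_limpers: int = 0,
                    base_multiple: float = 4.0) -> float:
    """Chips to open-raise to in a loose, passive home game.

    base_multiple is how many big blinds to open with no limpers: theory says
    roughly 2.5x, but loose home games often call for 4x-5x, and some games
    standardize on 10x. Assumes one extra big blind per limper already in.
    """
    return big_blind * (base_multiple + num_limpers)

# Example: $1/$2 game, two limpers, opening 4x plus one big blind per limper.
print(open_raise_size(2.0, num_limpers=2))  # 12.0 -> raise to $12
```

The point of the bigger size is exploitative: against players who call too much, you charge them more with your tightened-up range.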
But maybe let me even back up a little further, right?
I actually have dealt poker games for total rank amateurs before, like literally five of the ten people had never played poker before, right?
The two things that they most routinely get wrong are, number one,
they call too much, meaning they call and play it like a slot machine instead of folding or raising more.
And number two, they don't understand bet sizing.
What is the size of your bet relative to the size of the pot, right?
If you don't know anything about poker, know nothing at all, then just bet half the size of the pot.
Keep track of what's in the pot and bet half that size.
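Nate's novice rule of thumb is simple enough to mechanize; a minimal sketch of exactly the heuristic he states:

```python
def half_pot_bet(pot: float) -> float:
    # The beginner default Nate describes: bet half of whatever is in the pot.
    return pot / 2

# Example: blinds and pre-flop calls put $30 in the middle,
# so the default bet on the flop is $15.
print(half_pot_bet(30.0))  # 15.0
```

The only bookkeeping the rule requires is tracking the running pot total, which is the part beginners tend to skip.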
But in general, don't do so much calling, right?
If you have a good hand or a good bluff, or even if you just kind of think other people are scared, then
do some raising.
If you think they're beat, most amateur players are not going to go nuts. I mean, it's a little complicated, because they might not understand hand strengths, so you might want to weigh absolute strength a little bit more. But don't be afraid to get the money in when you have a good hand. And when other people represent a good hand, it gets a little bit complicated, but you don't have to do a whole excess amount of bluff-catching. I mean, those are the basics. And then I'd say,
In general, in these loose cash games, people are very sticky.
Now we're talking about a slightly higher caliber of players.
People have had play before, right?
People are mostly very sticky pre-flop and on the flop, and then they will start to fold.
Cash can players do like to fold on turns and rivers sometimes, right?
So that means that like, you know, that can affect your whole strategy for the whole hand is that you tend not to have a lot of full equity pre-flop in on flops, and then it requires multiple barrels sometimes.
Yeah.
I think that
as someone who is a tournament player,
there is some advice that I think applies all around,
which basically goes hand in hand with what you said, Nate.
Number one, you don't want to follow the tendencies of the people who are making mistakes, right?
So if people are too loose, you actually want to tighten up.
If people are passive, you want to become more aggressive.
I think that's important, but you should also realize if they're going to be sticky, then you should just bet huge, right?
Like if you are going to, if you'd normally bet half pot, just bet pot.
They're going to call anyway, right?
Build massive pots with your good hands.
This will also enable you to bluff, right?
Because they will eventually fold.
Now, something you said, I think this is actually true, not just of cash games, but in general in tournaments as well, people do tend to
overcall flop and overfold turn.
So, you know, I think building an over-betting strategy into your game, in a game like that, is really good, right?
And sometimes they'll get very sticky.
Like I've had... well, the second part of this question was tells, which I think is just a bad idea.
Do not use tells. Even though in games like that people probably do have tells, and especially if you're going to play with them over and over it might be a little different, I just don't think that's great to rely on.
But when I've relied on tells, I've actually made really big mistakes because I've had...
I've had situations where I'm like, oh, this person really likes their hand.
They must be really strong.
And I end up folding.
And they had like ace-deuce offsuit, but there was an ace on the board.
And they thought that it was just like the nuts, right?
Because they completely overvalued the fact that they had an ace.
And so they were playing it like they had the nuts and they thought they had the nuts, but they really didn't.
So if people are bad, don't use tells because the strength, their perceived strength of their hand may not actually be the actual strength of their hand.
I'm more tell-oriented. It's funny, because we have opposite personalities. You're way more sound theoretically, and I'm kind of psychoanalyzing people a little bit more.
And, no, look, I think the premise is that there are two categories of tells from very inexperienced players that I don't think are always hard to distinguish, but they require additional context, right?
One is a really bad actor tell, right? Where they watch poker movies where you're supposed to act weak if you're strong and strong if you're weak, and they just really overdo it in a comical way, right?
Yes, I have seen that.
But also sometimes people are extremely... there's a lot of Hollywooding, actually. And I've seen this from amateurs, where if they have the nuts, right, like say they flopped quads or something like that, they'll just agonize over the decision and then be like, I guess I call, right?
If someone's doing that, like, holy shit, you're beat. There are certain situations like that.
But I think, in general, it's better to... well, we haven't played in Hugh's game. So I think just sticking to the advice that we've given,
which is, you know, don't be loose passive.
Basically, you have to tighten up your ranges.
You have to figure out what those ranges are.
And, you know, the other thing is, your bet sizing is going to change, because if people are going to be calling stations, great, exploit it.
If people are going to call pre-flop anyway, great, make your sizes bigger.
Just build pots when you have very strong hands.
I mean, the other thing, you know, to close the discussion on tells, I mean, people can also be very honest, right?
Like they don't, you know, in games where
the stakes are low relative to people's like net worth, which depends on people's net worth, right?
Then they just don't necessarily take a lot of action to like conceal
disappointment with a bad
flop or things like that. You know, a lot of times, "oh, I've got to catch my card," that actually is more often than not honest, more in cash games than in tournaments.
I don't know why, right?
Um, I think in tournaments people are just playing their A game a bit more, and are more secretive than in cash. People are playing their A game more often in tournaments, or at least their B game.
Yeah, no, I actually, I think, I think there's something too that people do tend to be more honest in cash games.
I had a hilarious situation at a higher-stakes cash game where I had raised, and, I don't remember if it was the small or big blind, he had defended. And anyway, it went, you know, check, bet on the flop, then check and check on the turn.
And I was like looking to see, like, I had nothing, what I wanted to bet on the river.
And he just folded.
He's like, you definitely have me beat because like I've got nothing.
And I had nothing, right?
Like, I don't think I definitely had him beat.
And he just folded to me, right?
I didn't even have to think about the sizing or whether I was going to bet or any of it.
That would never happen in a tournament, but it happens in cash games all the time.
And I still remember this hand.
I don't play cash very often.
So things like that stand out, but I've seen people do that.
And then they try to do it in tournaments, actually.
You can often spot a cash player in a tournament because they will sometimes fold out of turn.
They'll do things that are just like very honestly communicate that they have no more interest in this hand.
Don't be a super nit, right?
Like, when you show hands, people overvalue, especially in cash games, the last thing they saw, right?
So like if you have like an occasional
hand where you get a little out of line, right?
And again, I think, you know, usually worth picking your spots carefully and there's some psychology to that.
And then, like, if you show, oh, I three-bet five-four suited from
the button, which might actually be a perfectly fine, near-GTO three-bet occasionally, right?
If you turn a straight with that, and the other guy folds, you definitely want to show that hand, right?
You're maintaining your reputation, because a seat in a good cash game is a valuable thing, and people absolutely will notice if you're being a nit. Don't be a nit.
Yep. Have fun. Play toward the looser end of your GTO range, though your GTO range may actually be pretty tight against fish who never fold.
Yep. Good luck, Hugh.
Good luck, Hugh.
Let us know what you think of the show.
Reach out to us at riskybusiness at pushkin.fm.
Risky Business is hosted by me, Maria Konnikova.
And by me, Nate Silver.
The show is a co-production of Pushkin Industries and iHeartMedia.
This episode was produced by Isabel Carter.
Our associate producer is Gabriel Hunter Chang.
Our executive producer is Jacob Goldstein.
If you like the show, please rate and review us so other people can find us too.
And if you want to listen to an ad-free version, sign up for Pushkin Plus.
For $6.99 a month, you get access to ad-free listening.
Thanks for tuning in.
This is Justin Richmond, host of Broken Record.
Starbucks pumpkin spice latte arrives at the end of every summer like a pick-me-up to save us from the dreary return from our summer breaks.
It reminds us that we're actually entering the best time of year, fall.
Fall is when music sounds the best.
Whether listening on a walk with headphones or in a car during your commute, something about the fall foliage makes music hit just a little closer to the bone.
And with the pumpkin spice latte now available at Starbucks, made with real pumpkin, you can elevate your listening and your taste all at the same time.
The Starbucks pumpkin spice latte.
Get it while it's hot or iced.
Time for a sofa upgrade?
Visit washablesofas.com and discover Anibay, where designer style meets budget-friendly prices with sofas starting at $699.
Anibay brings you the ultimate in furniture innovation with a modular design that allows you to rearrange your space effortlessly.
Perfect for both small and large spaces, Anibay is the only machine-washable sofa inside and out.
Say goodbye to stains and messes with liquid and stain-resistant fabrics that make cleaning easy.
Liquid simply slides right off.
Designed for custom comfort, our high-resilience foam lets you choose between a sink-in feel or a supportive memory foam blend.
Plus, our pet-friendly stain-resistant fabrics ensure your sofa stays beautiful for years.
Don't compromise quality for price.
Visit washablesofas.com to upgrade your living space today with no-risk returns and a 30-day money-back guarantee.
Get up to 60% off plus free shipping and free returns.
Shop now at washablesofas.com.
Offers are subject to change and certain restrictions may apply.
This is an iHeart podcast.