JD Vance's AI Speech, Techno-Optimists vs Doomers, Tariffs, AI Court Cases with Naval Ravikant

1h 50m

(0:00) The Besties intro Naval Ravikant!

(9:07) Naval reflects on his thoughtful tweets and reputation

(14:17) Unique views on parenting

(23:20) Sacks joins to talk AI: JD Vance's speech in Paris, Techno-Optimists vs Doomers

(1:11:06) Tariffs and the US economic experiment

(1:21:15) Thomson Reuters wins first major AI copyright decision on behalf of rights holders

(1:35:35) Chamath's dinner with Bryan Johnson, sleep hacks

(1:45:09) Tulsi Gabbard, RFK Jr. confirmed

Follow Naval:

https://x.com/naval

Follow the besties:

https://x.com/chamath

https://x.com/Jason

https://x.com/DavidSacks

https://x.com/friedberg

Follow on X:

https://x.com/theallinpod

Follow on Instagram:

https://www.instagram.com/theallinpod

Follow on TikTok:

https://www.tiktok.com/@theallinpod

Follow on LinkedIn:

https://www.linkedin.com/company/allinpod

Intro Music Credit:

https://rb.gy/tppkzl

https://x.com/yung_spielburg

Intro Video Credit:

https://x.com/TheZachEffect

Referenced in the show:

https://x.com/naval/status/1002103360646823936

https://x.com/CollinRugg/status/1889349078657716680

https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence

https://www.cnn.com/2021/06/09/politics/kamala-harris-foreign-trip/index.html

https://www.cnbc.com/2025/02/11/anduril-to-take-over-microsofts-22-billion-us-army-headset-program.html

https://x.com/JDVance/status/1889640434793910659

https://www.youtube.com/watch?v=QCNYhuISzxg

https://www.wired.com/story/thomson-reuters-ai-copyright-lawsuit

https://admin.bakerlaw.com/wp-content/uploads/2023/11/ECF-1-Complaint.pdf

https://www.youtube.com/watch?v=7xTGNNLPyMI

https://polymarket.com/event/which-trump-picks-will-be-confirmed?tid=1739471077488


Transcript

Speaker 1 Great job, Naval.

Speaker 2 You rocked it. Maybe I should have said this on air, but that was literally the most fun podcast I've ever recorded.

Speaker 1 Oh, that's on air. Cut that in.

Speaker 3 Yeah, put it in the show. Put it in the show.

Speaker 2 I had my theory on why you were number one, but now I have the realization.

Speaker 3 What's the actual reason? You know us for a long time.

Speaker 1 Yeah, what was your theory? What's the reality?

Speaker 2 My theory was that my problem with going on podcasts is usually the person I'm talking to is not that interesting.

Speaker 2 They're just asking the same questions and they're dialing it in and they're not that interested. It's not like we're having a peer-level actual conversation.

Speaker 2 So that's why I wanted to do Air Chat and Clubhouse and things like that, because you can actually have a conversation. I see.
Right. And what you guys have very uniquely is four people,

Speaker 2 you know, of whom at least three are intelligent. I'm kidding.

Speaker 1 How did you say that?

Speaker 2 Saxon here. How did you see it?

Speaker 1 Nobody. Sax isn't even here to hear you say that.
So cold.

Speaker 2 That's so fast.

Speaker 2 Right. Of whom at least three are intelligent and all of you get along and you can have an ongoing conversation.
That's a very high hit rate.

Speaker 2 Normally in a podcast, you only get one interesting person. And now you've got three, maybe four, right?

Speaker 1 Okay.

Speaker 2 So that to me was why.

Speaker 1 Who are you talking to?

Speaker 2 We don't know. She'll remain mysterious forever.
Of the four, right? The problem is if you get people together to talk, two is a good conversation. Three, possibly. Four is the max.

Speaker 2 That's why in a dinner table at a restaurant, four talk, right? You don't do five or six, because then it splits into multiple conversations. So you had four people who were capable of talking, right?

Speaker 2 That I thought was a secret, but there's another secret. The other secret is you guys are having fun.
You're talking over each other. You're making fun of each other.
You're actually having fun.

Speaker 2 So that's why I'm saying this is the most fun podcast I've ever been on.

Speaker 2 That's why you'll be successful.

Speaker 1 Welcome back anytime at all. Thank you.
Thank you. Welcome back.

Speaker 1 Yes, absolutely.

Speaker 2 David Fun, guys. Thank you.

Speaker 3 188 and three smart guys.

Speaker 2 Can't believe that.

Speaker 2 I can't even believe you'd say that about Sax. He's not even here to defend himself.

Speaker 2 Sorry, David.

Speaker 3 Rain Man, David Sacks.

Speaker 3 And instead, we open source it to the fans, and they've just gone crazy with it.

Speaker 2 Love you besties.

Speaker 2 Queen of Quinoa. I'm going all in.

Speaker 3 All right, everybody. Welcome back to the number one podcast in the world.

Speaker 3 We're really excited today.

Speaker 3 Back again.

Speaker 3 Your Sultan of Science, David Friedberg. What do you got going on there, Friedberg? What's in the background?

Speaker 2 Everybody wants to talk about it. What's up today?

Speaker 4 I used to play a lot of a game called Sim Earth on my Macintosh LC way back in the day.

Speaker 2 That tracks. Yeah.

Speaker 3 That tracks. And of course, with us again, your chairman.

Speaker 5 What games did you play growing up, J Cal?

Speaker 4 Actually, I'm kind of curious. Did you ever play video games?

Speaker 3 I say Andrea, Allison,

Speaker 2 Susan.

Speaker 5 I mean, it was like a...

Speaker 5 A lot of cute girls. I was out dating girls, Friedberg.

Speaker 3 Yeah. I was not on my Apple II playing Civilization.

Speaker 4 Let me find one of those pictures.

Speaker 3 Whoa, whoa, don't get me in trouble, man. The

Speaker 3 80s were good to me in Brooklyn.

Speaker 5 Rejection, the video game.

Speaker 2 Yes.

Speaker 3 You have three lives. Rejected.
Rejected. It's a numbers game, Chamath.

Speaker 2 As you know, as you well know, it is a numbers game.

Speaker 4 Nick, go ahead.

Speaker 4 Pull up Rico Suave here.

Speaker 2 Oh, no. What is this one? Oh, instead of playing video games.
Here I am. No, in the 80s.
That's fat J Cal.

Speaker 2 That's Fat J Cal.

Speaker 2 Nick,

Speaker 2 how he's playing. Yeah, here he is out slinging.
How about your uncle with the thin J Cal photo? Pre-Ozempic. You know what he was slinging in there? That was a snack.

Speaker 2 You want pre-Ozempic and post-Ozempic, right? Correct. And weightlifting.

Speaker 2 Beef jerky.

Speaker 3 Okay, go find my Leonardo DiCaprio picture, please, and replace my fat J Cal picture with that.

Speaker 2 Thank you. Oh, God, I was fat.
Man, plus 40 pounds is a lot heavier than that. It's no joke.

Speaker 5 It's no joke.

Speaker 2 40 pounds is a lot.

Speaker 3 Man, there are so many great emo photos of me.

Speaker 5 I'm proud of you.

Speaker 2 Thank you.

Speaker 2 Thank you, my man.

Speaker 2 If you want a good photo.

Speaker 5 Can you get through the intros, please, so we can start? Come on, quick.

Speaker 3 How you doing, brother? How are you doing, Chairman Dictator?

Speaker 2 You good? You get good? I guess you're good.

Speaker 2 Oh, all right. All right.

Speaker 3 I'm really excited today.

Speaker 3 Today, for the first time on the All-In podcast, the iron fist of AngelList, the Zen-like mage of the early stage. He has such a way with words.
He's the Socrates of nerds.

Speaker 3 Please welcome my guy, Namaste Naval. How you doing? The intros are back.

Speaker 2 That is the best intro I've ever gotten.

Speaker 2 I didn't think you could do that. That was amazing.
That's your superpower. Right there.
Lock it in. Venture capital.
Just do that.

Speaker 3 Absolutely. That's actually, you know what?

Speaker 2 Interestingly, number one podcast in the world, like someone said.

Speaker 3 I mean, that's what I'm manifesting. It's getting close.
We've been in the top 10. So, I mean, the weekends are good for All-In.

Speaker 2 This one will hit number one. This one will go viral.
I think it could.

Speaker 3 If you have some really great, pithy insights, we might go right to the top.

Speaker 2 You should have a new audience. I just got to do a Sieg Heil and it'll go viral.

Speaker 2 Oh, no.

Speaker 2 Oh, no.

Speaker 3 Are you going to send us your heart?

Speaker 2 My heart goes out to you.

Speaker 3 My heart, I end here at the heart. I don't send it out.

Speaker 2 I keep it right here.

Speaker 3 I put both hands on the heart and I hold it nice and steady.

Speaker 3 I hold it in. It's sending out to you, but just not explicitly.
All right. For those of you who don't know, Naval was an entrepreneur.
He kicked a bit of ass. He got his ass kicked.

Speaker 3 And then he started Venture Hacks.

Speaker 3 And he started emailing folks and saying, you know, 15 or 20 years ago, here are some deals in Silicon Valley. And he went around.
He started writing 50K, 100K checks.

Speaker 3 He hit a bunch of home runs. And he turned Venture Hacks into AngelList.
And then he has invested in a ton of great startups. Uh, maybe give us some of the greatest hits there.

Speaker 2 Yeah, Twitter, Uber, Notion, a bunch of others. Um, Postmates, Udemy. A lot of unicorns, a bunch upcoming. I don't know, it's actually a lot of deals at this point. But honestly, I'm not necessarily proud of being an investor. Investor to me is a side job. It's a hobby.

Speaker 2 So I do startups.

Speaker 5 How do you define yourself?

Speaker 2 I don't. I mean, I guess these days I would say more like building things.
You know, every

Speaker 2 so-called career is an evolution, right? And all of you guys are independent and you kind of do what you're most interested in, right? That's the point of making money.

Speaker 2 So you can just do what you want. So these days, I'm really into building and crafting products.
So I built one recently called Air Chat. It kind of didn't work.

Speaker 2 I'm still proud of what I built and got to work with an incredible team. And now I'm building a new product.
And this time I'm going to hardware.

Speaker 2 And I'm just building something that I really want. I'm not going to do it.
And you fund it all yourself, Naval?

Speaker 2 Partially. I bring investors along.
Last time they got their money back. Previous times they've made money.
Next time, hopefully they'll make a lot of money. It's good to bring your friends along.

Speaker 5 I'll be honest, I love that you said, I love the product, but it didn't work. Not enough people say that.

Speaker 2 Yeah, no, I built a product that I loved, that I was proud of, but it didn't catch fire. And it was a social product, so it had to catch fire for it to work.
So I found the team great homes.

Speaker 2 They all got paid. The investors that I brought in got their money back.

Speaker 2 And I learned a ton, which I'm leveraging into the new thing. But the new thing is much harder.
The new thing is hardware and software and

Speaker 5 What did you learn building in 2024 and 2025 that you didn't know maybe before then?

Speaker 2 The main thing was actually just the craft, the craft of pixel by pixel designing a software product and launching it.

Speaker 2 I guess the main thing I took away that was a learning was that I really enjoyed building products and that I wanted to build something even harder and something even more real.

Speaker 2 And I think like a lot of us, I'm inspired by Elon and all the incredible work he's done. So I don't want to build things that are easy.
I want to build things that are hard and interesting.

Speaker 2 And I want to take on more technical risk and less market risk. This is the classic VC learning, right? Which is you want to build something that if people

Speaker 2 get it, if you can deliver it, you know people will want it.

Speaker 2 And it's just hard to build as opposed to you build it and you don't know if they want it. So that's a learning.

Speaker 3 AirChat was a lot of fun, for those of you who don't know. It was kind of like a social media network where you could ask a question and then people could respond.

Speaker 3 And it was like an audio-based Twitter.

Speaker 2 Would you say that was the best way to describe it? Audio Twitter, asynchronous, AI transcripts and all kinds of AI to make it easier for you, translation.

Speaker 2 Really good way for kind of trying to make podcasting type conversations more accessible to everybody.

Speaker 2 Because honestly, one of the reasons I don't go on podcasts, I don't like being intermediated, so to speak, right?

Speaker 2 Where you sit there and and someone interviews you and then you go back and forth and you go through the same old things. I just want to talk to people.

Speaker 2 I want peer relationships, kind of like you guys have running here. Naval, what happened?

Speaker 5 When you went through that phase, there was a period where it just seemed like something had gone on in your life and you just knew the answers. You were just so grounded.

Speaker 5 It's not to say that you're not grounded now, but you're less active posting and writing. But there was this period where I think all of us were like, all right, what does Naval think?

Speaker 2 Oh, really? Oh, okay, that's news to me.

Speaker 5 I would say it would be like the late teens, the early 20s.

Speaker 5 Jason, you can correct me if I'm getting the dates wrong, but it's in that moment where like these Navalisms and this sort of philosophy really started to, I think people had a tremendous respect for how you were thinking about things.

Speaker 5 I'm just curious, like, what, were you going through something in that moment or like, oh, yeah, yeah, yeah, yeah.

Speaker 2 That's okay. No, very insightful.
Yeah. So I've been on Twitter since 2007 because I was an early investor, but I never really tweeted.
I didn't get featured. I had no audience.

Speaker 2 I was just doing the usual techie guy thing, talking to each other. And then I started AngelList in 2010.
The original thing about matching investors to startups didn't scale.

Speaker 2 It was just an email list that exploded early on, but then just didn't scale. So we didn't have a business.
And I was trying to figure out the business.

Speaker 2 And at the same time, I got a letter from the Securities and Exchange Commission saying, oh, you're acting as an unlicensed broker-dealer. And I'm like, what, I'm not making any money.

Speaker 2 I'm not, I'm just making intros. I'm not taking anything.
It's just a public service. But even then, they were coming after me.
So I wasn't, and I'd raised a bunch of money from investors.

Speaker 2 So I was in a very high stress period of my life. Now, looking back, it's almost comical that I was stressed over it.
But at the time, it all felt very real.

Speaker 2 The weight of everything was on my shoulders, expectations, people, money, regulators.

Speaker 2 And I eventually went to DC and got the law changed to legalize what we do, which ironically enabled a whole bunch of other things like ICOs and incubator demo days and so on.

Speaker 2 But in that process, I was in a very high stress period of my life. And I just started tweeting whatever I was going through, whatever realizations were happening.

Speaker 2 It's only in stress that you sort of are forced to grow. And so whatever internal growth I was going through, I just started tweeting it, not thinking much of it.

Speaker 2 And it was a mix of the three things that are kind of always running through my head. One is I love science.
You know, I'm an amateur, love physics. Let's just leave it at that.
I love.

Speaker 2 reading a lot of philosophy and thinking deeply about it. And I like making money, right? Truth, loving money.
That's my joke on my Twitter bio. Those are the three things that I keep coming back to.

Speaker 2 And so I just started tweeting about all of them. And I think

Speaker 2 before that, the expectation was that someone like me should just be talking about money. Stay in your lane.
And people had been playing it very safe.

Speaker 2 And so I think the combination of the three sort of caught people's attention because every person thinks about everything. We don't just stay in our lane in real life.

Speaker 2 We're dealing with our relationships. We're dealing with our relationship with the universe.

Speaker 2 We're dealing with what we know to be true and with science and how we make decisions and how we figure things out.

Speaker 2 And we're also dealing with the practical, everyday material things of how to deal with our spouses or girlfriends or wives or husbands and how to make money and how to deal with our children.

Speaker 2 So I'm just... Tweeting about everything.
I just got interested in everything. I'm tweeting about it.
And a lot of it, my best stuff was just notes to self. It's like, hey, don't forget this.

Speaker 5 How to get rich. Remember that one? How to get rich.

Speaker 3 That was like one of the first threads.

Speaker 2 And that was a super banger. It was viral.
You had a super banger. Yeah.
Yeah. Yeah.
I think that is still the most viral thread ever on Twitter. I like timeless things.
I like philosophy.

Speaker 2 I like things that still apply in the future. I like compound interest, if you will, in ideas.
Obviously, recently, X has become so addictive that we're all checking it every day.

Speaker 2 And Elon's built the perfect For You. He's built TikTok for nerds.
And we're all on it. But normally I try to ignore the news.
Obviously, last year, things got real.

Speaker 2 We all had to pay a lot of attention to the news. But I just like to tweet timeless things.
I don't know. I mean, people pay attention.
Sometimes they like what I write.

Speaker 2 Sometimes they go non-linear on me. But yeah, the how to get rich tweetstorm was a big one.

Speaker 5 Is it problematic when people now meet you?

Speaker 5 Because the hype versus the reality, there's like, it's discordant now because people, if they absorb this content, they expect to see some yogi floating in the air.

Speaker 2 You know what I mean? Yes. Yeah.
Like many of you have stopped drinking, but I used to like have the occasional glass of wine.

Speaker 2 And there was a moment there where I went and met with a reporter from The Information, back when I used to meet with reporters. And she said, where are we going to meet?

Speaker 2 So I said, oh, let's meet at the wine merchant and we'll get a glass of wine. She's like, what, you drink? Like, it was like a big deal for her.

Speaker 2 I'm so disappointed.

Speaker 2 I was like, I'm an entrepreneur. Most of them are alcoholics or in psychedelics or

Speaker 2 doing whatever it takes to manage.

Speaker 2 Yeah. When they say, I'm in therapy, you know what that's code for. Yeah.

Speaker 2 So yes, it is highly discordant.

Speaker 2 Yeah, I'm almost reminded of that, uh, line in The Matrix where that agent is about to, like, shoot one of the Matrix characters and says, it's only human, right? So that's why I want to say to everybody, like, only human. Yeah, yeah, yeah. You did recently a podcast with Tim Ferriss on parenting. This was out there. I love this and I bought the book from this guy. Yeah.

Speaker 3 just give a brief overview of this philosophy of parenting.

Speaker 5 Oh, I didn't get to listen to this one yet.

Speaker 2 Tell us about this.

Speaker 2 You're going to love this. This spoke to me, but it was a little crazy.
Yeah. So I'm a big fan of David Deutsch.
David Deutsch, I think, is basically the smartest living human.

Speaker 2 He's a scientist who pioneered quantum computation. And he's written a couple of great books about the intersection of the greatest theories that we have today, the theories with the most reach.

Speaker 2 And those are epistemology, the theory of knowledge, evolution, quantum physics, and computation.

Speaker 3 This is The Beginning of Infinity guy.

Speaker 2 That's The Beginning of Infinity.

Speaker 3 That you always reference.

Speaker 2 Correct. Yes.
The Fabric of Reality is another book. I've spent a fair bit of time with him, done some podcasts with him, hired and worked with people around him.

Speaker 2 And I'm just really impressed because it's like the framework that's made me smarter, I feel like. Because we're all fighting aging.

Speaker 2 Our brains are getting slower and we're always trying to have better ideas. So as you age, you should have wisdom.
That's your substitute for the raw horsepower of intelligence going down.

Speaker 2 And so scientific wisdom I take from David. Not take, but I learned from David.
And one of the things that he pioneered is called taking children seriously.

Speaker 2 And it's this idea that you should take your children seriously like adults. You should always give them the same freedom that you would give an adult.

Speaker 2 If you wouldn't speak that way with your spouse, if you wouldn't force your spouse to do something, don't force a child to do something.

Speaker 2 And it's only through the latent threat of physical violence, hey, I can control you, I can make you go to your room, I can take your dinner away or whatever, that you intimidate children.

Speaker 2 And it resonated with me because I grew up very, very free. My father wasn't around when I was young.
My mother didn't have the bandwidth to watch us all the time. She had other things to do.

Speaker 2 And so I kind of was making my own decisions from an extremely young age. From the age of five, nobody was telling me what to do.
And from the age of nine, I was telling everybody what to do.

Speaker 2 So I'm used to that. And I've been homeschooling my own kids.
So the philosophy resonated. And I found this guy, Aaron Stupple, on AirChat.
And he was an incredible expositor of the philosophy.

Speaker 2 He lives his life with it 99% as extreme as one can go. So, his kids can eat all the ice cream they want and all the Snickers bars they want.
They can play on the iPad all they want.

Speaker 2 They don't have to go to school if they don't feel like it. They dress how they want.
They don't have to do anything they don't want to do.

Speaker 2 Everything is a discussion, negotiation, explanation, just like you would with a roommate or an adult living in your house. And it's kind of insane and extreme.
But I live my own home life

Speaker 2 in that arc, in that direction. And I'm a very free person.
I don't have an office to go to. I try really not to maintain a calendar.
If I can't remember it, I don't want to do it.

Speaker 2 I don't send my kids to school. I really try not to coerce them.
And so

Speaker 2 obviously that's an extreme model, but I was.

Speaker 5 Sorry, sorry, sorry. Hold on a second.
So

Speaker 2 your kids,

Speaker 5 if they were like, I want Häagen-Dazs and it's 9 p.m., you're like, okay.

Speaker 2 Two nights ago, I did this. I ordered the Häagen-Dazs.
It wasn't Häagen-Dazs, it was a different brand, but I ordered it.

Speaker 5 I'm just going to go through a couple of examples.

Speaker 2 We did actually eat ice cream at 9 p.m., and we all ate ice cream.

Speaker 5 Yeah, so they're like, dad, I want to be able to.

Speaker 2 And there are how many

Speaker 5 kids are like, I want to be on my iPad. I'm playing Fortnite.
Leave me alone. I'll go to sleep when I want.

Speaker 2 You're like, okay. My oldest probably plays iPad nine hours a day.

Speaker 5 Okay. So then

Speaker 5 your other kid pees in their pants because they're too lazy to walk to the bathroom.

Speaker 2 They don't do that because they don't like peeing in their pants.

Speaker 5 No, no, I understand. I understand, but I'm just saying, like, there's a spectrum of all of these things, right? Yeah.
And your point of view is 100% of it is allowed and you have no judgment.

Speaker 2 No, no, that's not where I am.

Speaker 2 That's where Aaron is. My rules are a little different.
My rules are they got to do one hour of math or programming plus two hours of reading every single day.

Speaker 2 And the moment they've done that, they're free creatures and everything else is a negotiation. We have to persuade them.
It's a persuasion, I should say, not even a negotiation.

Speaker 2 And even the hour of math and two hours of reading, really, you get 15 to 30 minutes of math, maybe an hour if you're lucky. And you get half an hour to two hours of reading.

Speaker 5 And what do you think the long-term consequences of that are?

Speaker 5 And then also, what are the long-term consequences, let's say, on health if they're making decisions you know are just not good, like the ice cream thing at 9 p.m.?

Speaker 5 How do you manage that in your mind?

Speaker 2 I think

Speaker 2 whatever age you're at, whatever part you're at in life, you're still always struggling with your own habits.

Speaker 2 I think all of us, for example, still eat food and feel guilty or want to eat something that we shouldn't be eating. And we're still always evolving our diets.
And kids are the same.

Speaker 2 So my oldest has already, he passed on the ice cream last time and he said, I want to eat healthier because finally I managed to get through to him and persuade him that he should be healthier.

Speaker 2 My younger kids will eat it, but they'll eat a limited amount. My middle kid will sometimes eat something.
Okay, so you're not.

Speaker 5 So if they say something, you'll enable it, but then you'll guide, you'll be like, hey, listen, this is not the choice I would make, I don't think, but if you want it, I'll do it.
Yeah.

Speaker 2 I'll try, but you also have to be careful that you don't intimidate them and you don't want to be so overbearing that then they just view dad as, like, a controller.

Speaker 5 I find this so fascinating. And so what do you think happens to these kids?

Speaker 5 Like, I'm sure you have a vision of what they'll be like when they're fully formed adults. Like what is that vision?

Speaker 2 I try not to. They're going to be who they're going to be.
This is kind of how I grew up. I kind of did what I wanted.

Speaker 2 I would rather they have agency than

Speaker 2 turn out exactly the way I want. Because agency is the hardest thing, right? Having control over your own life, making your own decisions.

Speaker 2 I want them to be happy. I have a very happy household.

Speaker 5 What is the Plato, what's Plato's goal? Eudaimonia, right?

Speaker 2 Eudaimonia. Yeah, the happy life, Aristotle.

Speaker 5 Like the fulfillment, this concept. Is that what you want for them?

Speaker 2 I don't really want anything for them. I just want them to be free and their best selves.

Speaker 2 I want them.

Speaker 2 Chamath was worrying about details.

Speaker 3 He's got like 17 kids now.

Speaker 2 I don't know if you know, but Chamath has got like a whole bunch of things.

Speaker 3 I love this interview because the guy made a really interesting point, which was they're going to have to make these decisions at some point. They're going to have to learn the pros and cons,

Speaker 3 the upside, the downside to all these things, eating, iPad.

Speaker 3 And the quicker you get them to have agency to make these decisions for themselves with knowledge to ask questions, the more secure they'll be. I found it a fascinating discussion.

Speaker 3 I like cause and effect, especially in teenagers.

Speaker 3 Now that I have a teenager, it's really good for them to learn, hey, you know, if you don't do your homework, you have a problem, and then you got to solve that problem.

Speaker 3 How are we going to solve that problem? So I like to present it as, what's your plan? Anytime they have a problem,

Speaker 3 eight-year-old kids, 15-year-old kids, I just say, What's your plan to solve this? And then I like to hear their plan. And let me know if you want to brainstorm it.

Speaker 3 But I thought it was a very interesting,

Speaker 3 super interesting discussion.

Speaker 2 I would say overall, my kids are very happy. The household is very happy.
Everybody gets along. Everybody loves each other.
Yeah.

Speaker 2 Some of them are way ahead of their peers. Nobody is behind in anything that matters.

Speaker 2 Nobody seems unhealthy in any obvious way. No one has aberrant eating habits.
I haven't even found

Speaker 2 really an aberrant behavior that's out of line. So it's all good.
Self-correcting.

Speaker 3 It's like a

Speaker 4 I worry a lot about this like iPad situation. I see my kids on an iPad and it's almost like, unless they're doing an interactive project, if they end up watching.

Speaker 2 Says the guy who has a video game theme in the background.

Speaker 2 and who probably grew up playing video games non-stop and probably spends nine hours a day on his own screen, it's just called a phone. So yeah, it's the same thing, man.

Speaker 2 Well, I mean, I feel like the watching is the thing, but do they watch shows?

Speaker 2 No, no, that's hypocritical. There's a hypocrisy to picking up your phone and then saying to your kid, no, you can't use your iPad.

Speaker 2 I grew up playing video games non-stop, and kept playing video games when I was older. And I was an avid gamer until just a few years ago.
Well, no, I mean,

Speaker 4 I'm not criticizing the iPad. I was obviously on a computer since I was four years old.
So I totally get it.

Speaker 4 And I think the question for me is like, but I didn't have the ability to play a 30-minute show and then play the next 30-minute show and the next 30-minute show and then sit there for two hours and just have a show playing the whole time.

Speaker 4 I was, you know, interacting on the computer and doing stuff and building stuff, which was a little different for me from a use case perspective.

Speaker 2 We used to control their YouTube access, although now we don't do that. The only thing I ask them is that they put on captions when they're watching YouTube so that it helps their reading.

Speaker 2 They learn to read. That's a good tip.
Yeah, I like that one. I will say that one of my kids is really into YouTube.
The other two are not. Like, they just got over it.

Speaker 2 And to the extent that they use YouTube, it's mostly because they're looking up videos on their favorite games. They want to know how to be better at a game.

Speaker 3 All right. Let's keep moving through this docket.
We have David Sacks with us here. So, David, give us your philosophy of parenting.
Okay, next item on the docket.

Speaker 2 Let's go.

Speaker 6 Talk about some real issues.

Speaker 6 Saxony.

Speaker 3 Parenting show.

Speaker 2 A parenting show. Yeah.
I asked David, what's your parenting philosophy?

Speaker 3 He said, oh, well, I set up their trust four years ago. So he's done.
He's good.

Speaker 2 Trust is set up. Everything's good.
It's parents.

Speaker 5 G-R-A-T.

Speaker 2 Chick.

Speaker 5 You're all set, guys.

Speaker 3 Let me know how it works out.

Speaker 2 All right.

Speaker 3 Speaking of working out, we've got a vice president who isn't cuckoo for Cocoa Puffs and who actually understands what AI is. J.D.
Vance gave a great speech. I watched it myself.

Speaker 3 He talked about AI in Paris. This was on Tuesday at the AI Action Summit, whatever that is.
And he gave a 15-minute banger of a speech.

Speaker 3 He talked about over-regulating AI and America's intention to dominate this. And we happen to have with us, Naval, the czar.
The czar of AI.

Speaker 3 So before I go into all the details about the speech, I don't want to steal your thunder.

Speaker 3 this speech had a lot of verbiage, a lot of ideas that I've heard before that maybe we've all talked about. Maybe tell us a little bit about how this all came together and how proud you are.

Speaker 3 I mean, gosh, having a vice president who understands AI is just,

Speaker 3 it's mind-blowing. He can speak credibly on a topic that's topical.

Speaker 2 This was an awesome moment for America, I think.

Speaker 6 What are you implying there, J Cal?

Speaker 3 I'm implying you might have workshopped it with him.

Speaker 2 No. Or that he's smart.

Speaker 6 Both of those things. The vice president wrote the speech, or at least directed all of it.
So the ideas came from him. I'm not going to take any credit whatsoever for this.

Speaker 3 Okay. Well, it was on point.
Maybe you could talk about it.

Speaker 6 I agree it was on point. I think it was a very well-crafted and well-delivered speech.

Speaker 3 He made four main points about the Trump administration's approach to AI. He's going to ensure, this is point one, that American AI continues to be the gold standard.
Fantastic check.

Speaker 3 Two, he says that the administration understands that excessive regulation could kill AI just as it's taking off.

Speaker 3 And he did this in front of all the EU elites who love regulation, did it on their home court.

Speaker 3 And then he said, number three, AI must remain free from ideological bias, as we've talked about here on this program.

Speaker 3 And then number four, the White House, he said, will, quote, maintain a pro-worker growth path for AI so that it can be a potent tool for job creation in the U.S.

Speaker 3 So what are your thoughts on the four major bullet points in his speech here in Paris?

Speaker 6 Well, I think that the vice president, you knew he was going to deliver an important speech as soon as he got up there and said that I'm here to talk about not AI safety, but AI opportunity.

Speaker 6 And to understand what a bracing statement that was and really almost like a shot across the bow, you have to understand the history and context of these events.

Speaker 6 For the last couple of years, the last couple of these events have been exclusively focused on AI safety.

Speaker 6 The last in-person event was in the UK at Bletchley Park, and the whole conference was devoted to AI safety.

Speaker 6 Similarly, the European AI regulation obviously is completely preoccupied with safety and trying to regulate away safety risks before they happen.

Speaker 6 Similarly, you had the Biden EO, which was based around safety, and then you have just the whole media coverage around AI, which is preoccupied with all the risks from AI.

Speaker 6 So to have the vice president get up there and say right off the bat that there are other things to talk about in respect to AI besides safety risks, that actually there are huge opportunities there, was a breath of fresh air.

Speaker 6 And like I said, kind of a shot across the bow. And yeah, you could almost see some of the Eurocrats,

Speaker 6 they needed their fainting couches after that.

Speaker 3 Eurocrats.

Speaker 6 Trudeau looks like his dog just died. So I think that was just a really important statement right off the bat to set the context for the speech, which is AI is a huge opportunity for all of us.

Speaker 6 Because really that point just has not been made enough.

Speaker 6 And it's true, there are risks, but when you look at the media coverage and when you look at the dialogue that the regulators have had around this, they never talk about the opportunities.

Speaker 6 It's always just around the risks. So I think that was a very important corrective.
And then, like you said, he went on to say that the United States has to win this AI race.

Speaker 6 We want to be the gold standard. We want to dominate.

Speaker 3 That was my favorite part.

Speaker 6 Yeah. And by the way, that language about dominating AI and winning the global race, that is in President Trump's executive order from week one.

Speaker 6 So this is very much elaborating on the official policy of this administration. And the vice president then went on to say that he specified how we would do that, right?

Speaker 6 We have to win some of these key building block technologies. We want to win in chips.
We want to win in AI models. We want to win in applications.
He said we need to build.

Speaker 6 We need to unlock energy for these companies. And then most of all, we just need to be supportive towards them as opposed to regulating them to death.

Speaker 6 And he had a lot to say about the risk of over-regulation, how often it's big companies that want regulation. He warned about regulatory capture, which our friend Bill Gurley would like.

Speaker 6 And he said that, so basically having less regulation can actually be more fair. It can create a more level playing field for small companies as well as big companies.

Speaker 6 And then he said to the Europeans that we want you to be partners with us.

Speaker 6 We want to lead the world, but we want you to be our partners and benefit from this technology that we're going to take the lead in creating. But you also have to be a good partner to us.

Speaker 6 And then he specifically called out the over-regulation that Europeans have been engaged in. He mentioned the Digital Services Act, which has acted as like a speed trap for American companies.

Speaker 6 It's American companies who've been over-regulated and fined by these European regulations because the truth of the matter is that it's American technology companies that are winning the race.

Speaker 6 And so when Europe passes these onerous regulations, they fall most of all on American companies.

Speaker 6 And he's basically saying, we need you to rebalance and correct this because it's not fair and it's not smart policy and it's not going to help us collectively win this AI race.

Speaker 6 And that kind of brings me just to the last point is I don't think he mentioned China by name, but clearly he talked about.

Speaker 6 adversarial countries who are using AI to control their populations, to engage in censorship and thought control.

Speaker 6 And he basically painted a picture where it's like, yeah, you could go work with them or you could work with us. And we have hundreds of years of shared history together.

Speaker 6 We believe in things like free speech, hopefully. And we want you to work with us.

Speaker 6 But if you are going to work with us, then you have to cooperate and we have to create a reasonable regulatory regime.

Speaker 3 Naval, did you see the speech? And your thoughts just generally on JD Vance and having somebody like this, you know, representing us and wanting America to win.

Speaker 2 I thought he was very surprising, very impressive. I thought he was polite, optimistic, and just very forward-looking.
It's what you would expect an entrepreneur or a smart investor to say.

Speaker 2 So I was very impressed. I think the idea that America should win, great.
I think that we should not regulate. I also agree with it.
I'm not an AI doomer. I don't think AI is going to end the world.

Speaker 2 That's a separate conversation. But there's this religion that comes along in many faces, which is that, oh, climate change is going to end the world.
AI is going to end the world.

Speaker 2 Asteroid is going to end the world. COVID-19 is going to end the world.
And it just has a way of fixating your attention, right? It captures everybody's attention at once.

Speaker 2 So it's a very seductive thing.

Speaker 2 And I think in the case of AI, it's really been overplayed by incentive bias, you know, motivated reasoning by the companies who are ahead and they want to pull up the ladder behind them.

Speaker 2 I think they genuinely believe it. I think they genuinely believe that there's safety risk, but I think they're motivated to believe in those safety risks and then they pass that along.

Speaker 2 But it's kind of a weird position, because they have to say, oh, it's so dangerous that you shouldn't just let open source go at it, and you should let just a few of us work with you on it. But it's not so dangerous that a private company can't own the whole thing, right? Because if it were truly the Manhattan Project, if they were building nuclear weapons, you wouldn't want one company to own that. Sam Altman's famously said that AI will capture the light cone of all future value. In other words, all value ever created at the speed of light from here will be captured by AI.

Speaker 2 So if that's true, then I think open source AI really matters and little tech AI really matters. The problem is that the nature of training these models is highly centralized.

Speaker 2 They benefit from supercomputer clustered compute. So it's not clear how any decentralized model can compete.

Speaker 2 So to me, the real issue boils down to is how do you push AI forward while not having just a very small number of players control the entire thing.

Speaker 2 And we thought we had that solution with the original OpenAI, which was a non-profit and was supposed to do it for humanity.

Speaker 2 But now because of they want to incentivize the team and they want to raise money, they have to privatize at least a part of it.

Speaker 2 Although it's not clear to me why they need to privatize the whole thing, like why do you need to buy out the non-profit portion?

Speaker 2 You could leave a non-profit portion and you could have the private portion for the incentives. But I think that the real challenge is how do you keep AI from naturally centralizing?

Speaker 2 Because all the economics and the technology underneath are centralizing in nature. If you really think you're going to create God, do you want to put God on a leash,

with one entity controlling God? That to me is the real fear.

Speaker 2 I'm not scared of AI. I'm scared of what a very small number of people who control AI do to the rest of us for our own good, because that's how it always works.

Speaker 3 So well said. Probably should go with the Greek model, having many gods and heroes as well. Friedberg, you heard the J.D.
Vance speech, I assume.

Speaker 3 What are your thoughts on over-regulation and maybe to Naval's point, one person owning this versus open source?

Speaker 4 I think that there's this kind of big division in society right now between what I would call techno-optimism and techno-pessimism. Generally, people sort of fall into one of those two camps.

Speaker 4 Generally speaking, techno-optimists, I would say, are folks that believe that accelerating outcomes with AI, with automation, with bioengineering, manufacturing, semiconductors, quantum computing, nuclear energy, et cetera, will usher in this era of abundance.

Speaker 4 By creating leverage, which is what technology gives us,

Speaker 4 technology will make things cheaper and it will be deflationary and it will give everyone more. So it creates abundance.

Speaker 4 The challenge is that people who already have a lot worry more about the exposure to the downside than they desire the upside.

Speaker 4 And so, you know, the techno-pessimists, which are generally the EU and large parts, frankly, of the United States, are worried about the loss of X, the loss of jobs, the loss of this, the loss of that.

Speaker 4 Whereas countries like China and India are more excited about the opportunity to create wealth, the opportunity to create leverage, the opportunity to create abundance for their people.

Speaker 4 You know, GDP per capita in the EU is $60,000 a year. GDP per capita in the United States, like 82,000.
But GDP per capita in India is 2,500 and China is 12,600.

Speaker 4 There's a greater incentive in those countries to manifest upside than there is for the United States and the EU, who are more worried about manifesting downside.

Speaker 4 And so it is a very difficult kind of social battle that's underway.

Speaker 4 I do think, like, over time, those governments and those countries and those social systems that embrace these technologies are going to become more capitalist.

Speaker 4 And they're going to require less government control and intervention in job creation, the economy, payments to people, and so on.

Speaker 4 And the countries that are more techno-pessimistic are unfortunately going to find themselves asking for greater government control, government intervention in markets, governments creating jobs, government making payments to people, governments effectively running the economy.

Speaker 4 My personal view, obviously, is that I'm a very strong advocate for technology acceleration, because I think in nearly every case in human history, when a new technology has emerged, we've largely found ourselves assuming that the technology works in the framework of today or of yesteryear.

Speaker 4 The automobile came along. And no one envisioned that everyone in the United States would own an automobile.

Speaker 4 And therefore, you would need to create all of these new industries like mechanics and car dealerships, roads, all the people servicing and building roads and all the other industry that emerged.

Speaker 2 Motowns.

Speaker 4 And it's very hard for us to sit here today and say, okay, AI is going to destroy jobs. What's it going to create? And be right.

Speaker 4 I think we're very likely going to be wrong, whatever estimations we give.

Speaker 4 The area that I think is most underestimated is large technical projects that seem technically infeasible today that AI can unlock. For example, habitation in the oceans.

Speaker 4 Like it's very difficult for us to envision like creating cities underwater and creating cities in the oceans or creating cities on the moon or creating cities on Mars or finding new places to live.

Speaker 4 Those are like technically, people might argue, oh, that sounds stupid. I don't want to go do that.
But at the end of the day, like human civilization will drive us to want to do that.

Speaker 4 But those technically are very hard to pull off today. But AI can unlock a new set of industries to enable those transitions.

Speaker 4 So I think we really get it wrong when we try to transplant the new technology into the framework of last year or last century,

Speaker 4 and then we kind of become techno-pessimists because we're worried about losing what we have.

Speaker 3 Are you a techno-pessimist? Are you an optimist?

Speaker 3 Because you bring up the downside an awful lot here on the program, but you are working every day in a very optimistic way to breed, you know, better strawberries and potatoes for folks.

Speaker 2 So you're a little bit of a...

Speaker 4 No, I have no techno-pessimism whatsoever. I try and point out why the other side is acting the way they are.

Speaker 2 Got it. Okay.

Speaker 3 Putting it in full context.

Speaker 4 And what I'm trying to highlight is I think that that framework is wrong.

Speaker 4 I think that that framework of trying to transplant new technology to the old way of things operating is the wrong way to think about it.

Speaker 4 And it creates this, you know, because of this manifestation about worrying about downside, it creates this fear that creates regulation like we see in the EU.

Speaker 4 And as a result, China's GDP will scale while the EU's will stagnate if that's where they go.

Speaker 2 That's my assessment or my opinion on what will happen.

Speaker 3 Chamath, you want to wrap this up for us? What are your thoughts on JD?

Speaker 5 I'll give you two. Okay.
The first is I would say this is a really interesting moment where I would call this the tale of two vice presidents.

Speaker 5 Very early in the Biden administration, Kamala was dispatched on an equally important topic at that time, which was illegal immigration, and she went to Mexico and Guatemala.

Speaker 5 And so you actually have a really interesting A-B test here. You have both vice presidents dealing with what were in that moment incredibly important issues.

Speaker 5 And I think that JD was focused, he was precise, he was ambitious.

Speaker 5 And even the

Speaker 5 part of the press that was very supportive of Kamala couldn't find a lot of very positive things to say about her. And the feedback was, it was meandering.
She was ducking questions.

Speaker 5 She didn't answer the questions that she was asked very well.

Speaker 5 And it's so interesting because it's a bit of a microcosm then to what happened over these next four years and her campaign, quite honestly, which you could have taken that window of that feedback.

Speaker 5 And unfortunately for her, it just continued to be very consistent.

Speaker 5 So that was one observation I had because I heard him give the speech, I heard her, and I had this kind of moment where I was like, wow, two totally different people.

Speaker 5 The second is on the substance of what JD said. I said this on Tucker, and I'll just simplify all of this into a very basic framework, which is

Speaker 5 if you want a country to thrive, it needs to have economic supremacy and it needs to have military supremacy. In the absence of those two things, societies crumble.

Speaker 5 And the only thing that underpins those two things is technological supremacy. And we see this today.
So on Thursday, what happened with Microsoft?

Speaker 5 They had a $24 billion contract with the United States Army to deliver some whiz-bang thing.

Speaker 5 And they realized that they couldn't deliver it. And so what did they do? They went to Anduril.
Now, why did they go to Anduril? Because Anduril has the technological supremacy to actually execute.

Speaker 5 A few weeks ago, we saw some attempts at technological supremacy from the Chinese with respect to DeepSeek. So I think that this is a very simple existential battle.
Those who can harness and govern

Speaker 5 the things that are technologically superior will win, and it will drive economic vibrancy and military supremacy, which then creates safe, strong societies.
That's it.

Speaker 5 So from that perspective, JD nailed it. He saw the forest from the trees.
He said exactly what I think needed to be said and put folks on notice that you're either on the ship or you're off the ship.

Speaker 5 And I think that that was really good.

Speaker 3 Yeah. And

Speaker 3 there was like a little secondary conversation that emerged, Sacks, that I would love to engage you with if you're willing,

Speaker 3 which is

Speaker 3 this civil war, quote unquote, between maybe MAGA 1.0, MAGA 2.0, techies in the MAGA party like ourselves, and maybe

Speaker 3 the core MAGA folks. We can pull up the tweet here in J.D.'s own words, and he's been engaging people in his own words.

Speaker 3 It's very clear that he's writing these tweets, a distinct difference between other politicians and this administration. They just tell you what they think.
Here it is.

Speaker 3 I'll try and write something to address this in detail, says J.D. Vance's tweet.
But I think this civil war is overstated.

Speaker 3 Though yes, there are some real divergences between the populists, I would describe that as MAGA, and the techies, but briefly, in general, I dislike substituting American labor for cheap labor.

Speaker 3 My views on immigration and offshoring flow from this, I like growth and productivity gains, and this informs my view on tech and regulation.

Speaker 3 When it comes to AI specifically, the risks are, number one, overstated, to your point, Naval, or two, difficult to avoid. One of my many real concerns, for instance, is about consumer fraud.

Speaker 3 That's a valid reason to worry about safety. But the other problem is much worse if a peer nation is six months ahead of the U.S.
on AI. Again, I'll try and say more.

Speaker 3 And this is JD going right at, I think, one of the more controversial topics, Sacks,

Speaker 3 that the administration is dealing with and has dealt with when it comes to immigration and tech, because these two things are dovetailing with each other.

Speaker 3 If we lose millions of driver jobs, which we will in the next 10 years, just like we lost millions of cashier jobs.

Speaker 3 Well, that's going to impact how our nation and many of the voters look at the border and immigration.

Speaker 3 We might not be able to let as many people immigrate here if we're losing millions of jobs to AI and self-driving cars. What are your thoughts on him engaging this directly, Sax?

Speaker 6 Well, the first point he's making there is about wage pressure, right?

Speaker 6 Which is when you throw open our borders or you throw open American markets to products that can be made in foreign countries by much cheaper labor that's not held to the same standards, the same minimum wage or the same union rules or the same safety standards that American labor is and has a huge cost advantage, then you're creating wage pressure for American workers.

Speaker 6 And he's opposed to that.

Speaker 6 And I think that is an important point because I think the way that the media or neoliberals like to portray this argument is that somehow MAGA's resistance to unlimited immigration is somehow based on xenophobia or something like that.

Speaker 6 No, it's based on bread and butter kitchen table issues, which is if you have this ridiculous open border policy, it's inevitably going to create a lot of wage pressure for people at the bottom of the pyramid.

Speaker 6 So I think JD is making that argument. But then, this is point two, he's saying, I'm not against productivity growth.

Speaker 6 So technology is good because it enables all of our workers to improve their productivity. And that should result in

Speaker 6 better wages because workers can produce more. The value of their labor goes up if they have more tools to be productive.
So there's no contradiction there.

Speaker 6 And I think he's explaining why there isn't a contradiction.

Speaker 6 A point I would add, he doesn't make this point in that tweet, but I would add is that one of the problems that we've had over the last, I don't know, 30 years is that we have had tremendous productivity growth in the U.S., but labor has not been able to capture it.

Speaker 6 All that benefit has basically gone to capital or to companies. And I think a big part of the reason why is because we've had this largely unrestricted immigration policy.

Speaker 6 So I think if you were to tamp down on immigration, if you were to stop the illegal immigration, then labor might be able to capture more of the benefits of productivity growth.

Speaker 6 And that would be a good thing. It'd be a more equitable distribution of the gains from productivity and from technology.
And that, I think, would help

Speaker 6 tamp down this growing conflict that you see between

Speaker 6 technologists and the rest of the country, or certainly the heartland of the country.

Speaker 3 Naval, this is a... okay, you want to add anything else, David? Sorry.

Speaker 6 Well, I think just the final point he makes in that tweet is that he talks about how we live in a world in which there are other countries that are competitive.

Speaker 6 And specifically, he doesn't mention China, but he says we have a peer competitor.

Speaker 6 And it's going to be a much worse world if they end up being six months ahead of us on AI rather than six months behind. That is a really important point to keep in mind.

Speaker 6 I think that the whole Paris AI summit took place against the backdrop of this recognition because just a few weeks ago, we had DeepSeek. And it's really clear that China is not a year behind us.

Speaker 6 They're hot on our heels or only maybe months behind us.

Speaker 6 And so if we hobble ourselves with unnecessary regulations, if we make it more difficult for our AI companies to compete, that doesn't mean that China is going to follow suit and copy us.

Speaker 6 They're going to take advantage of that fact and they're going to win.

Speaker 3 All right, Naval, this seems to be one of the main issues of our time. Four of the five people on this podcast right now are immigrants.
So we have this amazing tradition in America.

Speaker 3 This is a country built by immigrants for immigrants. Do you think that should change now in the face of job destruction, which I know you've been tracking self-driving pretty acutely?

Speaker 3 We both have an interest there, I think, over the years.

Speaker 3 You know,

Speaker 3 what's the solution here if we're going to see a bunch of job displacement, which will happen for certain jobs.

Speaker 2 We all kind of know that.

Speaker 3 Should we shut the border and not let the next Naval, Chamath, Sacks, and Friedberg into the country?

Speaker 2 Well, let me declare my biases up front. I'm a first-generation immigrant.
I moved here when I was nine years old. Rather, my parents did, and then I'm a naturalized citizen.

Speaker 2 So obviously, I'm in favor of some level of immigration. That said, I'm assimilated.
I consider myself an American first and foremost. I bleed red, white, and blue.

Speaker 2 I believe in the Bill of Rights and the Constitution. First and second and fourth and all the proper amendments.

Speaker 2 I get up there every July 4th and I deliberately defend the Second Amendment on Twitter, at which point half my followers go bananas,

Speaker 2 you know, because they're not supposed to. I'm supposed to be a good immigrant, right? And carry the usual set of coherent leftist policies, globalist policies.

Speaker 2 So I think that legal high-skill immigration with room and time for assimilation makes sense.

Speaker 2 You want to have a brain drain on the best and brightest coming to the freest country in the world to build technology and to help civilization move forward.

Speaker 2 And, you know, as Chamath was saying, economic power and military power is downstream of technology. In fact, even culture is downstream of technology.

Speaker 2 Look at what the birth control pill did, for example, to culture, or what the automobile did to culture, or what radio and television did to culture, and then the internet.

Speaker 2 So technology drives everything. And if you look at wealth, wealth is a set of physical transformations that you can effect.
And that's a combination of capital and knowledge.

Speaker 2 And the bigger input to that is knowledge. And so the U.S.
has become the home of knowledge creation thanks to bringing in the best and brightest.

Speaker 2 You could even argue DeepSeek, part of the reason why we lost that is because a bunch of those kids, they studied in the US, but then we sent them back home. So I think you absolutely have to.

Speaker 3 Is that actually accurate? They were

Speaker 2 a few of them. Really?

Speaker 3 Oh, my God. That's like exhibit A.

Speaker 2 Wow.

Speaker 2 So I think you absolutely have to split out skilled, assimilated immigration, which is a small set.
And it has to be both. They have to both be skilled and they have to become Americans.

Speaker 2 That oath is not meaningless, right? It has to mean something. So skilled, assimilated immigration.
You have to separate that from just open borders, whoever can wander in, just come on in.

Speaker 2 That latter part makes no sense.

Speaker 6 If the Biden administration had only been letting in people with 150 IQs, we wouldn't have this debate right now.

Speaker 2 Absolutely.

Speaker 6 The reason why we're having this debate is because

Speaker 6 they just opened the border and let millions and millions of people in.

Speaker 2 It was to their advantage to conflate legal and illegal immigration. So every time you'd be like, well, we can't just open the borders, you'd say, well, what about Elon? What about this?

Speaker 2 And they would just parade them out.

Speaker 6 If they were just letting in the Elons and the Jensens and the Freedbergs, we wouldn't be having the same conversation today.

Speaker 5 The correlation between open borders and wage suppression is irrefutable. We know that data.
And I think that the Democrats,

Speaker 5 for whatever logic, committed

Speaker 5 an incredible error in basically undermining their core cohort. I want to go back to what you said because I think it's super important.
There is a new political calculus on the field.

Speaker 5 And I agree with you. I think that the three cohorts of the future are the asset-light working and middle class.
That's cohort number one. There are

Speaker 5 probably 100 to 150 million of those folks. Then there are patriotic business owners.
And then there's leaders in innovation. Those are the three.

Speaker 5 And I think that what MAGA gets right is they found the middle ground that intersects those three cohorts of people.

Speaker 5 And so every time you see this sort of left versus right dichotomy, it's totally miscast. And it sounds discordant to so many of us because that's not how any of us identify, right?

Speaker 5 And I think that that's a very important observation because the policies that we adopt will need to reflect those three cohorts. What is the common ground amongst those three?

Speaker 5 And on that point, Naval is right.

Speaker 5 There's not a lot that those three would say is wrong with a very targeted form of extremely useful legal immigration of very, very, very smart people who agree to assimilate and be a part of America.

Speaker 5 I mean, I'm so glad you said it the way you said it. Like, I remember growing up where my parents would try to pretend that they were in Sri Lanka.
And sometimes I would get so frustrated.

Speaker 5 I'm like, if you want to be in Sri Lanka, go back to Sri Lanka.

Speaker 5 I want to be Canadian because it was easier for me to make friends. It was easier for me to have a life.
I was trying my best. I wanted to be Canadian.

Speaker 5 And then when I moved to the United States 25 years ago, I wanted to be American. And I feel that I'm American now and I'm proud to be an American.
And I think that's what you want.

Speaker 5 You want people that embrace that. It doesn't mean that we can't dress up in a salwar kameez every now and then.
But the point is, like, what do you believe?
But the point is, like, what do you believe?

Speaker 2 And where is your loyalty?

Speaker 3 Freeberg, we used to have this concept of a melting pot, of assimilation. And that was a good thing.

Speaker 3 Then it became cultural appropriation. We kind of made a right turn here. Where do you stand on this: recruiting the best and brightest and forcing them to assimilate, making sure that they're down?

Speaker 2 Jason, find the people that want to be here. Yeah, let me restate that: I reject the premise of this whole conversation.

Speaker 6 Wait, wait, hold on. Look, I'm a first-generation American who moved here when I was five and became a citizen when I was 10. And yes, I'm fully American.

Speaker 6 And that's the only country I have any loyalty to.

Speaker 6 But the premise that I reject here is that somehow an AI conversation leads to an immigration conversation because millions of jobs are going to be lost. We don't know that.

Speaker 5 That's also true.

Speaker 6 I agree. You're making a huge assumption.
You're completely buying into the doomerism that AI is going to wipe out millions of jobs. That is not in evidence.

Speaker 2 I think that's a good question.
And furthermore,

Speaker 6 Have any jobs been lost by AI? Let's be real. We've had AI for two and a half years, and I think it's great.

Speaker 6 But so far, it's a better search engine, and it helps high school kids cheat on their essays.

Speaker 3 I mean you don't believe that self-driving is coming? Hold on a second.

Speaker 2 Sachs, you don't believe that millions, but hold on.

Speaker 2 Those driver jobs weren't even there 10 years ago. Uber came along and created all these driver jobs.
DoorDash created all these driver jobs.

Speaker 2 So what technology does, yes, technology destroys jobs, but it replaces them with opportunities that are even better.

Speaker 2 And then either you can go capture that opportunity yourself or an entrepreneur will come along and create something that allows you to capture those opportunities. AI is a productivity tool.

Speaker 2 It increases the productivity of a worker. It allows them to do more creative work and less repetitive work.
As such, it makes them more valuable.

Speaker 2 Yes, there is some retraining involved, but not a lot. These are natural language computers.
You can talk to them in plain English and they talk back to you in plain English.

Speaker 2 But I think David is absolutely right. I think we will see job creation by AI that will be as fast or faster than job destruction.
You saw this even with the internet. Like YouTube came along.

Speaker 2 Look at all these YouTube streamers and influencers. That didn't used to be a job.
New jobs, really opportunities, because job is a wrong word. Job implies someone else has to give it to me.

Speaker 2 And it's sort of like they're handed out. It's a zero-sum game.
Forget all that. It's opportunities.

Speaker 2 After COVID, look at how many people are making money by working from home in mysterious little ways on the internet that you can't even quite grasp.

Speaker 6 Here's the way I categorize it, okay?

Speaker 6 Is that whenever you have a new technology, you get productivity gains, you get some job disruption, meaning that part of your job may go away, but then you get other parts that are new and hopefully more elevated, you know, more interesting.

Speaker 6 And then there is some job loss.

Speaker 6 I just think that the third category will follow the historical trend, which is that the first two categories are always bigger and you end up with more net productivity and more net wealth creation.

Speaker 6 And we've seen no evidence to date that that's not going to be the case. Now, it's true that AI is about to get more powerful.

Speaker 6 You're going to see a whole new wave of what are called agents this year, agentic products that are able to do more for you.

Speaker 6 But there's no evidence yet that those things are going to be completely unsupervised and replace people's jobs. So, you know, I think that we have to see how this technology evolves.

Speaker 6 And I think one of the mistakes of let's call it the European approach is assuming that you can predict the future with perfect accuracy, with such good accuracy that you can create regulations today.

Speaker 6 that are going to avoid all these risks in the future. And we just don't know enough yet to be able to do that.
That's a false level of certainty.

Speaker 5 I agree with you.

Speaker 5 And the companies that are promulgating that view are, as Naval said, those that have an economic vested interest in at least convincing the next incremental investor that this could be true, because they want to make the claim that all the money should go to them so they can hoover up all the economic gains.

Speaker 5 And that is the part of the cycle we're in.

Speaker 5 So if you actually stratify these reactions, there's the small startup companies in AI that believe there's a productivity leap to be had and that there's going to be prosperity.

Speaker 5 Then there's everybody on the sidelines watching, and then a few companies that have an extremely vested interest in being a gatekeeper, because they need to raise the next $30 or $40 billion, trying to convince people that that's true.

Speaker 5 And if you view it through that lens, you're right, Sachs. We have not accomplished anything yet that proves that this is going to be cataclysmically bad.

Speaker 5 And if anything, right now, history would tell you it's probably going to be like the past, which is generally productive and accretive to society.

Speaker 6 Yeah.

Speaker 6 And just to bring it back to JD's speech, which is where we started, I think it was a quintessentially American speech in the sense that he said we should be optimistic about the opportunities here, which I think is basically right.

Speaker 6 And we want to lead. We want to take advantage of this.
We don't want to hobble it. We don't even fully know what it's going to be yet.
We are going to center workers. We want to be pro-worker.

Speaker 6 And I think that if... there are downsides for workers, then we can mitigate those things in the future.
But it's too early to say that we know what the program should be.

Speaker 6 It's more about a statement of values at this point.

Speaker 3 Do you think it's too early, Freeberg, given Optimus and all these robots being created, and what we're seeing in self-driving? You've talked about the ramp-up with Waymo.

Speaker 3 Is it too early to actually say we will not see millions of jobs, and millions of people, get displaced from those jobs? What do you think, Freeberg? I'm curious about your thoughts, because that is the counter-argument.

Speaker 4 My experience in the workplace is that AI tools

Speaker 4 that are doing things that an analyst or knowledge worker was doing with many hours in the past is allowing them to do something in minutes.

Speaker 4 That doesn't mean that they spend the rest of the day doing nothing.

Speaker 4 What's great for our business and for other businesses like ours that can leverage AI tools is that those individuals can now do more.

Speaker 4 And so our throughput, our productivity as an organization has gone up. And we can now create more things faster.

Speaker 4 So whatever the product is that my company makes, we can now make more things more quickly. We can do more development.

Speaker 2 You're seeing that on the ground, correct, at Ohalo?

Speaker 4 And I'm seeing it on the ground. And I don't think that this like transplantation of how bad AI will be for jobs is the right framing as much as it is about an acceleration of productivity.

Speaker 4 And this is why I go back to the point about GDP per capita and GDP growth.

Speaker 4 Countries, societies, areas that are interested or industries that are interested in accelerating output, in accelerating productivity, the ability to make stuff and sell stuff, are going to rapidly embrace these tools because it allows them to do more with less.

Speaker 4 And I think that's what I really see on the ground.

Speaker 4 And then the second point I'll make is the one that I mentioned earlier, and I'll wrap up with a third point, which is: I think we're underestimating the new industries that will emerge drastically, dramatically.

Speaker 4 There is going to be so much new shit that we are not really thinking deeply about right now

Speaker 4 that we could do a whole nother two-hour brainstorming session on what AI unlocks in terms of large-scale projects that are traditionally or typically or today held back because of the constraints on the technical feasibility of these projects.

Speaker 4 And that ranges from accelerating to new semiconductor technology to quantum computing to energy systems to transportation to habitation, et cetera, et cetera.

Speaker 4 There's all sorts of transformations possible in every industry as these tools come online.
And that will spawn insane new industries.

Speaker 4 The most important point is the third one, which is we don't know the overlap of job loss and job creation, if there is one.

Speaker 4 And so we don't yet know the rate at which these new technologies will impact old markets and create new ones. But I think Naval is right.

Speaker 4 I think that what happens in capitalism and in free societies is that capital and people rush to fill the hole of new opportunities that emerge because of AI, and that those grow more quickly than the old bubbles deflate.

Speaker 4 So if there's a deflationary effect in terms of job need in older industries, I think that loss will happen more slowly than the rush to take advantage of creating new things on the other side.

Speaker 4 So my bet is that new things will probably be created faster than old things will be lost.

Speaker 2 I think actually, as a quick side note to that, the fastest way to help somebody get a job right now, if you know somebody in the market who's looking for a job, the best thing you can do is say, hey, go download the AI tools and just start talking to them.

Speaker 2 Just start using them in any way. And then you can walk into any employer in almost any field and say, hey, I understand AI, and they'll hire you in the same way.
Exactly.

Speaker 3 Naval, you and I watched this happen. We had a front row seat to it.

Speaker 3 Back in the day, when you were doing Venture Hacks and I was doing Open Angel Forum, we had to, like, fight to find five or 10 companies a month. Then the cost of running these companies went down.

Speaker 3 They went down massively, from $5 million to start a company to $2 million, then to $250K, then to $100K.

Speaker 3 I think what we're seeing is like three things concurrently. You're going to see all these jobs go away for automation, self-driving cars, cashiers, et cetera.

Speaker 3 But we're going to also see static team size at places like Google. They're just not hiring because they're just having the existing bloated employee base learn the tools.

Speaker 3 But I don't know if you're seeing this. The number of startups able to get a product to market with two or three people and get to a million in revenue is booming.

Speaker 3 What are you seeing in the startup landscape?

Speaker 2 Definitely what you're saying in that there's leverage. But at the same time, I think the more interesting part is that new startups are enabled that could not exist otherwise.

Speaker 2 My last startup, AirChat, could not have existed without AI because we needed the transcription and translation.

Speaker 2 Even the current thing I'm working on, it's not an AI company, but it cannot exist without AI. It is relying on AI.
Even at AngelList, we're significantly adopting AI.

Speaker 2 Like everywhere you turn, it's more opportunity, more opportunity, more opportunity. And people like to go on

Speaker 2 Twitter or the artist formerly known as Twitter. And

Speaker 2 basically they like to exaggerate. Like, oh my God, we've hit AGI.
Oh, my God, I just replaced all my mid-level engineers. Oh, my God, I've stopped hiring.
To me, that's like moronic.

Speaker 2 The two valid ones are the one-man entrepreneur shows where there's like one guy or one gal and they're like scaling up like crazy thanks to AGI.

Speaker 2 Or there are people who are embracing AI and being like, I need to hire and I need to hire anyone who can even spell AI, like anyone who's even used AI. Just come on in, come on in.

Speaker 2 Again, I would say the easiest way to see that AI is not taking jobs but creating opportunities is to go brush up on your AI, learn a little bit, watch a few videos, use the AI, tinker with it, and then go reapply for that job that rejected you and watch how they pull you in.

Speaker 6 In 2023, an economist named Richard Baldwin said, AI won't take your job. It's someone using AI that will take your job because they know how to use it better than you.

Speaker 6 And that's kind of become a meme and you see it floating around X, but I think there's a lot of truth in that.

Speaker 6 You know, as long as you remain adaptive and you keep learning and you learn how to take advantage of these tools, you should do better.

Speaker 6 And if you wall yourself off from the technology and don't take advantage of it, that's when you put yourself at risk.

Speaker 2 Another way to think about it is these are natural language computers. So everyone who's intimidated by computers before should no longer be intimidated.

Speaker 2 You don't need to program anymore in some esoteric language or learn some obscure mathematics to be able to use these. You can just talk to them and they talk back to you.
That's magic.

Speaker 3 The new programming language is English. Chamath, you want to wrap us up here on this opportunity slash displacement slash chaos?

Speaker 5 I was going to say this before, but I'm pretty unconvinced anymore that you

Speaker 5 should bother even learning many of the hard sciences and maths that we used to learn as underpinnings. Like, I used to believe that the right thing to do was for everybody to go into engineering.

Speaker 5 I'm not necessarily as convinced as I used to be because I used to say, well, that's great first principles thinking, et cetera, et cetera.

Speaker 5 And you're going to get trained in a toolkit that will scale. And I'm not sure that that's true.

Speaker 5 I think like you can use these agents and you can use deep research and all of a sudden they replace a lot of that skill. So what's left over?

Speaker 5 It's creativity, it's judgment, it's history, it's psychology, it's all of these other sort of soft skills, leadership, communication, that allow you to manipulate these models in constructive ways.

Speaker 5 Because when you think of like the prompt engineering that gets you to great answers, it's actually just thinking in totally different orthogonal ways and non-linearly.

Speaker 5 So that's my last thought, which is it does open up the aperture, meaning for every smart mathematical genius, there's many, many, many other people who have high EQ.

Speaker 5 And all of a sudden, this tool actually takes the skill away from the person with just a high IQ and says, if you have these other skills now, you can compete with me equally.

Speaker 5 And I think that that's liberating for a lot of people.

Speaker 3 I'm in the camp of more opportunity. You know, I got to watch the movie industry a whole bunch.

Speaker 3 when the digital cameras came out and more people started making documentaries, more people started making independent film shorts.

Speaker 3 And then, of course, the YouTube revolution: people started making videos on YouTube, or podcasts like this. And if you look at what happened with, like, the special effects industry as well, we need far fewer people to make a Star Wars movie, to make a Star Wars series, to make a Marvel series. As we've seen now, we can get The Mandalorian, Ahsoka, and all these other series with smaller numbers of people, and they look better than, obviously, the original Star Wars series or even the prequels. So there's going to be so many more opportunities. We're now making more TV shows, more series, everything we wanted to see of every little character.

Speaker 3 That's the same thing that's happening in startups. I can't believe that there is an app now, Naval, called Slopes, just for

Speaker 3 skiing. And there are 20 really good apps for just meditation.
And there are 10 really good ones just for fasting.

Speaker 3 Like, we're going down this long tail of opportunity, and there'll be plenty of million to $10 million businesses for us, you know, if people learn to use these tools.

Speaker 6 I love how that's the thing that tips you over.

Speaker 2 Which one?

Speaker 6 You get an extra Marvel movie or an extra Star Wars show. So that tips you over.

Speaker 6 I think for a lot of people, it feels great that AI may take over the world, but I'm going to get an extra Star Wars movie.

Speaker 2 I've been there. So I'm clear.
Yeah.

Speaker 3 I mean, are you not entertained?

Speaker 6 One final point on this is, look, I mean, given the choice between the two categories of techno-optimists and techno-pessimists, I'm definitely in the optimist camp.

Speaker 2 And

Speaker 6 I think we should be. But I think there's actually a third category that I would submit, which is technorealist, which is: technology is going to happen.

Speaker 6 Trying to stop it is like ordering the tides to stop. If we don't do it, somebody else will.
China's going to do it or somebody else will do it.

Speaker 6 And it's better for us to be in control of the technology, to be the leader, rather than passively waiting for it to happen to us. And I just think that's...

Speaker 2 always true.

Speaker 6 It's better for businesses to be proactive and take the lead, disrupt themselves instead of waiting for someone else to do it. And I think it's better for countries.

Speaker 6 And I think you did see this theme a little bit. I mean, these are my own views.

Speaker 6 I don't want to ascribe them to the vice president, but you did see, I think, a hint of the technorealism idea in his speech and in his tweet, which is, look, AI is going to happen.

Speaker 6 We might as well be the leader. If we don't, we could lose in a key category that has implications for national security, for our economy, for many things.

Speaker 6 So that's just not a world we want to live in. So I think a lot of this debate is sort of academic, because whether you're an optimist or a pessimist is sort of glass half empty, glass half full. The question is just: is it going to happen or not? And I think the answer is yes. So then we want to control it. This is, you know, let's just boil it down: there's not a tremendous amount of choice in this.

Speaker 2 I would agree heavily with one point, and I would just tweak another. The point I would agree with is that it's going to happen anyway, and that's what DeepSeek proved. You can turn off the flow of chips to them and you can turn off the flow of talent.

Speaker 2 What do they do? They just get more efficient and they exported it back to us. They sent us back the best open source model when our guys were staying closed source for safety reasons.

Speaker 2 Yeah, exactly. And I think it comes right back to the DeepSeek exploit.

Speaker 3 Safety of their equity.

Speaker 6 DeepSeek exploded the fallacy that the U.S. has a monopoly in this category, and that somehow, therefore, we can slow down the train, and that we have total control over the train.

Speaker 6 And I think what DeepSeek showed is, no, if we slow down the train, they're just going to win.

Speaker 2 Yeah.

Speaker 2 The part where I try to tweak a little bit is the idea that we are going to win.

Speaker 2 By we, when you say America, the problem is that the best way to win is to be as open, as distributed, as innovative as possible.

Speaker 2 If this all ends up in the control of one company, they're actually going to be slower to innovate than if there's a dynamic system. And that dynamic system, by its nature, will be open.

Speaker 2 It will leak to China. It will leak to India.
But these things have powerful network effects. We know this about technology.
Almost all technology has network effects underneath.

Speaker 2 And so even if you are open, you're still going to win and you're still going to control the world.

Speaker 6 No, no, you look at the internet. That was all true for the internet, right? The internet's an open technology.

Speaker 2 It's based on technology.

Speaker 2 But who has the best technology? Who are the dominant companies?

Speaker 6 All the dominant companies are U.S. companies because they were in the lead.

Speaker 2 Exactly right. Exactly right.

Speaker 2 We embrace the open internet.

Speaker 4 We embrace the open internet.

Speaker 2 That was so there would be benefits for all of humanity.

Speaker 6 And I think the vice president's speech was really clear that, look, we want you guys to be on board. We want to be good partners.

Speaker 6 However, there are definitely going to be winners economically, militarily. And in order to be one of those winners, you have to be a leader.

Speaker 3 Who's going to get to AGI first, Naval? Is it going to be open source? Who's going to win? Is it going to be open source or closed source?

Speaker 2 Who's going to win the day?

Speaker 2 If we're sitting here five, 10 years from now and we're looking at the top three language models, I'm going to get you in trouble for this, but I don't think we know how to build AGI.

Speaker 2 But that's a much longer question.

Speaker 3 Okay, put AGI aside. Who's going to have the best model five years from now?

Speaker 5 Hold on. I 100% agree with you.

Speaker 2 I just think it's a different thing. But what we're building are these incredible natural language computers.

Speaker 2 And actually, David in a very pithy way summarized the two big use cases: it's search and it's homework. It's paperwork, it's really paperwork. And these jobs that we're talking about disappearing are actually paperwork jobs. They're paperwork shuffling. These are made-up jobs. Like the federal government: as we're finding out through DOGE, you know, a third of it is like people digging holes with spoons and another third are filling them back up. They're filling out paperwork and then burying it in a mine shaft. There's a mine shaft in Iron Mountain, yeah. So I think a lot of these made-up jobs... they go down the shaft to get the paperwork when someone retires and bring it back up.

Speaker 3 You know what? I'm going to get them some thumb drives. We can increase the throughput of the elevator with some thumb drives.
It would be incredible.

Speaker 2 What we found out is the DMV has been running the government for the last 70 years. It's been compounding and compounding.

Speaker 2 That's really what's going on.

Speaker 4 DMV is in charge.

Speaker 6 I mean, if the world ends in nuclear war, God forbid, the only thing that'll be left will be the cockroaches and then a bunch of, like, government

Speaker 2 documents. TPS reports.
The TPS reports down in the mine shaft.

Speaker 3 Basically, yeah.

Speaker 3 Let's take a moment, everybody, to thank our czar.

Speaker 3 We miss him. We wish he could be here for the whole show.

Speaker 5 Thank you, Czar.

Speaker 2 Thank you to the Czar. Thank you, guys.

Speaker 3 We miss you. We miss you, little buddy.
I wish we could talk about Ukraine, but we're not allowed. Get back to work.
We'll talk about it another time over coffee.

Speaker 3 I'll see you in the commissary. Thanks for the invite.
Bye. Oh, man.
I'm so excited, Naval.
Sacks invited me to go to the military mess. I'm going to be in the commissary.

Speaker 2 No, he didn't, J-Cal.

Speaker 4 You invited yourself. Be honest.

Speaker 3 I did. Yes, I did.

Speaker 2 I put it on his calendar. To keep the conversation moving, let me segue a point that came up that was really important into tariffs.
And the point is,

Speaker 2 even though the internet was open, the U.S. won a lot of the internet.
A lot of U.S. companies won the internet.

Speaker 2 And they won that because we got there the firstest with the mostest, as they say in the military.

Speaker 2 And that matters because a lot of technology businesses have scale economies and network effects underneath. Even basic brand-based network effects.

Speaker 2 If you go back to the late 90s, early 2000s, very few people would have predicted that we would have ended up with Amazon basically owning all of e-commerce.

Speaker 2 You would have thought it would have been perfect competition and very spread out. And that applies to how we ended up with Uber as basically one taxi service, or how we ended up with Airbnb, or Meta.

Speaker 2 It's just network effects, network effects, network effects rule the world around me. But when it comes to tariffs and when it comes to trade, we act like network effects don't exist.

Speaker 2 The classic Ricardian comparative advantage dogma says that you should produce what you're best at, I produce what I'm best at, and we trade.

Speaker 2 And then even if you want to charge me more for it, if you want to impose tariffs for me to ship to you, I should still keep tariffs down because I'm better off.

Speaker 2 You're just selling me stuff cheaply, great.

Speaker 2 Or if you want to subsidize your guys, great. You're selling me stuff cheaply.
The problem is that is not how most modern businesses work. Most modern businesses have network effects.

Speaker 2 As a simple thought experiment, suppose that we have two countries, right? I'm China, you're the U.S. I start out by subsidizing all of my companies and industries that have network effects.

Speaker 2 So I'll subsidize TikTok. I'll ban your social media, but I'll push mine.

Speaker 2 I will subsidize my semiconductors, which do tend to have winner-take-all in certain categories, or I'll subsidize my drones and then

Speaker 2 exactly, BYD, self-driving, whatever. And then when I win, I own the whole market and I can raise prices.
And if you try to start up a competitor, then it's too late. I've got network effects.

Speaker 2 Or if I've got scale economies, I can lower my price to zero, crash you out of business. No one in their right mind will invest and I'll raise prices right back up.

Speaker 2 So you have to understand that certain industries have hysteresis or they have network effects or they have economies of scale. And these are all the interesting ones.

Speaker 2 These are all the high-margin businesses. So in those, if somebody is subsidizing or they're raising tariffs against you to protect your industries and let them develop, you do have to do something.

Speaker 2 You can't just completely back down.
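Naval's thought experiment here is, in effect, a tipping model: once network effects are in play, a temporary subsidy can decide which platform wins, and the winner keeps the market even after the subsidy ends. Below is a minimal illustrative sketch of that dynamic; the platforms, weights, and numbers are invented for illustration and are not anything cited on the show.

```python
# Illustrative only: two platforms with identical intrinsic quality, where users
# value being on the platform that already has more users (a network effect).
# A temporary subsidy for B tips the market, and the tip persists afterward.

def simulate(subsidy_rounds: int, rounds: int = 60) -> tuple[float, float]:
    quality_a = quality_b = 1.0   # identical products
    network_weight = 2.0          # how much users value the larger network
    share_a = 0.5                 # start with an even market split
    churn = 0.1                   # fraction of users who re-choose each round

    for t in range(rounds):
        bonus_b = 0.5 if t < subsidy_rounds else 0.0   # temporary subsidy for B
        util_a = quality_a + network_weight * share_a
        util_b = quality_b + network_weight * (1 - share_a) + bonus_b
        if util_a > util_b:
            share_a += churn * (1 - share_a)   # switchers flow toward A
        elif util_b > util_a:
            share_a -= churn * share_a         # switchers flow toward B
    return share_a, 1 - share_a

print(simulate(subsidy_rounds=0))    # no subsidy: the market stays split 50/50
print(simulate(subsidy_rounds=10))   # a brief subsidy: B ends up dominant for good
```

The point is the last line: remove the subsidy after ten rounds and B still wins, because by then the network effect alone favors it. Under classic comparative-advantage assumptions, with no network effects or scale economies, the subsidy would just be a gift to the importing country's consumers, which is the contrast Naval is drawing.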

Speaker 3 What are your thoughts, Chamath, about tariffs and network effects? It does seem like we do want to have redundancy in supply chains. So there are some exceptions here.

Speaker 3 Any thoughts on how this might play out? Because, yeah, Trump brings up tariffs every 48 hours, and then it doesn't seem like any of them land. So I don't know.

Speaker 3 I'm still on my 72-hour Trump rule, which is whatever he says, wait 72 hours and then maybe see if it actually comes to pass. Where do you stand on all these tariffs and tariff talk?

Speaker 5 Well, I think the tariffs will be a plug. Are they coming? Absolutely.
The quantum of them, I don't know. And I think that the way that you can figure out

Speaker 5 how extreme it will be, it'll be based on what the legislative plan is for the budget. So there's two paths right now.

Speaker 5 Path one, which I think is a little bit more likely, is that they're going to pass a slimmed-down plan in the Senate just on border security and military spending.

Speaker 5 And then they'll kick the can down the road

Speaker 5 for probably another three or four months on the budget. Plan two is this one big, beautiful bill that's working its way through the House.

Speaker 5 And there, they're proposing trillions of dollars of cuts. In that mode, you're going to need to raise revenues somehow, and especially if you're giving away tax breaks.

Speaker 5 And the only way to do that is probably through tariffs, or one way to do it is through tariffs. My honest opinion, Jason, is that I think we're in a very complicated moment.

Speaker 5 I think the Senate plan is actually on the margins more likely and better. And the reason is because I think that Trump is better off getting the next 60 to 90 days of data.

Speaker 5 I mean, we're in a real pickle here. We have persistent inflation.

Speaker 5 We have a broken Fed.

Speaker 5 They are totally asleep at the switch.

Speaker 5 And the thing that Yellen and Biden did, which in hindsight now was extremely dangerous, is they issued so much short-term paper that in totality, we have $10 trillion we need to finance in the next six to nine months.

Speaker 5 So it could be the case that

Speaker 5 we have rates that are like five, five and a quarter, five and a half percent.

Speaker 5 I think that that's extremely bad at the same time as inflation, at the same time as delinquencies are ticking up.

Speaker 5 So

Speaker 5 I think tariffs are probably going to happen.

Speaker 5 But I think that Trump will have the most flexibility if he has time to see what the actual economic conditions will be, which will be more clear in three, four, five months.

Speaker 5 And so I almost think this big, beautiful bill is actually counterproductive because I'm not sure we're going to have all the data we need to get it right.

Speaker 3 Freeberg, any thoughts on these tariffs? You've been involved in the global marketplace, especially when it comes to produce and wheat and all this corn and everything.

Speaker 3 What do you think the dynamic here is going to be? Or is it saber-rattling and a tool for Trump?

Speaker 4 The biggest buyer of U.S. ag exports is China.

Speaker 4 Ag exports are a major revenue source, major income source, and a major part of the economy for a large number of states.

Speaker 2 And so there will be,

Speaker 4 as there was in the first Trump presidency, very likely, very large transfer payments made to farmers because China is very likely going to tariff imports or stop making import purchases altogether, which is what happened during the first presidency.

Speaker 4 When they did that, the federal government, I believe, made transfer payments of north of $20 billion to farmers.

Speaker 4 This is a not negligible sum, and it's a not-negligible economic effect because there's then a rippling effect throughout the ag economy.

Speaker 4 So I think that's one key thing that I've heard folks talk about is the activity that's going to be needed to support the farm economy as the U.S.'s biggest ag customer disappears.

Speaker 4 In the early 20th century, we didn't have an income tax, and the federal revenue was almost entirely dependent on tariffs.

Speaker 4 When tariffs were cut, there was an expectation that there would be a decline in federal government revenue. But what actually happened is volume went up.

Speaker 4 So lower tariffs actually increase trade, increase the size of the economies.

Speaker 4 This is where a lot of economists base their argument: hey, guys, if we do these tariffs, it's actually going to shrink the economy. It's going to cause a reduction in trade.

Speaker 4 The counterbalancing effect is one that has not been tested in economics, right?

Speaker 4 Which is what's going to happen if simultaneously we reduce the income tax and reduce the corporate income tax and basically increase capital flows through reduced taxation while doing the tariff implementation at the same time.

Speaker 4 So it's a grand economic experiment.

Speaker 2 And I think we'll learn a lot about what's going to happen here as this all moves forward.

Speaker 4 I do think ultimately many of these countries are going to capitulate to some degree and we're going to end up with some negotiated settlement that's going to hopefully not be too short-term impactful on the economies and the people and the jobs that are dependent on trade.

Speaker 5 Economy feels like it's in a very precarious place.

Speaker 2 It does to asset holders.

Speaker 2 And obviously they've left it in a bad place in the last administration and we shut down the entire country for a year over COVID and the bill for that has come due and that's reflected in inflation.

Speaker 2 I think there are a couple other points in tariffs. First is it's not just about money.
It's also about making sure we have a functional middle class with good jobs.

Speaker 2 Because if you have a non-tariff world, maybe all the gains go to the upper class and an underclass.

Speaker 2 And then you can't have a functioning democracy when the average person is on one of those two extremes. So I think that's one issue.
Another is strategic industries.

Speaker 2 If you look at it today, probably the largest defense contractor in the world is DJI. They got all the drones.
Even in Ukraine, both sides are getting all their drone parts from DJI.

Speaker 2 Now they're getting it through different supply chains and so on. But Ukrainian drones and Russian drones, the vast majority of them are coming through China through DJI.

Speaker 2 And we don't have that industry.

Speaker 2 If we have a kinetic conflict right now and we don't have good drone supply chain internally in the US, we're probably going to lose because those things are autonomous bullets.

Speaker 2 That's the future of all warfare. We're buying F-35s and the Chinese are building swarms of nanodrones

Speaker 2 at scale. So we do have to re-onshore those critical supply chains.
And what is a drone supply chain? It's not just... there's not one thing called a drone. It's motors and semiconductors and

Speaker 2 optics and lasers and just everything across the board. So I think there are other good arguments for at least reshoring some of these industries.
We need them.
We need them.

Speaker 2 And the United States is very lucky in that it's very autarkic. We have all the resources, we have all the supplies, we can be upstream of everybody with all the energy.

Speaker 2 To the extent we're importing any energy, that is a choice we made. That is not because fundamentally we lack the energy.

Speaker 2 Yeah, because between all the oil resources and the natural gas and fracking, combined with all the work we've done in nuclear fission and small reactors, we should absolutely be energy independent.

Speaker 3 We should be running the table on it. We should

Speaker 3 have a massive surplus.

Speaker 3 And hey, you know, if you're, if you're worried about, you know, a couple of million of DoorDash Uber drivers losing their jobs to automation, like, hey, there's going to be factories to build these parts for these drones that we're going to need.

Speaker 3 So

Speaker 3 there's a lot of opportunity, I guess, for people.

Speaker 2 And there is a difference between different kinds of jobs.

Speaker 2 Those kinds of jobs are better jobs, building difficult things at scale physically that we need for both national security and for innovation.

Speaker 2 Those are better jobs than, you know, paperwork, writing essays for other people to read. Yeah.
Or even driving cars. All right.

Speaker 3 Listen, I want to get to two more stories here. We have a really interesting copyright story that I wanted to touch on.
Thompson Reuters just won the first major U.S.

Speaker 3 AI copyright case, and fair use played a major role in this decision. This has huge implications for AI companies here in the United States.

Speaker 3 Obviously, OpenAI and the New York Times, Getty Images versus Stability, we've talked about these, but it's been a little while because the legal system takes a little bit of time.

Speaker 3 And these are very complicated cases, as we've talked about. Thomson Reuters owns Westlaw.
If you don't know that, it's kind of like LexisNexis.

Speaker 3 It's one of the legal databases out there that lawyers use to find cases, et cetera. And they have a paid product with summaries and analysis of legal decisions.

Speaker 3 Back in 2020, this is two years before ChatGPT, Reuters sued a legal research competitor called Ross for copyright infringement. Ross had created an AI-powered legal search engine.
Sounds great.

Speaker 3 But Ross had asked Westlaw if it could pay for a license to its content for training. Westlaw said no.
This all went back and forth. And then Ross signed a similar deal.
This all went back and forth. And then Ross signed a similar deal.

Speaker 3 with a company called LegalEase. The problem is LegalEase's database was just copied and pasted from a bunch of Westlaw answers.

Speaker 3 So Reuters, Westlaw, sued Ross in 2020, accusing the company of being vicariously liable for LegalEase's direct infringement. Super important point.

Speaker 3 Anyway, the judge originally favored Ross and fair use. This week, the judge reversed this ruling and found Ross liable, noting that after further review, fair use does not apply in this case.

Speaker 3 This is the first major win, and we debated this. So here's a clip.
You know, you heard it here first on the All-In Pod.
You know, you heard it here first on the all-in pod.

Speaker 3 What I would say is, you know, when you look at that fair use doctrine, I've got a lot of experience with it.

Speaker 3 You know, the fourth factor test, I'm sure you're well aware of this, is the effect of the use on the potential market and the value of the work.

Speaker 3 If you look at the lawsuits that are starting to emerge, it is Getty's right to then make derivative products based on their images. I think we would all agree.

Speaker 3 Stable Diffusion, when they use the open web, that is no excuse. Using an open web crawler does not let you avoid getting a license from the original owner of that content.

Speaker 3 Just because you can technically do it doesn't mean you're allowed to do it. In fact, the open web projects that provide these say explicitly, we do not give you the right to use this.

Speaker 3 You have to then go read the copyright laws on each of those websites.

Speaker 3 And on top of that, if somebody were to steal the copyrights of other people, put it on the open web, which is happening all day long, you still, if you're building a derivative work like this, you still need to go get it.

Speaker 3 So it's no excuse that I took some site in Russia that did a bunch of copyright violations and then I indexed it for my training model. So I think this is going to result...

Speaker 5 Can you shoot me in the face and let me know when this happens?

Speaker 2 Okay.

Speaker 3 Oh, great.

Speaker 3 So, same way, same way now. Exactly, I know, me too. Yeah, okay. Good, good segment. Let's move on.

Speaker 3 Well, since these guys don't give a ... about copyright holders,

Speaker 3 what do you think about... uh, you know, I'm so glad you're here, Naval, to actually talk about the topics these two other guys wouldn't have to be aware of.

Speaker 2 I'm going to go out on an even thinner limb and say I largely agree with you. I think it's a bit rich to crawl the open web, hoover up all the data, and offer direct substitution for a lot of use cases, because now you start and end with the AI model.

Speaker 2 It's not even like you link out like Google did. And then you just close off the models for safety reasons.
I think if you trained on the open web, your model should be open source.

Speaker 3 Yeah, absolutely. That would be a fine thing.
I have a prediction here. I think this is all going to wind up like the Napster Spotify case.

Speaker 3 For people who don't know, Spotify pays, I think, 65 cents on the dollar to the original underwriters of that content, the music industry. And they figured out a way to make a business.

Speaker 3 And Napster is roadkill. I think that there is a non-zero chance, like it might be five or 10%,

Speaker 3 that OpenAI is going to lose the New York Times lawsuit and they're going to lose it hard. And there could be injunctions.

Speaker 3 And I think the settlement might be that these language models, especially the closed ones, are going to have to pay some percentage of their revenue in a negotiated settlement,

Speaker 3 half, two-thirds to the content holders. And this could make the content industry have a massive, massive uplift and a massive resurgence.

Speaker 5 I think that the problem,

Speaker 5 there's an example on the other side of this, which is that there's a company, a third-party company, that provides technical support for Oracle.

Speaker 5 And Oracle has tried umpteen times to sue them into oblivion, using copyright infringement as part of the justification.

Speaker 5 And it's been a pall over the stock for a long time. The company's name is Rimini Street.
Don't ask me why it's on my radar, but I just have been looking at it.

Speaker 5 And they lost this huge lawsuit, Oracle won, and then it went to appellate court and then it was all vacated. Why am I bringing this up?

Speaker 5 I think that the legal community has absolutely no idea how these models work because you can find one case that goes one way and one case that goes the other.

Speaker 5 And here's what I would say should become standard reading for anybody bringing any of these lawsuits:

Speaker 5 There's an incredible video that Karpathy just dropped, that Andrej just dropped, where he does, like, this deep dive into LLMs and he explains ChatGPT from the ground up. It's on YouTube.

Speaker 5 It's three hours. It's excellent.
And it's very difficult to watch that and not get to the same conclusion that you guys did. I'll just leave it at that.

Speaker 5 I tend to agree with this.

Speaker 2 There's also a good older video by Ilya Sutskever, who was, I believe, the founding chief scientist or CTO of OpenAI.

Speaker 2 And he talks about how these large language models are basically extreme compressors.

Speaker 2 And he models them entirely as their ability to compress. And they're lossy compression.
Lossy compression, exactly. Lossy, lossy compression.

Speaker 2 Exactly. Exactly.
So, and Google got sued for fair use back in the day, but the way they managed to get past the argument was they were always linking back to you. They showed tiny previews.

Speaker 2 They provided you the traffic. They sent you the traffic.
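For what it's worth, the compression framing Naval is referencing has a precise version (this gloss is ours, not something spelled out on the show). A language model's training loss is, up to rounding, the number of bits per token an arithmetic coder driven by that model would need to encode the text, since for text drawn from a distribution $p$ and a model $q$,

$$H(p, q) = -\mathbb{E}_{x \sim p}\left[\log_2 q(x)\right] = H(p) + D_{\mathrm{KL}}(p \,\|\, q),$$

so lowering the loss is literally shrinking the compressed size of the corpus. The compression is lossy in practice because the model's weights are far smaller than the data they were trained on, which is the sense in which these models compress and regurgitate rather than store text verbatim.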

Speaker 5 This is lossy compression. It is, absolutely. I'm now on your page.
I hate to say this, Jason.
I hate to say this, Jason.

Speaker 2 I agree with you.

Speaker 5 You were right.

Speaker 2 You were right. That's all I wanted to hear all these years.

Speaker 2 That's all I wanted to do. That's all I wanted to hear.
I was shaking my head when I saw those videos because I was like, oh man, Jason was right.

Speaker 3 Jason was right.

Speaker 5 Oh, my God.

Speaker 3 No, I just, I've been through this so many times that these, I think this is, you know,

Speaker 3 Rupert Murdoch said we should hold the line with Google and not allow them to index our content without a license. And Google navigated it successfully; he was not able to get them to stop.

Speaker 3 I think

Speaker 3 what's happened now is that the New York Times remembers that. They all remember losing their content and these snippets and the one box to Google, and they couldn't get that genie back in the bottle.

Speaker 3 I think the New York Times realizes this is their payday. I think the New York Times will make more money from licenses from LLMs than they will make from advertising or subscriptions eventually.

Speaker 3 This will renew the model.

Speaker 2 Almost. I think most New York Times content is worthless to an LLM, but that's a different story.
I think they actually

Speaker 2 don't have a political reason, whatever.

Speaker 3 But I can tell you, as a user, I loved the Wirecutter. I think you knew Brian and everybody over at the Wirecutter.
That was like such an innovation.
That was like such an inventory.

Speaker 2 Fair enough. Yeah, the Wirecutter.
What a great product.
What a great product.

Speaker 3 I used to pay for the New York Times. I no longer pay for the New York Times.
My main reason was I would go to the wirecutter.

Speaker 2 Yeah.

Speaker 3 And I would just buy whatever they told me to buy. Now I go to ChatGPT, which I pay for.
And ChatGPT tells me what to buy based on the Wirecutter. So that's it, and I'm already paying for it.

Speaker 3 So I stopped paying for it.

Speaker 4 I philosophically disagree with all of your nonsense on this topic.

Speaker 2 All three of you are wrong.

Speaker 4 And I'll tell you why.

Speaker 4 Number one, if information is out in the open internet,

Speaker 4 I believe it's accessible and it's viewable. And I view an LLM or a web crawler as basically being a human that's reading and can store information in its brain.

Speaker 4 If it's out there in the open, it's fair game. If it's behind a paywall, 100%, or if it's behind some password protection, that's different.

Speaker 2 Wait, wait, wait, wait, David, David.

Speaker 2 In that case, can a Google crawler just crawl an entire site and serve it on Google? Why can't they do that?

Speaker 4 So here's the fair use. The fair use is you cannot copy, you cannot repeat the content.
You cannot take the content and repeat it.

Speaker 2 That is how the law is currently written.

Speaker 2 But now what I have is a tool that can remix it with 50 other pieces of similar content, and I can change the words slightly and maybe even translate it into a different language.

Speaker 2 So where does this stop?

Speaker 4 Do you know the musical artist Girl Talk? We should have done a Girl Talk track here today.

Speaker 3 He's got musical tastes.

Speaker 2 Here we go.

Speaker 4 He basically takes small samples of popular tracks and makes mashups, and he got sued for the same problem. There was another guy named White Panda, I believe, who had the same problem.

Speaker 4 Ed Sheeran got sued for this.

Speaker 2 Yeah, but there are entire sites like Stack Overflow and WikiHow that have basically disappeared now because you can just swallow them all up and you can just spit it all back out in ChatGPT with slight changes.

Speaker 2 So,

Speaker 4 I think that the first thing is, the fair use question of how much of a slight change is exactly the right question.

Speaker 2 Yeah, which is how much are you changing? Yeah, so that's the question.

Speaker 2 And it actually butts up against the AGI question. Are these things actually intelligent and are they learning, or are they compressing and regurgitating? That's the question.

Speaker 4 I wonder this about humans, and that's why I bring up the White Panda, the Girl Talk, and audio, but also visual art.

Speaker 4 There were always artists, and even in classical music, I don't know if you guys are classical music people, but, right, there was a demonstration of how

Speaker 4 one composer learned from the next. And you can actually track the music as kind of being standing on the shoulders of the prior.

Speaker 4 And the same is true in almost all art forms and almost all human knowledge.

Speaker 2 And maybe that's right.

Speaker 2 It's very hard to figure that out.

Speaker 4 Well, that's exactly right. That's the hardest thing.

Speaker 2 It's very hard to figure that out, which is why I come back to there's only one of two stable solutions to this. And it's going to happen anyway.

Speaker 2 If we don't crawl it, the Chinese will crawl it, right? DeepSeek proved that. So there's only one of two stable solutions.

Speaker 2 Either you pay the copyright holders, which I actually think doesn't work, and the reason is because someone in China will crawl it and they just dump the weights, right?

Speaker 2 So they can just crawl and dump the compressed weights. Or if you crawl, make it open.
At least contribute something back to open source, right? You crawled open data, contributed back to open source.

Speaker 2 And the people who don't want to be crawled, they're going to have to go to huge lengths to protect their data. Now everybody knows to protect the data.

Speaker 2 Yeah, well, the licensing

Speaker 3 thing is happening here. I have a book out from Harper Business on the shelf behind me, and I'm getting 2,500 smackaroos for the next three years for Microsoft indexing it.

Speaker 3 So they're going out and they're licensing this stuff and they're getting $2,500.

Speaker 3 So literally, I'm getting $2,500 for three years, through Harper, to go into an LLM. To go into Microsoft's, specifically.
And you know what?
And you know what?

Speaker 3 I'm going to sign it, I decided, because I just want to set the precedent. Maybe next time it's $10,000.
Maybe next time it's $250,000. I don't care.

Speaker 3 I just want to see people have their content respected. And I'm just hoping that Sam Altman loses this lawsuit and they get an injunction against him.

Speaker 2 Hey,

Speaker 3 Well, just because he's just such a weasel in terms of, like, making OpenAI into a closed thing.

Speaker 2 I mean, I like Sam personally,

Speaker 3 but I think what he did was like the super weasel move of all time for his own personal benefit. If he, if he, and this whole lying, like, oh, I have no equity.
I get health care. He does it.

Speaker 2 And now I get 10%.

Speaker 5 No, bro. He does it.

Speaker 2 But he does it for the love?

Speaker 5 What was the statement? He does it for the... I do

Speaker 2 benefit.

Speaker 5 The benefits.

Speaker 3 I think he got healthcare.

Speaker 2 I think, in OpenAI's defense, they do need to raise a lot of money and they've got to incent their employees. But that doesn't mean they need to take over the whole thing.

Speaker 2 The non-profit portion can still stay the non-profit portion and get the lion's share of the benefits and be the board.

Speaker 2 And then he can have an incentive package and employees can have an incentive package.

Speaker 3 Yeah, why don't they get a percentage of the revenue?

Speaker 2 Just get some money.

Speaker 2 I don't understand why it's being bought out right now for $40 billion and then the whole thing disappears into...

Speaker 2 That part makes no sense to me.

Speaker 3 That's called a shell game and a scam.

Speaker 2 Yeah, I think Sam and his team would do better to leave the nonprofit part alone, leave an actual independent nonprofit board in charge, and then have a strong incentive plan and a strong fundraising plan for the investors and the employees.

Speaker 2 So I think this is workable.

Speaker 2 It's just that trying to grab it all seems way off, especially when it was built on open algorithms from Google, open data from the web, and on nonprofit funding from Elon and others.

Speaker 3 I mean, what a great proposal. Like we just workshopped here.
What if they just said, what do they make? $6 billion a year? Just take 10% of it, 600 million every year, and that goes into

Speaker 5 a bonus. They're losing money, Jason.
So they have to.

Speaker 3 Okay, eventually they'll.

Speaker 2 No, but even equity. They could give equity to the people building it, but they could still leave it in the control of the non-profit.
I just don't understand this conversion.

Speaker 2 I mean, there was a board coup, right? The board tried to fire Sam, and Sam took over the board. Now it's his handpicked board.
So it also looks like self-dealing, right?

Speaker 2 And yeah, they'll get an independent valuation, but we all know that game. You hire a valuation expert who's going to say what you want them to say, and they'll check a box.

Speaker 2 If they're going to capture the lion's share of all future value or build superintelligence, you know, that's worth a lot more. That's why Elon just bid a hundred billion.

Speaker 5 Exactly.

Speaker 5 You're saying the things that the regulators and the legal community have no insight into, because they'll see a fairness opinion and they think, oh, it says fairness and opinion, two words side by side.

Speaker 5 It must be fair. And they don't know how all of this stuff is gamed.
So yeah.

Speaker 3 Yeah.

Speaker 3 Man, I've got stories about 409As that would... exactly.

Speaker 2 Yeah. Everything is everything is gamed.

Speaker 5 409As are gamed. These fairness opinions are gamed.
But the

Speaker 5 reality is I don't think the legal and the judicial community has any idea.

Speaker 3 I mean, imagine if a founder you invested in, this is just a total imaginary situation, Naval, had like a great term sheet at some incredible dollar amount, didn't take it.

Speaker 3 ran the valuation down to like under a million, gave themselves a bunch of shares, and then took it three months later. I don't know.
What What would that be called?

Speaker 3 Securities fraud?

Speaker 3 Yeah, let's wrap on your story.

Speaker 5 I had an interesting, Nick will show you the photo. I had an interesting dinner on Monday with Brian Johnson, the don't die guy.
Came over to my house.

Speaker 3 How's his erection doing overnight?

Speaker 5 What we talked about is he's got three hours a night of nighttime erections.

Speaker 3 Wow. Look at this.

Speaker 5 By the way, first of all, I'll tell you, I think that

Speaker 5 he's

Speaker 2 Kuhn. Wait, which one of those is giving him the erection?

Speaker 5 No, no, no.

Speaker 2 He measures his nighttime erections.

Speaker 3 I think Kuhn is giving him the erection.

Speaker 5 But he said that when he started, so, by the way, he said he was 43 when he started this thing.
He was basically clinically obese. Yeah.
And in these last four years, he has become a specimen.

Speaker 5 He now has three hours a night of nighttime erections, but that's not the interesting thing.

Speaker 5 At the end of this dinner, by the way, his skin is incredible.

Speaker 5 I wasn't sure, because when you see the pictures online... but his skin in real life is like a porcelain doll's. Both my wife and I were like, we've never seen skin like this.

Speaker 5 And it's incredibly soft.

Speaker 3 Wait, wait, wait, wait, whoa, whoa, whoa. How do you know his skin is soft?

Speaker 5 You know, you brush your hand against his forearm or whatever. You know, he gives a hug at the end of the night.
I'm telling you, the guy

Speaker 3 had supple skin.

Speaker 5 Bro, it's the softest skin I've ever touched in my life. Anyways, that's not the point.

Speaker 5 It was really fascinating dinner. He walked through his whole protocol.
But at the end of it, I think it was Nikesh, the CEO of Palo Alto Networks. He was just like, give me the top three things.

Speaker 2 Top three.

Speaker 5 And of the top three things,

Speaker 5 what I'll boil it down to is the top one thing, which is like 80% of the 80%.

Speaker 2 It's all about sleep.

Speaker 3 I was about to guess, sleep.

Speaker 5 And he walked through his nighttime routine, and it's incredible. And it's straightforward.
It's really simple. It's like how you do a wind down.
Anyways, I have tried to.

Speaker 3 Explain the wind down.

Speaker 3 Briefly.

Speaker 5 Let's just say that, because Brian goes to bed much earlier than our normal time,
let's just say, you know, 10, 10.30. So my time, I try to go to bed by 10.30.
He's like, you need to be in bed.

Speaker 5 You need to, first of all, stop eating three or four hours before, right? And I do that. I eat at 6.30.
So I have about three hours.

Speaker 5 You're in bed by 9.30 or 10.

Speaker 5 You deal with the self-talk, right? Like, okay, here's the active mind telling you all the things you have to fix in the morning.

Speaker 2 Talk it out.

Speaker 5 put it in its place. Say, I'm going to deal with this in the morning.

Speaker 3 Write it down in a journal you're saying.

Speaker 5 Whatever you do so that you put it away.

Speaker 5 You cannot be on your phone; that's got to be in a different room, or you've just got to be able to shut it down. And then read a book, so that you're actually just engaged in something. And he said that he typically falls asleep within three to four minutes of getting into bed and starting that.

Speaker 5 I tried it. So I've been doing it since I had dinner with him on Monday. Last night I fell asleep within 15 minutes.

Speaker 5 The hardest part for me is to put the phone away. I can't do it. Of course, of course. What about you, Naval?

Speaker 3 Tell us your wind-down.

Speaker 2 Oh, yeah. So I know Brian pretty well, actually.
And I joke that I'm married to the female Brian Johnson because my wife has some of his routines.

Speaker 2 But she's the natural version, no supplements, and she's intense.

Speaker 2 And I think when Brian saw my sleep score from my

Speaker 2 Eight Sleep, he was shocked.

Speaker 2 He was just like, you're going to die. He's like, you're literally going to die.
What are you guys like? 70, 80? No, it's terrible. It's awful.
But it's not.

Speaker 3 What's your number? What's your number?

Speaker 2 It was like in the 30s, 40s. But, you know, yeah.
But it's also because I don't sleep much. I only sleep a few hours a night and I also move around a lot in the bed and so on.
But it's fine.

Speaker 2 I never have trouble falling asleep. But I would say that Brian's, yes, skincare routine is amazing.
His diet is incredible. He is a genuine character.

Speaker 2 I do think a lot of what he's saying, minus the supplements. I'm not a big believer in supplements.

Speaker 5 Yeah.

Speaker 2 Does work. I don't know if it's necessarily going to slow down your aging, but you'll look good.
You'll feel good. Yeah, sleep is the number one thing.

Speaker 2 In terms of falling asleep, I don't think it's really about whether you look at your phone or not, believe it or not. I think it's about what you're doing on your phone.

Speaker 2 If you're doing anything that is cognitively stressful or getting your mind to spin, then yes, that's going to keep you up. But you can scroll TikTok and fall asleep, and that's fine.

Speaker 2 Anything that's entertaining is fine. You could read a book right on your Kindle or on your iPad, and I think you'd be fine falling asleep.

Speaker 2 Or you can listen to like some meditation video or some spiritual teacher or something and that'll actually help you fall asleep.

Speaker 2 But if you're on X or if you're checking your email, then heck yeah, that's going to keep you up. So my hack for sleep is a little different.
I normally fall asleep within minutes.

Speaker 2 And the way I do it is

Speaker 2 you all have a meditation.

Speaker 5 You have a set time?

Speaker 2 No, no, I sleep whenever I feel like. Usually around one in the morning, two in the morning.

Speaker 5 God damn, I'm in bed by 10. Yeah, I need to sleep.
I'm an owl.

Speaker 2 But if you want to fall asleep, the hack I've found is everybody has tried some kind of a meditation routine. Just sit in bed and meditate.

Speaker 2 And your mind will hate meditation so much that if you force it to choose between the fork of meditation and sleeping, you will fall asleep almost every time. Well, okay, so after that.

Speaker 2 And if you don't fall asleep, you end up meditating, which is great too.

Speaker 3 So just like the meditation.

Speaker 5 The coda to this story was a friend of mine came to see me from

Speaker 5 the UAE, and he was here on Tuesday, and I was telling him about the dinner with Brian. And he told me the story because he's friends with Khabib, the UFC fighter.

Speaker 5 And he says, you know, when Khabib goes to his house, he eats anything and everything, fried food, pizzas, whatever, but he trains consistently.

Speaker 5 And my friend Adala says, how are you able to do that? And how does it not affect your physiology? He goes, I've learned since I was a kid.

Speaker 5 I sleep three hours after I train in the morning and I sleep 10 hours at night. And I've done it since I was like 12 or 13 years old.

Speaker 2 That's a lot of sleep.

Speaker 5 It's a lot of sleep.

Speaker 3 You know, the direct correlation for me is if I

Speaker 3 do something cognitively, like, you know, big heavy-duty conversations or whatever, so no heavy conversations at the end of the night, no existential conversations at the end of the night.

Speaker 3 And then if I go rucking, you know, on the ranch, I put on a 35-pound weight vest. You do that at night before you go to bed? No, no, no.
No, no, no.

Speaker 3 If I do it anytime during the day, I typically do it in the morning or the afternoon, but the one to two-mile ruck with the 35 pounds, whatever it is, it just tires my whole body out.

Speaker 3 So that when I do lay down.

Speaker 5 Is that why you don't prepare for the pod?

Speaker 2 You know,

Speaker 2 I mean, this pod is a top 10 pod in the world, Chamath.

Speaker 3 Do you think it's an accident?

Speaker 5 Freeberg, what's your sleep routine? Can you just go to bed? Do you just, like, get in bed?

Speaker 2 I take a warm bath and I send J.

Speaker 3 Cal a picture of my feet.

Speaker 4 I'll wait till J. Cal's done.

Speaker 4 I do take a nice warm bath.

Speaker 3 I nailed it.

Speaker 5 But you do it every night, a warm bath?

Speaker 4 Yeah, I do a warm bath every night.

Speaker 3 With candles, too.

Speaker 5 And do you do it right before you go to bed?

Speaker 4 Yeah, I usually do it after I put the kids down. And I'll basically start to wind down for bed.

Speaker 4 I do watch TV sometimes, but I do have the problem and the mistake of looking at my phone probably for too long before I turn the lights off.

Speaker 5 So do you have a consistent time where you go to bed or not?

Speaker 4 Usually 11 to midnight and then up at 6.30.

Speaker 5 Man,

Speaker 5 I need eight hours. Otherwise, I'm a mess.

Speaker 2 I'm trying to get eight.

Speaker 3 I hit between six and seven consistently. I try to go to bed in that 11 to 1 a.m.
window and get up in the 7 to 8 window.

Speaker 4 My problem is if I have work to do, I'll get on the computer or my laptop. And then when I start that after in my evening routine, I can't stop.

Speaker 4 And then all of a sudden it's like 3 in the morning and I'm like, oh no, what did I just do? And then I still have to get up at 6.30. So that does happen to me.

Speaker 2 So last night was unusual for me, but it was kind of funny anyway. I thought, oh, I should go to bed early because I'm on All-In.
Yeah. But I ended up eating ice cream with the kids late.

Speaker 3 Wait, what was the brand?

Speaker 2 You say you went for another brand. I want to know what the brand is.
I think it's Van Leeuwen or something like that.

Speaker 2 It's so good. It's New York and Brooklyn.
That's good.

Speaker 2 The holiday cookies and cream. Oh my God.
So good. Yeah, it's so good.

Speaker 2 After I polished that off, then I was like, oh, I probably ate too much to go to bed, so I better work out. So I did a kettlebell workout.

Speaker 2 You sound like Chamath. What did you say?

Speaker 2 I have eight kettlebells right here, right next to my christmas.

Speaker 2 Freeberg, this is called working out, Freeberg.

Speaker 2 And then while I'm doing my kettlebell suitcase carry, I was texting with an entrepreneur friend. So you can tell how intense my workout was.

Speaker 2 And he's in Singapore, so it was in the middle of the night for me and early for him.

Speaker 2 And it was time to go to bed.

Speaker 2 I was like, okay, now I got to get to bed. How do I get to bed? My body's all amped up.
I've got food in my stomach. I just did some kettlebells.

Speaker 2 My brain is all amped up. And the All-In podcast is tomorrow.
And what time is it? It's 1.30 in the morning. I better get to bed.

Speaker 2 So I put on like a little, one of those spiritual videos to calm me down. And then I just, and then I got in bed and I was like, there's no way I'm falling asleep.
And I started meditating.

Speaker 2 And five minutes later, I was asleep.

Speaker 3 You know, actually, the Dalai Lama has these great discussions on his YouTube channel, these great, like, two-hour discussions.
You get about 20, 30 minutes into that, you will fall asleep.

Speaker 2 Well, yeah, but my learning is.

Speaker 4 Yeah, watch any Dharma lecture from the SSL.

Speaker 2 Yeah, exactly. Exactly.
And my lesson is, my learning is that the mind will do anything to avoid meditation.

Speaker 4 Yes. By the way, did you guys see just before we wrap, did you see all the confirmations? RFK Jr.
confirmed. Brooke Rollins confirmed.

Speaker 2 By the way, if you look at Polymarket, Polymarket had it all right a couple of weeks ago.

Speaker 2 I was watching Polymarket. There was a moment where Tulsi fell to like 56%.
There was a moment when RFK fell to 75%, but then they bounced back and it was done. You could have bought it.

Speaker 2 You got to snipe that, man. You could have made money.
Yeah, Polymarket had it.

Speaker 4 And the media was like, no way he's getting confirmed.

Speaker 2 This is not going to happen.

Speaker 4 But Polymarket knows. It's so interesting, huh?

Speaker 2 Well, I saw a very insightful tweet, and I forget who wrote it, so I'm sorry I can't give credit. But the guy basically said, look, Trump has a narrow majority in the House and the Senate.

Speaker 2 And he can get everything he wants as long as the Republicans stay in line. So all the pressure and all the anger that the MAGA movement is directing against the left is pointless.

Speaker 2 It's all about keeping the right wing in line. So it's all the people saying to the senators, hey, I'm going to primary you.
It's Nicole Shanahan saying, I'm going to primary you.

Speaker 2 It's Scott Pressler saying, I'm moving to your district. That's the stuff that's moving the needle and causing the confirmations to go through.
That's how you get Kash Patel.

Speaker 2 That's how you get Tulsi Gabbard, the DNI. That's how you get RFK.
Do you worry about that?

Speaker 3 Do you think any of these, do you think any of them are too spicy for your taste, or do you just like the whole burn-it-down, put-in-the-crazy-outsiders thing? And

Speaker 4 That's such a bad characterization. That's not a fair characterization. I mean, whatever.
I mean, whatever.

Speaker 3 I mean, the outside.

Speaker 2 Honestly, it's like, I never thought I'd see it.

Speaker 2 But I think between Elon and Sachs and people like that, we actually have builders and doers and financially intelligent people and economically intelligent people in charge.

Speaker 2 And, you know, despite all the craziness, Elon's not doing this for the money. He's doing it because he thinks it's the right thing to do.
Of course. And having.

Speaker 5 He moved into the Roosevelt.

Speaker 2 I think like many of us, I had bought into the great forces of history mindset where it's just like, okay, it's inevitable. This is what's happening.
Government always gets bigger, always gets slower.

Speaker 2 And we just have to try and get stuff built before they just shut everything down and we turn into Europe. But the thing that happened then was, you know, Caesar crossed the Rubicon.

Speaker 2 The great man theory of history played out and we're living in that time. And it's an inspiration to all of us, despite Sam Altman and Elon's current fighting.

Speaker 2 I know Sam was inspired by Elon at one point. And I think all of us are inspired by Elon.
I mean, the guy can be the top Diablo player and do Doge and run SpaceX and Tesla and Boring and Neuralink.

Speaker 2 I mean, it's incredibly impressive. It makes us, that's why I'm doing a hardware company now.
It makes me want to do something useful with my life.

Speaker 2 You know, Elon always makes me question, am I doing something useful enough with my life? It's why I don't want to be an investor.

Speaker 2 You know, Peter Thiel, ironically, he's an investor, but he's inspirational that way too, because he's like, yeah, the future doesn't just happen. You have to go make it.

Speaker 2 So, you know, we get to go make the future. And I'm just glad that Elon and Doge and others are making the future that I'm.
Hardware, what do we got going on here?

Speaker 2 Maybe I'll reveal it on the all-in podcast in a couple of months, but it's really

Speaker 2 difficult. I'm not sure I can pull it off.
So let me try. Let me just make sure it's viable.
Is it drone-related?

Speaker 3 Is it self-driving related?

Speaker 2 Drones are cool, but no, it's not. Maybe all-in podcast should be an angel investor.
Oh, yeah. Absolutely.
Yeah. Let's do it.

Speaker 2 No syndicate, Jason.

Speaker 3 Just our money.

Speaker 2 You know how I learned about syndicates? It was Naval.

Speaker 3 The first syndicate I ever did on AngelList, I think, is still the biggest.

Speaker 2 I don't know.

Speaker 3 5%, and Naval's my partner on this for calm.com.

Speaker 2 I think you'll love what I'm working on if I pull it off. I think you guys will love it.
I'd love to show you a demo.

Speaker 3 Let us know where to send the check. Get that big cherry chip, Van Leeuwen.

Speaker 5 I love you guys. What have we learned?

Speaker 2 I got to go. Okay.

Speaker 5 Big shout out to Bobby and to Tulsi. That's a huge, huge win for America.

Speaker 3 I'm stoked about both of them.

Speaker 2 Congratulations. I love me.

Speaker 2 Bobby Kennedy. Let's get Bobby Kennedy back on the podcast.

Speaker 3 Let's get Bobby. Hey, Bobby, come back on the pod.
For

Speaker 3 the czar, David Sachs,

Speaker 3 your Sultan of Science, David Freeberg, the Chairman Dictator, Chamath Palihapitiya,

Speaker 3 and Namaste Naval. I am the world's greatest moderator, and I'll see you next time on the All-In Pod.

Speaker 2 Namaste, bitches. Bye-bye.

Speaker 2 Rain Man David Sacks.

Speaker 2 And they said, we open sourced it to the fans, and they've just gone crazy with it. Love you, besties. I'm the queen of quinoa.
What's going on?

Speaker 2 Besties are gone.

Speaker 2 Oh, man. My avatar will meet me at the end.
We should all just get a room and just have one big huge orgy because they're all just useless.

Speaker 2 It's like this like sexual tension that they just need to release somehow.

Speaker 2 What? You are the

Speaker 1 We need to get merch.

Speaker 1 I'm going all in.