Biggest LBO Ever, SPAC 2.0, Open Source AI Models, State AI Regulation Frenzy
(0:00) Bestie intros!
(1:53) EA acquired for $55B in biggest LBO ever, why PE is in trouble
(17:42) IPO market, SPAC 2.0
(27:41) The AI rollup opportunity
(36:01) Sacks joins the show!
(38:27) OpenAI and Meta launch short-form video apps: "AI Slop" or the future of content?
(45:04) Open source AI: DeepSeek's new model, pressure on US AI industry
(1:05:11) State AI regulation frenzy: States' rights vs Federal control, overregulation
Follow the besties:
Follow on X:
Follow on Instagram:
https://www.instagram.com/theallinpod
Follow on TikTok:
https://www.tiktok.com/@theallinpod
Follow on LinkedIn:
https://www.linkedin.com/company/allinpod
Intro Music Credit:
Intro Video Credit:
Referenced in the show:
https://x.com/Jason/status/1973461806585966655
https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai
Transcript
All right, everybody, welcome back to the number one podcast in the world.
Of course, that's the all-in podcast.
I'm your host, Jason Calacanis, with me again, your chairman dictator Chamath Palihapitiya and the Sultan of Science, David Freeberg.
David Sacks will be calling in from the Skift.
He's in some deep negotiations for the United States of America.
From the Skiff.
There's no T at the end.
It's just Skiff.
No T.
Skiff to my left.
Okay.
He's in a Skiff
doing something with his BlackBerry and a bunch of generals.
Nobody knows what's going on in Sacks's life, but he'll crack in from the Skiff any moment now.
But we'll start.
You guys see that Pete Hegseth has announced a PT and fitness test for the generals?
Could you imagine if Sacks had to pass a PT test?
Oh, my God.
They should totally make it for the administration.
Sacks, we need you to do one push-up, Sacks.
What do they do if you don't pass?
They remove you from your...
You probably get a cure period.
We should do a push-up contest.
That would be great.
Winner take all.
How many push-ups can you do, Freeberg?
You have to adjust for people's heights.
I'm the tallest of all of you.
I have a much longer limb system.
What does that mean?
So 20 for me is much harder than 20 for you, Jason.
I mean, 20 is easy for me at this point.
Yeah, you're like Bill Bull Baggins.
It'll take you like eight seconds to put on a bag.
Bill Bull tank, man.
Bilbo Baggins.
How about Thor?
I'm going to Thor at this point.
I'm going to my Daniel Craig era.
JKL has an elbow
I'm going to my Daniel Craig era.
To weight ratio, that's highly advantaged.
All right, let's get started.
Enough shenanigans.
EA is.
What is your arm length, JKL?
Do you have a good arm length?
My wingspan?
My wingspan technically is enough to kick your ass with one hand tied behind my back.
That's actually the good one.
Let your winners ride.
Rain Man David Sacks.
And it said, we open sourced it to the fans, and they've just gone crazy with it.
Love you besties. Queen of Quinoa.
Okay, EA is being taken private in the largest take-private deal in history.
$55 billion.
Man, that just stacks up to,
let's see, the TXU buyout, the Texas power company, in 2007 at $45 billion; HCA Healthcare, $33 billion.
This is a large deal.
Investors in the take-private include the Saudis' PIF, Silver Lake, and Friend of the Pod Jared Kushner's Affinity Partners: $210 a share, a 25% premium on the stock.
Kushner's largest LP at Affinity, as you know, is the Saudi PIF as well.
The PIF manages over $900 billion.
You know many of the things: Lucid Motors, LIV Golf, the SoftBank Vision Fund, Uber back in the day, Newcastle United in the Premier League.
Electronic Arts obviously is in the video game business.
They were founded at Sequoia's office in 1982 in San Mateo.
Shout out to our guy, Roelof Botha, who joined us for the All-In Summit.
Their headquarters still in Redwood City.
Madden NFL,
The Sims.
Oh, that's why you have the background, The Sims this week.
Need for Speed.
Pretty insane
deal here, Chamoth.
And this is a high watermark for private equity.
Any way you look at it, the PIF loves games.
They are the biggest shareholder in Nintendo, Savvy Games, Scopely.
I mean, they just keep buying games.
What are your thoughts here on this deal happening right now?
I really like it.
Let me give you the bull case, and then let me give you what the bear case would have to believe.
The thing to remember is that video gaming is the anchor pillar of usage across the entire internet.
Last week at our poker game, we had Matt Bromberg
join in just for dinner, who's the CEO of Unity and Alex Blum, who's the COO of Unity.
And one of the stats that they shared with us at dinner was it's about 3 billion DAUs playing games, which is just an incredible stat.
So in many ways, it's much bigger than social networking and social media or as big.
And in that, EA is sort of this 800-pound gorilla.
But I think the problem is that they've always been these gatekeepers.
And I think that there's a risk and a chance that these gatekeepers get eroded away.
Specifically, who I'm talking about are folks like Microsoft and Xbox.
And at the point that this company is going private, there's some really interesting things that are happening.
So Xbox, I think the day after the EA deal got announced, decided to hike prices 50% to their subscription service.
And what happened over the subsequent few days is that so many people tried to cancel that the site went down.
So what are you seeing happening?
You have distribution gatekeepers trying to raise prices and take share.
And then you have the original IP owners who have not had a well-funded way of fighting back in a category that is basically as important and frankly more important than social media.
So I think if you take an asset like this private, it allows you to take your time to clean up the OpEx model, right?
Figure out who does what, be able to use the best of all these next-gen tools,
and then be able to find ways of finding distribution outside the scope of Xbox and PlayStation so that you can take more of your share.
If you do those things, this is a multi-hundred billion dollar asset.
And in that, I think it could be just an enormous win.
So, I think it's very smart.
What's the bear case?
I think the bear case is extending a theme that I've talked about here a few times, which is I think the value of patents and by extension, IP and copyrights are going to go away.
And in that,
there's going to be a spectrum where certain content IP holders lose and other ones win.
I think gaming is on the winning side, to be honest.
And I think content studios in general, traditional content like the Disneys, the Hulus, the Netflixes, are on the losing side.
But the bear case
would be that these tool chains
allow the number of games to be built to increase by two, three, four orders of magnitude, and that they are distributed by other places like the social media sites.
I just think that that's a pretty low probability.
So on balance, I think that Jared and Egon did a killer deal.
I really like it.
And for people who don't know, Unity makes the 3D software that people build games in.
It's a public company worth about $16 billion, also backed by Roelof and Sequoia back in the day. Incredible company.
Freeberg, what are your thoughts on the gaming industry versus, say, social media versus traditional media?
We're seeing massive amounts of money being put into each of these, but this is about time.
And for this next generation, let's say millennials and younger, we're seeing a big mix.
Obviously, they don't have cable TV, so that's been plummeting, but they do play games.
They do like the YouTube, the TikTok, et cetera, and they do love social media.
What's the future here as you see it?
One way to answer that question is to think about how people spend their time.
Do you spend more minutes on social media, on traditional media, or playing games?
And how is that trending?
But importantly,
which of those will accrue more benefit and, as a result, drive more hours spent from AI?
Is AI going to create more social media engagement?
Is AI going to create more traditional media engagement?
Or is AI going to create more video game engagement?
And I think that one way to kind of think about this thesis is that AI is going to ultimately accrue to video game entertainment far more than social media entertainment or traditional.
Why is that?
Why?
Explain to me.
Because I think you can create dynamic, more engaging experiences that will benefit from kind of a back and forth sort of relationship than you can with traditional content or with social media.
And what we see now in a lot of gaming systems that didn't exist, call it 12 years ago, is AI-driven players embedded in the games that act and feel a lot more like real human engagement, which is very hard to mimic with the traditional programming methods that were used in gaming.
And so that makes a big difference.
Like, for example, if you're playing Fortnite, I don't know if you guys play Fortnite or have played Fortnite, but if you're a noob in Fortnite, like you're an early player in Fortnite, even though you go online and play against what are supposed to be other players, you're mostly playing against AI, because what they do is they tune the AI to be easier to beat so that you can slowly develop your skills.
Because what was happening early was they were seeing a high degree of churn in Fortnite because kids would go on and play for the first time and they'd get paired up with kids that were better than them.
And so they would never win and they would get frustrated and they would quit the game and stop.
So the churn rate was high.
So AI unlocked higher engagement and higher retention on the Fortnite platform.
And I think we're seeing that in a lot of different gaming platforms now.
So AI can be used, for example, to maximally increase time, engagement, satisfaction, happiness.
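The retention mechanic Freeberg describes, easier AI opponents for new players so early games are winnable and churn drops, can be sketched in a few lines. This is a hypothetical illustration: the function names, formulas, and numbers are invented, not Epic's actual system.

```python
# Hypothetical sketch of skill-based bot tuning for new-player retention.
# All names and constants are illustrative, not any real game's implementation.

def bot_fill_fraction(matches_played: int) -> float:
    """Fraction of a lobby filled with AI bots; decays as the player ranks up."""
    return max(0.0, 0.9 - 0.05 * matches_played)

def bot_difficulty(win_rate: float) -> float:
    """Scale bot skill (0..1) toward the player's observed win rate,
    keeping it slightly below so a newer player can still win."""
    return min(1.0, max(0.1, win_rate * 0.8))

# A brand-new player (0 matches, unknown win rate treated as 0.5):
assert bot_fill_fraction(0) == 0.9   # lobby is mostly bots
assert abs(bot_difficulty(0.5) - 0.4) < 1e-9   # bots tuned easier than the player
```

The design point is the one made in the transcript: tuning opponents just below a new player's skill trades short-term challenge for long-term retention.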
I think the Saudis saw this.
And if they're trying to diversify away from their oil holdings, entertainment and how people spend their free time, which by the way, I think is a general macro bet that everyone should consider making.
Because if you believe in AI and you believe in the improvements in productivity, generally speaking, people in the industrialized world will generally have more free time on their hands and be able to support themselves with the deflationary effects of AI over time.
So if there's more time on people's hands, the general market for entertainment is growing.
And if the general market for entertainment is growing, gaming is the future of entertainment and the future of gaming is AI.
Now, the Saudis owned 10% of this company prior to the deal.
And I don't know if you guys have tracked the investments they've made, but they've been extremely aggressive with gaming.
So they have this like investment division called Savvy Games.
And within Savvy Games, they bought Scopely for 4.9 billion in 2023.
And then earlier this year, they spent 3.5 billion to buy Niantic, the company that makes Pokemon Go.
And then they also own 4% of Nintendo.
They own 6% of Take-Two.
They own a sizable percent of Activision Blizzard.
So they've put quite a bit of capital in small investments in other gaming platforms.
They own a few gaming platforms.
So this is clearly like a big thesis and a big investment that they see as the future of entertainment over time.
Jared's firm Affinity is going to own about 5% of the company post-transaction.
The Saudis are going to be the majority owners.
So I think that this is going to end up being the next big platform play for them.
And it allows them.
to make the important long-term investment in furthering the transition to AI and not have to worry about quarter-to-quarter earnings, but really making a 10-year bet.
And they do talk a lot about this 2030 vision.
And if you look across those three categories we've been discussing here: video game usage, about 60% of U.S. adults do it every week.
Social media, about 75% of Americans use it every week.
And streaming, traditional media, the Netflixes, Disney Pluses of the world, that's still 83%.
So these are the three buckets of people's time.
Books and going to the movies, those are obviously the big losers.
By the way, you know that's the mix.
The market was totally getting this wrong because the ticktock of the deal is super interesting.
When they were looking for the debt financing, it was about 36 billion of equity, 20 billion of debt.
They called Jamie Dimon, and Jamie basically ripped the $20 billion in on the same day.
Just because I think he also could underwrite this pretty fast.
I mean, some of the biggest deals are frankly so obvious that it just takes the courage to put it together.
And then everybody's like, oh, this just makes so much sense.
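As a sanity check on the round numbers mentioned above (about $36 billion of equity and $20 billion of debt against the roughly $55 billion purchase price), a quick back-of-envelope calculation. These are the hosts' figures, not the deal filings.

```python
# Rough arithmetic on the EA take-private financing package described above.
# Figures are the round numbers from the conversation, not actual filings.
equity = 36e9
debt = 20e9
total = equity + debt
debt_pct = debt / total
print(f"Debt is {debt_pct:.0%} of the ${total / 1e9:.0f}B financing package")
# Debt is 36% of the $56B financing package
```

Roughly a third debt is modest leverage by historical LBO standards, which is part of why the debt could be underwritten so quickly.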
And then Andrew Wilson, who's the CEO, is going to stay on.
He's a great guy.
Super, super compelling.
It's worth talking a little bit about the impact, I think, of private equity.
If you spend any time in the region, I'm going to be in Saudi and Dubai in the first week of November, doing my Founder University.
And I've been out there twice a year, maybe for the last three years.
They will tell you, whether you're in Doha, Abu Dhabi, Riyadh, we've got six or seven industries we really care about.
Technology is at the top of the list.
Private equity is at the top of the list.
And then actually, hospitality also at the top of the list.
Real estate, building new places for people to go.
And if you look at private equity, pull up that chart I had there.
This is just stunning how big this industry is getting.
You know, $5 trillion is what we're up to here.
And it just keeps growing.
I think private equity is totally screwed.
I don't think Silver Lake or Affinity or this deal
are screwed, but I think private equity in general is totally hosed.
All right.
Well, it's gotten huge just since 2015 and tripling in size.
So this is, I guess, my question for the gentlemen here and for the audience: why is private equity becoming so large?
And what impact does that have on society if people can't put EA into their retirement account, they can't put Stripe into their retirement account?
If we take all the great companies and we start to privatize them, SpaceX looks like it never goes public.
What impact does that have on people's retirement accounts?
Okay, look, I think the history of this is important.
There was a long-standing belief that the best way to generate the best risk-adjusted return, what does that mean?
That means to manage through periods where the stock markets go down and to manage through periods of volatility.
The best way to do that was to have what's called a 60-40 allocation: 60% to equities and 40% to bonds.
Over many years, especially when we artificially suppressed rates at zero through Obama, a lot of people started to move their allocations away from 60-40 and they started to make more and more investments further out on the risk curve.
The biggest beneficiaries of that were venture capital, private equity, and hedge funds.
The thing with private equity is that because rates were zero, they had an infinite amount of borrowing capacity, had very little downside to them.
And so they were able to manufacture returns much faster than venture capital and hedge funds could.
So as a result, you had an initial group of people that were defining the asset class, making a ton of money.
And then you had all these fast followers that said, well, if they're doing it, I can do it too.
So far, so good.
But then always what happens is then you have this flood of laggards that just flood the zone.
And it's these laggards that make it very difficult to generate returns because they start overpaying for assets.
They start mismanaging and undermanaging the assets that they do own.
And so where we are is that private equity has seen a very consistent way of returning money to help improve that 60-40 portfolio.
As a result, they got a lot of money, but then that created a lot of competition.
And so that's why you see this hockey stick graph, Jason.
And when you see that kind of graph,
it doesn't matter what asset class it is, the returns go to zero.
And so we've seen this in venture capital, we've seen this in hedge funds, and we're now going to see this in private equity.
Too much money going in, to be clear what you're saying, Chamath, means you kind of can't exit, right?
There's no returns.
And so, again, I've said in any of these alternative asset classes, there's only one thing you should always ask if you had to have one critical question:
what are your distributions?
Don't show me your IRR.
What is your DPI?
The distributions on your paid-in capital.
And if the answer is zero,
then it is a very challenged asset class.
And what I will tell you in private equity is that over the last four or five years, distributions have been few and far between.
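The DPI test Chamath describes is simple arithmetic: cash actually returned divided by cash paid in, regardless of how good the IRR looks on paper. A minimal sketch with hypothetical fund numbers:

```python
# DPI (Distributions to Paid-In capital), per the discussion above.
# Fund cash flows below are hypothetical.

def dpi(distributions: list[float], paid_in: float) -> float:
    """Cash actually returned to LPs divided by cash they contributed."""
    return sum(distributions) / paid_in

fund_a = dpi([50e6, 120e6], paid_in=100e6)   # returned $170M on $100M paid in
fund_b = dpi([], paid_in=100e6)              # marked up on paper, nothing returned
assert abs(fund_a - 1.7) < 1e-9
assert fund_b == 0.0   # the "very challenged" case described above
```

A fund can show a high IRR from unrealized markups while its DPI sits at zero, which is exactly why the question "what are your distributions?" is the critical one.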
So I think what's going to happen is that the money is going to come out of private equity and it's going to get concentrated into the few companies that know what they're doing, of which Silver Lake is one. Silver Lake has generated over the last 15, 20 years,
tens and tens of billions of dollars of distributions.
They are just an exceptionally well-run organization.
They've done these huge buyout deals successfully before.
So we need to go through that in PE.
Where does the money go?
The money's already leaked into private credit, which is the next big bubble that's building.
It looks like this chart that you just showed,
which is loaning businesses money.
You know, it's super interesting because you make such a good point.
What we're seeing in private equity is these continuation funds.
Now continuation funds are coming, Chamath, to venture.
So I've been getting pitched on these continuation funds.
We're like, hey, take all your assets, sell it to a new group of people, and then reset the clock.
And then there's never an exit.
The good news is, I will say the last year, we've seen a lot more activity for shares of our companies that are still private.
So the secondary market, Freeberg, is coming back in a major way.
But I do get worried about these continuation funds because now you're just moving an asset from one class to the other, and we need to have a functioning IPO market.
How functioning is the IPO market today, would we say?
It's completely dysfunctional.
How dysfunctional is the IPO market?
Let me say it another way.
And how do we correct that?
And if this leads into your new SPAC, look, there are three ways to go public.
There's the traditional way IPO,
there's the direct listing, and then there's the reverse merger or the SPAC.
Up until I floated IPOA in 2018, I think it was,
the first way was really the only way.
I was involved in two direct listings, Slack and Coinbase.
And in both of those, what I learned is that, you know, it has the same vagaries as the traditional IPO.
So in the traditional IPO, you go to a bank, they underwrite you, they act as a gatekeeper, and they take 6%, 7%, 8% fees as a result, and then they allocate what is essentially underpriced stock to their best customers.
Then you see a one-day pop, maybe a two or three-day pop.
All of those customers tend to unload, and then the stock tends to drift down.
So the IPO is expensive, and it typically is mispriced.
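The cost critique above (6% to 8% bank fees plus underpriced stock that pops and is unloaded) can be made concrete. All figures below are hypothetical illustrations of the mechanism, not any specific IPO.

```python
# Sketch of the "expensive and mispriced" traditional IPO described above:
# the issuer pays underwriting fees AND leaves money on the table when
# underpriced stock pops on day one. All numbers are hypothetical.

raise_amount = 1_000e6   # $1B raised at the offer price
fee_rate = 0.07          # 7% underwriting fee, within the 6-8% range mentioned
day_one_pop = 0.30       # shares trade 30% above the offer price

fees = raise_amount * fee_rate
money_left_on_table = raise_amount * day_one_pop  # value captured by allocated buyers
total_cost = fees + money_left_on_table
print(f"All-in cost: ${total_cost / 1e6:.0f}M on a $1B raise ({total_cost / raise_amount:.0%})")
# All-in cost: $370M on a $1B raise (37%)
```

The "money left on the table" term is what the day-one pop transfers from the issuer to the bank's allocated customers, which is the gatekeeping dynamic being criticized.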
The direct listing,
you have a different dynamic, which is the first trade is always the highest trade, and then it just goes straight down.
That happened with Slack and it happened with Coinbase.
So
I would be in that group as well.
Yeah, with Slack, I remember I was offsides by a billion dollars, and I was like, well, I'm never letting this happen again.
And so when I had the Coinbase thing, I sold it the first day.
And I texted Brian, I said, this is not a directional indication of your company.
It's the dynamics of the direct listing because I learned it the hard way that the time to sell is on day one.
So, where does the SPAC come in?
You know, especially now in version two,
version two being the thing that I have been tinkering with and refining, and that I'm trying to push in this new version.
I think that it's creating an incredibly competitive vehicle where you can have a ton of money go into these private companies, take them public at a very, very low cost of capital.
And I think that should be very enticing.
So you closed your financing.
Can you just tell us what the capital raise was like as you went out and met with folks?
What did you hear?
You know, Nick, maybe you can find it, you know, that image of the Raptor engines.
Yes, super complex to being elegantly simple.
Yeah. Nick, can you maybe just throw that up? What I would say is, like, SPAC 1.0, of which I was, you know, right in the front of the parade, had a bunch of misfires, and it was complicated, but it worked. There were some hot fires that worked, but then there were some clear misfires. And the whole point was to prove that you could create a competitive alternative to the IPO. The thing that I'm the most proud of, quite honestly, is
for all intents and purposes, I started the normalization of this vehicle that's now raised more than $150 to $200 billion for American companies.
I am very proud of that.
That's an important thing for the American capital markets.
I think what we did in American Exceptionalism is Raptor 2.
It's not yet perfect.
But I do think it tries to improve on the things that I noticed were not working in Raptor 1.
And in that is a lot of the compensation and incentives.
And so when I showed that to investors, they were quite excited.
I think that they
want a competitive IPO market that brings many, many American businesses to the public market so that they can be owned by everybody, the transparency they like.
And the fact that the incentives are such now where there's absolutely no compensation unless this thing really works.
And historically, they received warrants in the company, typically with a strike price of $11.50.
So it's 15% above the issue price of the stock.
And founder shares that were basically...
And there were founder shares.
But like, did you have a reaction from them saying, hey, we want some warrants?
We need a little extra kicker here.
Like there's some sort of desire for that.
No, in fact, it was the opposite.
I think that the institutional investors and, you know, my investors in this, 98.7% of the capital was allocated to these guys, are the best of the best.
You know who they are.
So
they're every single blue chip A-plus institutional investor.
And what they wanted was great companies.
They want great companies to be public.
And the reason is the thing that, Freeberg, I think you mentioned this before.
When a good company gets public, the amount of money that they can raise in the publics and then the amount of growth that they have in the publics far outclasses what they'll ever do as a private company.
And so they want the simplest and cheapest way of great businesses to get out.
Chamath, do you think that the transaction, when you find a merger partner... The traditional SPAC has been announced as a merger concurrent with a pipe being done, where new investors are underwriting the valuation of the deal and saying, we like this company at this price, because we are now going to put money in, in the form of a pipe?
And historically, the pipe was for common shares.
So it kind of was like, this is a good price, and everyone felt good about it.
Number one, do you anticipate that there will still be a pipe being done and concurrent with the merger in this transaction?
And then number two is, do you think it'll look like a common pipe?
Because after the SPAC frenzy died down, in order to get deals done, the pipes started to get done with convertible preferred securities.
So they were senior to common and they almost were like debt.
How do you think this is going to play out?
Because a clean deal has not happened in quite some time where a SPAC has announced a merger and simply raised money via common in the form of a pipe.
It's a great question.
I think it comes down to the underlying asset, but there are some incredible companies that are private that if they go public,
will be able to demand
common pipe capital.
I think that the future, maybe just prognosticating and guessing, what does Raptor 3 look like in the SPAC?
I think the Raptor 3 will look like where somebody, a sponsor like me, rolls everything up into one thing so that it's already pre-wired from the beginning, where I'll just speak to a billion, $2 billion, $3 billion, whatever it is, of flexible capital that can come in as common, so that it's a totally pre-baked IPO at a very fair price.
I think that that's what the Raptor 3 version of a SPAC will look like.
So more capital, and then they put their full trust and faith in the sponsor to run the deal.
Well, no, no part of the corporate.
Meaning, then there's no conversion risk, that all the money comes over right from the beginning.
It comes over.
Right.
And so then you have to fully commit in.
You set your compensation to be a bit Elon-like: your compensation as the sponsor comes, if I read it correctly, Chamath, when it hits certain milestones in terms of share price.
Yeah, nothing can be earned unless the stock is up 50%.
And then there's a tranche at 50%.
Then when the stock is up 75%, there's another tranche.
And there's another when the stock is up beyond that.
And there's no founder warrants in the deal?
Or there are founder warrants?
There's no founder warrants.
Nothing.
I think this is great.
I was asked, by the way.
By the way, the reason why this is important is that all of those things that you guys mentioned increase the cost of capital to the founder and to the private company board and to the employees.
All that's unnecessary dilution.
So now we take it all off the table.
Yeah, smart.
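The milestone structure described above (nothing earned unless the stock is up 50%, with further tranches at higher hurdles) can be sketched as a small vesting function. The issue price and hurdle list here are illustrative assumptions, not the actual deal terms.

```python
# Hypothetical sketch of milestone-based sponsor compensation as described
# above. ISSUE_PRICE and HURDLES are illustrative, not the real terms.

ISSUE_PRICE = 10.00
HURDLES = [0.50, 0.75]   # +50% and +75% appreciation, per the discussion

def vested_tranches(share_price: float) -> int:
    """Number of compensation tranches earned at a given share price."""
    gain = share_price / ISSUE_PRICE - 1.0
    return sum(1 for h in HURDLES if gain >= h)

assert vested_tranches(10.00) == 0   # flat stock: sponsor earns nothing
assert vested_tranches(15.00) == 1   # +50%: first tranche vests
assert vested_tranches(18.00) == 2   # +80%: both tranches vest
```

Contrast this with the SPAC 1.0 structure, where warrants and founder shares diluted holders regardless of whether the stock ever appreciated.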
The thing I, you know, the observation I had at the time, not just for your collection of SPACs in the 1.0 era, but just all of them in general.
And I tried to explain this to our syndicate members and investors, as well as the CEOs, because a lot of my CEOs were like, Should we do a SPAC?
And one of them, Desktop Metal, did.
This felt like venture investing.
And, you know, if you look at Opendoor, Virgin Galactic, Joby, which I don't think was one of yours, SoFi, MP Materials, all of these companies,
you have to look at it.
If it is a venture-type investment, 80% of venture goes to zero, 20% pays up for the other 80%.
I think people were looking at this like it was Netflix, and they were not thinking of these companies and the stages they were at.
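The venture math Jason cites (80% of positions go to zero, the winners pay for the rest) implies a simple breakeven condition, sketched below with illustrative portfolio sizes.

```python
# Back-of-envelope version of the portfolio math above: if ~80% of a
# venture-style portfolio returns zero, the surviving positions must
# collectively return all the capital. Position counts are illustrative.

def required_winner_multiple(n_positions: int, win_rate: float) -> float:
    """Multiple each surviving position must return for the portfolio to
    break even, assuming every loser returns zero."""
    winners = n_positions * win_rate
    return n_positions / winners   # simplifies to 1 / win_rate

# 10 positions, 20% hit rate: each winner must return 5x just to break even.
assert required_winner_multiple(10, 0.2) == 5.0
```

That is the point about buying a single SPAC as if it were Netflix: without a portfolio, the investor is holding one draw from a distribution where most outcomes are zero.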
Well, that's just my question.
Yeah, and then I'll drop it to a question, because SoFi and MP Materials did extraordinarily well.
So in this class of companies you're going to be taking out, is it going to be the same early stage or are you thinking more robust, more predictable revenue?
Let's call it
resilient revenue, maybe rugged revenue?
I think it's the latter, but I think it's also important to note that this time around, I've tried to really minimize retail exposure to this. I don't think that retail is well suited right now to have these things. And my honest advice is: avoid maybe not all SPACs, but definitely my SPAC. Just avoid it. I think that there is more than enough liquidity on the institutional side for us to do an interesting deal. But it fits in our portfolio and our construction, which is a very different risk model. And so I would hate that, you know, people are out on the risk curve without really understanding the risks, because, Jason, you can't predict the market.
You don't know where these things are going to go.
Yeah.
I mean, Desktop Metal, 3D printing, this is like a very cutting-edge, nascent technology.
Company should have stayed private a couple more years or people investing it need to understand.
You're now acting like a venture capitalist, which means the return profile and how the portfolio management works is distinctly different than doing Netflix and NVIDIA and whatever other publicly traded companies.
I would just say do not invest in these things.
Don't, at least... you know, I think you just inspired people to do it.
I know that's not your intent, but when you say don't do it, well, that's stupid.
I'm being very honest.
Don't do it.
No, no, I know.
Don't buy SPACs unless it's like less than 1% of your portfolio.
Would be my advice.
Before we move on, can I just make one comment?
And I'd like your guys' take on the private equity stuff, because Chamath made a comment that private equity is fake.
But I think one of the things to take note of in this take private of EA, and we talked about it as the theme of AI empowering EA to kind of transform the business.
And Jared's brother, Josh, has at Thrive been executing a roll-up of CPA accounting firms, applying AI to reinvent that business.
Oh, is he really?
Yeah.
Oh, I should talk to him, because we have an investment in a company called taxgpt.com, which is basically AI copilots for accountants, and it's doing spectacularly.
So what he's done is he's bought these kind of traditional accounting firms at some multiple of EBITDA, and then he can transform the business with AI and really create a new opportunity.
And I've said, like, I think this is one of those few moments in history where there really is an opportunity to beat the market and make money in the public markets if you can be thoughtful and selective about the companies that stand to benefit from an AI execution strategy.
Because in all of these traditional kind of markets where you have competition, everything's commoditized and the market is mature, it's very hard for any of these players to differentiate on product, service, and obviously, you know, unit economics.
But with AI, it's completely transformative and has transformative potential in nearly every industry.
So, as a public market investor, if you can identify those opportunities, select them where the management team has the right leadership in place to execute against this, you could make real money.
The problem is, most of these companies are not led by folks that understand AI or software first.
And so, I think there's an opportunity for more buyouts.
They're not going to be of the $55 billion scale.
It's worse than that.
In what sense?
So, we
at 8090 have done the dance with all the big major private equity firms.
And here's how it goes.
It always goes the same way.
The partners love it because they're looking at minimal distributions,
companies that are like good, but not great in many cases.
And they want to see improvements to EBITDA and performance so that they can either sell them or move them out.
And you're... sorry, you've looked at this with their portfolios?
All of them.
Yeah.
All of them.
With their existing portfolio companies.
So the GPs are like, this is genius.
We should do it.
Then they're like, here's a handful of companies to go talk to.
And I'll be really honest with you.
What you find in most private equity portfolios are B and C companies run by C and D folks.
And so the ability for them to go and embrace this is basically next to none.
So if I look at my customer distribution and concentration at 8090, okay.
Run-rating into nine figures already, working on a $300 to $400 million deal.
Okay.
Not a single dollar comes from a private equity firm, although we spent initially a lot of time trying to sell it, trying to sell our software factory and trying to sell work into them.
It's really hard.
And it's what you said before, Friedberg, which is the people incentives at these businesses are misaligned to the AI outcome.
And you can't fire these people.
And I don't think the right answer is to fire them.
So I don't know what the right answer is.
This is why I think private equity is very challenging.
Do you think there's a power law situation where perhaps a handful of investors in the public markets and perhaps a handful of investors in the private markets can identify and then put the right people in place and execute against these strategies, like Josh is trying to do with his firm?
Well, I think Josh is smart.
So I think Josh will figure it out no matter what.
What I'm saying is, if I can show you
20, 30 customers, a ton of revenue, all these white papers that show upside, and I still can't get it done inside one of these companies.
I think it's not us, it's them.
Right.
So it's not inherent in traditional private equity to do this either, which maybe begs the question, is there a new kind of private equity that can execute this?
Maybe that's an opportunity, like Josh is showing, right?
Like he's a venture investor that's executing a private equity strategy.
And maybe that becomes the play.
I think if this works well, two of our biggest customers are individual deca-billionaires who own businesses, and they're like, you're doing this.
So, to the extent that Josh looks more like that, which is an owner of 100% of the business, where it's like, you're going to do it,
then I think it can work.
It's like the Saudis.
I think the owner-operated model is the only way the AI transformation really works.
And then at the other end of the spectrum, it's the public's market CEO who realizes that they have to do something real because they'll otherwise lose their job or they'll be disrupted.
Those are the two cohorts that I feel today
are on their forward foot.
Everybody else is like sticking their head in the sand.
Just on the EA front, I forgot to ask you, Sir Demis, my Greek brother, didn't he show a...
It's always the Greeks who get these things done.
Yeah.
Didn't he show like the 3D engine that would make like infinite games?
Yeah, so it's not actually a 3D engine.
It's a class of these AI models that can render what the experience is, looks like and feels like a 3D world, but it doesn't have an underlying kind of traditional object rendering engine.
It doesn't have a traditional 3D physics engine.
So it's a new way of experiencing these kind of world interaction systems.
And there's several startups.
I think Fei-Fei Li is her name, the Stanford AI professor.
And she has one of these.
It's a virtual worlds company that has the same principle.
I asked Bromberg and Alex about exactly this at dinner.
What was their take?
He said, it's just really, really hard to get these things to actually be legitimate engines at the scale of what Unity offers for the quality of game that needs to be made for it to work.
The interim step is going to be the assets in it are created by AI.
That's what I've seen a lot of startups doing.
So you want to make a character, you know, you drop in characters.
And that can be done in real time.
I think you're right.
The whole thing is Unreal and Unity as the rendering engine, and the AI sits on top.
And the AI basically can render objects, can render concepts, can render structure, can render the direction that you as an engineer would typically provide to the Unreal or Unity 3D engine.
And that's going to unlock not just in video games, but also in film 100%.
Can I tell you an example?
Yesterday, there was,
you know, in our group chat, a bunch of people sent around the Sora, the Slop app.
Yeah.
And I downloaded it just to play with Sora yesterday.
And the first video that came up was exactly this.
It was like an ATP tennis match
where it was a guy's face, the guy, like imagine you, and then playing against like a Federer.
And then I thought, well, what if he was playing against his friend and that was the actual video game?
To your point, you get away from all this IP licensing, gatekeeping stuff, and you can just get to good games faster, good content faster.
I think that's adaptive in terms of the competition.
So you're not playing somebody who's going to just dominate you.
It just gets 5% better every time you play it.
You'll get 4% better.
And it'll just make it perfectly challenging.
So you don't quit, and you'll learn as you go.
It's really going to be an interesting thing.
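The adaptive-difficulty idea being described here can be sketched in a few lines. This is a toy illustration only; the 5% step and the win/loss rule are illustrative assumptions pulled from the conversation, not any real game engine's logic:

```python
# Toy sketch of an adaptive opponent: nudge its skill up after each player
# win and down after each loss, so the match stays "perfectly challenging."
# The 5% step is illustrative, not a real product spec.
def adjust_difficulty(skill, player_won, step=0.05):
    """Return the opponent's new skill level after one game."""
    return skill * (1 + step) if player_won else skill * (1 - step)

skill = 1.0
for player_won in [True, True, False, True]:  # a short toy match history
    skill = adjust_difficulty(skill, player_won)
print(round(skill, 4))  # opponent ends slightly stronger than it started
```

A real system would target a win rate (say, 50%) over many games rather than reacting to single outcomes, but the feedback loop is the same.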
The same will exist in content, J-Cal.
You'll make shorts and films and then the ones that have the most engagement, the AI
prompting system will get better and better.
And ultimately, it will yield like, you know, bits of content that people are going to be able to do.
You're going to see that happening with Star Wars or Marvel.
If all of a sudden Silver Surfer is an interesting character to you or Ahsoka Tano is interesting to you, it'll sort of make that world or enhance that character and tell you more of their backstory.
And what is interesting is how you can sit in your seat and make fun of me, call me a nerd, and you actually know the name of this Star Wars character.
I don't even know.
She's a very important character.
Ahsoka is Anakin Skywalker's Padawan.
She's a very important character.
If you watch the Clone Wars, you would know this.
The animated series that threads through the
Clone Wars.
Actually,
oh, look who dropped in.
Oh, David Sachs is here.
Did you get out of your, were you in a skiff or something?
What's going on, czar?
I was in some meetings, but actually, no, I was just buying some domain names.
Oh, you are?
Did you get mahalo.com?
I got mahalo for the bargain price of $1 million.
Well, that's what it's worth.
Go to mahalo.com.
I'm selling it for a million.
I mean, it's
in the dictionary.
Yeah, I have some old assets.
Somebody else should use them.
I just, I have begin.com and I'm going to be working on that in partnership, probably with one of the largest.
I might give you an equity squat for that.
I'll give you... Mahalo is the second most important word, after aloha, in the Hawaiian language.
I'm surprised Benioff hasn't tried to ask you for it.
You know, I should do.
I was just texting with Benioff.
Give it to him as a gift, dude.
He's a great guy.
Just give it to him.
I will give Benioff mahalo.com if he gives me four weeks in one of his Hawaii resorts per year,
he would do that over the next 20.
Oh my gosh.
Imagine that for 80 weeks.
Oh my God, as a house.
Takeout for 80 weeks as a house guest.
He could be there.
He could be there.
It doesn't matter.
I'll give him the money so he buys it.
Don't get it.
Donate it to his nonprofit foundation.
Then you can take a tax write-off.
Look at everybody: when I have something to sell, the guy with the lowest net worth on the program, when I'm trying to pay off my jet, you guys all have criticism.
How come I can't wet my beak?
I got jet boys.
Let me ask you a serious question.
So you had investors in Mahalo, right?
Yes.
And I said, this is their domain.
This is their domain.
This will go to them.
Oh, so it will.
Oh, okay.
It will.
It will go to those investors.
You're paying off liquidation preference.
Correct.
Oh, okay.
It's just sitting there.
So now instead of losing 100%, I'll lose 99.3%.
Something like that.
It's just startups are hard, folks.
But I have begin.com, and I've been talking to folks.
Mahalo was originally a human-powered search engine like Wikipedia, which we're about to get to.
And my concept was to do comprehensive search like Naver.com or Daum in Korea; I had seen those services.
And it turned out to be exactly like Perplexity, but at the time we tested machine learning, which is what we called AI back then, and it just didn't work.
So we were trying to hand-roll search results and then back them up with
computer-generated ones, algorithmically generated ones.
But the tech wasn't there then.
But I want to do something again with begin.com.
I'm really excited about that domain name.
All right, listen.
We brought up Slop.
Let's get into it.
Two slop apps in a fortnight here.
No pun intended.
Zuck and Sammy the Bull have both released.
What a deep pull.
Sammy the Bull Gravano.
There it is.
And here's a look at Sora.
It's objectively extremely impressive.
Here's Sam Altman.
People don't know this: early in his career, when he was starting OpenAI, he didn't have the money from Elon.
And here's Sam Altman stealing an H-100.
Here's Sam Altman also.
This is when he was storming the Capitol on January 6th.
Here he is when he was working at Google.
Yeah, lots of, but it's really good.
And they are basically taking a ton of risk and solving some problems with IP.
As we all know, the IP outputs are where people think you're going to have to be really thoughtful or get a bunch of lawsuits.
On this app, you can opt in and make your persona, like Sam did, available to everybody to use.
So that whole concept of notable persons allowing their image to be used, you opt into that.
And that's pretty clever.
So you can make it so your friends can, you know, basically make videos of you, but nobody else can.
It's a thoughtful way of doing it.
However, very controversially, this thing had everybody's IP in it.
And you have to opt out if you don't want your IP used.
That's going to get them another whole collection of lawsuits to go with the New York Times and Ziff Davis ones.
And there have obviously been a bunch of settlements now.
Anthropic settling their book thing for $1.5 billion.
So anybody play with these tools yet?
And what do you think, folks?
And what's the point of these?
Do we think this is like a TikTok competitor?
I mean, do you think it's just
a way to source training data?
What do you think?
The closest thing is a TikTok competitor, but I use it.
I thought it was okay.
But again, the thing that I keep in mind whenever I try these apps for the first time is
today is the worst it'll ever be.
Sure.
It only gets better from here.
And so if you look at the starting point,
it won't take but a year where this thing, I think, or maybe two years, where this thing is legitimately excellent.
It has to get the scripting right.
It has to get the prompting right.
It has to be a little bit easier for you to use.
There was a bunch of prompts that I used that were rejected by
the IP, right?
Well, it just said use me.
But I couldn't validate that I was me.
And so you have to take a picture of yourself.
It's a little clunky, the app right now, but you're right.
It's going to get better in each version.
The one by
Zuckerberg is called Vibes.
I was looking at these, Sacks, and I don't know that this is intended to be like the next great social media app as much as it's a data play to get folks to generate training data.
When you see them, what are you, any thoughts on them other than interesting?
I haven't played with it yet.
So it's hard for me to say.
Freeberg, you got any thoughts on it?
Just
no, I don't have like thoughts.
I think, you know, we're kind of early innings.
I do think there's like new categories of media that none of us are really considering today.
Like traditional media, as I've mentioned in the past, is like centrally produced and then broadly consumed.
And I think that there's models of media that are going to emerge that are going to create new business categories or new business models and also new media categories that are all about kind of distributed production and not necessarily like central production, distributed consumption.
So that kind of changes things quite a bit.
And I think maybe this is going to start to open that door a bit.
One of the things I, because I thought about this and I mentioned this in the past, where I'm like, everyone's going to make their own movie, their own video game, their own music.
But there is this notion of like shared cultural context.
Like everyone wants to talk about, you know, how did the 49ers do this weekend?
Or did you guys see that show Adolescence?
Did you guys like, we want to have a conversation about some shared stories.
That's the basis of kind of societal interaction and memetics.
So I think like there are elements of this being the beginning of the enabling tools, but I don't think we've actually seen what's going to happen, which is how do you take one story and then create a distributed way of consuming that story where everyone experiences it and consumes it differently.
So I do think like this notion, it's like, hey, everyone's making fun of Sam or does some like maybe there's some cultural context about Sam Altman that we all share.
And then we're all like.
engaging with Sam Altman in different ways, you know?
So I think we're very early and we don't yet know how it's all going to play out, but I think that's really critical to watch.
There is something lost, because we used to all talk about the latest Tarantino movie or the latest, you know, Sopranos episode.
We don't do it anymore.
We do talk about tweets and stuff.
And, you know, there's other forms of media that we share.
But it's, it's not like it used to be where 30, 40 million people would see Raiders of the Lost Ark and it would be the discussion of the summer or whatever it is.
And so I literally bought 20 tickets to the new Paul Thomas Anderson, one battle after another, just so I could have a conversation with 20 friends about the new PTA.
And so people really are longing for this shared experience.
Paul Thomas Anderson, he did The Master, There Will Be Blood.
There will be blood.
One of the great.
He's a top-five director of all time, but I know you don't care about culture.
But is he like, is he like Michael Bay?
No, the opposite of him, actually.
Michael Bay makes things that go boom.
Paul Thomas Anderson
makes things that make you go.
Michael Bay is super cool.
Fun to hang out with.
Fun to party with.
Right.
Okay.
Well, way to bring it back to you.
You dropped a name here, Chamath.
I don't know Paul Thomas Anderson, but it was a heck of a film.
As Sachs, Sachs is actually very cultured when it comes to cinema.
Did you see it yet, Sachs?
I have not seen it yet.
No, it's
of the moment.
And it's quite a ride, exactly.
I just heard it was anti-conservative.
So it's kind of a left-wing take.
No, it kind of mocks the left and the right.
It's kind of mocking both extremes.
You love it.
I think you very much appreciate it.
All right.
I'll check it out.
Yeah, I would check it out.
Hey, I have an idea.
Why don't we find a topic that's interesting to talk about?
Yeah, okay, great.
Yeah, well, that's a well, if you contributed to the docket or showed up on time, maybe we could do that.
Unbelievable.
Just so you know, the inner workings right now, there's a little resentment in the group because one of us decides to change the time of the pod for four weeks in a row and then show up half an hour late.
I won't say which person that is, Sax.
Sax.
But here's an interesting topic: some red meat for you.
DeepSeek, the Chinese LLM, just dropped their latest model, 3.2 EXP.
It's faster, it's cheaper, and it has a new feature called DSA,
DeepSeek Sparse Attention, which makes it faster to do training and inference at larger tasks.
The key takeaway is it can reduce API costs by up to 50%.
The new model charges 28 cents per million input tokens, 42 cents per million output tokens.
Claude, which is a leading model from Anthropic that a lot of developers use, a lot of startups use, is like $3 and $15, so 10 times, 35 times more expensive.
Obviously, people are cutting their prices pretty quick.
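To put that pricing gap in concrete terms, here is a quick back-of-the-envelope comparison. The DeepSeek figures are the ones quoted in the discussion; the Claude figures ($3 input / $15 output per million tokens) are an assumption, since exact rates vary by model and tier:

```python
# Back-of-the-envelope token-cost comparison. DeepSeek figures come from the
# discussion above; the Claude figures ($3 in / $15 out per 1M tokens) are
# assumed and may differ by model and tier.
deepseek = {"input": 0.28, "output": 0.42}   # $ per 1M tokens (DeepSeek 3.2)
claude   = {"input": 3.00, "output": 15.00}  # $ per 1M tokens (assumed)

for kind in ("input", "output"):
    ratio = claude[kind] / deepseek[kind]
    print(f"{kind}: Claude ~{ratio:.0f}x the DeepSeek price")
```

The ratios land right around the "10 times, 35 times" figures mentioned above.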
But Sachs, this is your wheelhouse as our czar of crypto and AI for the United States of America.
What are your thoughts here on the continued execution of the Chinese government?
with DeepSeek.
Well, I want you to hear Freeberg's thoughts on this because he was paying attention to this, weren't you?
Yeah, I mean, I think there's a total re-architecture underway, and we're at the early stages of cost per token in terms of dollars and energy.
My understanding is there's actually a lot of work going on with U.S.
labs right now in a similar kind of track that's going to result in similar results.
Maybe they're a little bit ahead of the curve, but we should really pay attention to the curve.
I think, you know, what do the models say in terms of energy demand, in terms of cost per token, if these architectural changes really do drive down 10x, 100x, 1,000x, 10,000x
over the coming months.
And this is open source.
So just so everybody understands, it's available on AWS.
It's available on GCP.
At least 3.1 is.
I don't know if 3.2 is available there now, but I'm hearing from a lot of startups.
I don't know if you're hearing this in the field, Chamath, that they're testing it and playing with it, in some cases using it because it's so much cheaper.
Are you seeing that?
We are a top-20 consumer of Bedrock.
So let me tell you what it looks like on the ground.
We redirected a ton of our workloads to Kimi K2 on Groq
because it was really way more performant and frankly, just a ton cheaper than
OpenAI and Anthropic.
The problem is that when we use our coding tools, they route through Anthropic, which is fine because Anthropic is excellent, but it's really expensive.
The difficulty that you have is that when you have all this leapfrogging, it's not easy to all of a sudden just like, you know,
decide to pass all of these prompts to different LLMs because they need to be fine-tuned and engineered to kind of work in one system.
And so, like, the things that we do to perfect code gen or to perfect backpropagation on Kimi or on Anthropic,
you can't just hot-swap it to DeepSeek.
All of a sudden, it comes out and it's that much cheaper.
It takes some weeks, it takes some months.
So,
it's a complicated dance, and we're always struggling as a consumer.
What do we do?
Do we just make the change and go through the pain?
Do we wait on the assumption that these other models will catch up?
So
we are making tools now.
And by the way, I can't do my channel.
It's making it easier to switch between them.
No, and like, you know, this weekend, a different company with a huge model
came to us and gave us the preview of their next-gen model.
Okay, so, and it's incredible.
But then when I sit on Monday morning with my team and I'm like, okay, what do we do?
We don't know what to do.
Do we cut it?
Do we move over and say, great, we'll refactor all these workloads to run on this new model?
It's a really hard problem and it's getting worse the more complicated tasks that we undertake.
Okay, and just for people who don't know, Kimi is made by Moonshot AI.
That's another Chinese startup in the space.
Sacks, your thoughts?
Well, I think this is actually a really interesting topic, this topic of open source.
I'm a big fan of open source software because it's a check on the power of big tech in a way.
What we've seen in the past and the history of technology is that these major categories end up getting dominated by one or two big tech companies and they have all the power and control.
And open source provides an alternate path, right?
Because the community of open source developers just puts things out there and then you can take it and run it on your own hardware.
And you're not dependent, right?
It's a path to sort of software freedom, if you will.
So so far so good.
I think the thing that is now tricky about this is that all the leading open source models are from China these days.
China has made a really big push on open source.
Obviously DeepSeek is an open source Chinese model.
That was the first big one.
Kimi is one.
Qwen from Alibaba.
And so I think that if you want the US to win the AI race, then we're all kind of of two minds about this.
On the one hand, it's good that there are open source alternatives to the closed source proprietary models.
On the other hand, they're all coming from China.
Now,
there were some American efforts that have been important.
So Meta, most notably, has invested billions and billions of dollars in Llama.
But the release of Llama 4, I think, was considered disappointing by a lot of people.
And now there's statements by Meta that they might be backing away from open source and just going proprietary.
OpenAI released an open source model, but it's nowhere near their frontier.
And there are some startups that are trying.
So there's one called Reflection that looks promising, is developing an open source American model.
But so far, this is maybe the one area in AI where the US is behind China's sort of open source models.
I'd say every other part of the stack, closed models, chip design, chip manufacturing, semiconductor manufacturing equipment, every other part of the stack, even data centers, I would say we're ahead.
But this one area of open source is a little bit concerning.
Interestingly, Sachs, the
two things of note is OpenAI.
The open was originally that they were supposed to do open source.
So that's kind of hilarious.
But the second is that Apple, which is the furthest behind of everybody, they have a really interesting open source model.
So when you're behind, like Apple is or the Chinese were, you're open.
You do open source.
And when you're ahead, like OpenAI became with ChatGPT, you close it down.
But
OpenELM, yeah.
Efficient language models from Apple.
Keep an eye on that one.
Can I tell you what's going to make this open source, closed source battle even worse?
Because effectively what this is is the U.S. versus China.
The U.S. is closed and China is open, at least at the scaled models that work.
But that doesn't have to be the case, right?
Because we could release open models too.
No, no, no, you're right.
I'm just saying today, if you look at the conditions on the field, the closed source, highly performant models are American.
The open source, highly performant models are Chinese.
And you would say, okay, well, what is the next downstream thing?
It's what Freeberg mentioned, which is the energy and the cost of generating these output tokens.
And I talked to somebody yesterday who runs a huge energy business.
And I have to tell you, it's not in a good place, meaning you saw, I think, this week where the residents of Indianapolis were able to reject or get their city to reject a billion-dollar data center that Google was going to build near Indianapolis, largely because of concerns of price inflation around electricity.
And what this energy CEO told me is, look, the next five years are baked.
And if we don't find some compelling solves,
and I'll tell you what the two ideas were, but if we don't find some compelling solves, electricity rates will double in the next five years.
Now, if you think about how then consumers will view the use of AI.
And then if you think about companies like us and
trying to use the cheapest version so that we are minimally impacting the downstream cost of these things, because it will become an energy problem, this is a very complicated thing.
Now, his idea, and it's a huge PR crisis, because if you want to take big tech, which is already viewed negatively and make their perception even worse, if you start to finger point to them and say, these guys are the reason my electricity costs have doubled in the last five years, that is no bueno for them.
And they need to find an off-ramp ASAP.
It's a bad look because you're saying water
is doubling.
And this could take your jobs, right?
Yeah, it's terrible.
Whether you believe that's true or not, that is the perception.
There are two off-ramps that he suggested, which I think are worth considering.
Off-ramp number one is what's called a cross-subsidy, which is essentially to say that they pay a rate card, which they can absorb with all their free cash flow, materially higher than what other ratepayers would pay in that geographic area.
So, the homeowner, his or her electricity costs stay flat to down.
The data center costs are higher, and it's the Metas, the Googles, the Apples, the Amazons who have hundreds of billions of free cash, they absorb it.
That was idea number one.
And idea number two is to start to set up some mechanism so that they can install things like batteries at every single home in and around these data centers to allow those homes to have a better chance of actually absorbing some of this inflation without having to pay it.
That's a really good idea.
And this is playing out, Sachs, in Virginia, in a major way, because that's where data center alley is.
And 40% of the energy in Virginia now is going to data centers.
This is becoming acute.
So, what are your thoughts here, Czar?
Well, Chris Wright spoke to this pretty well at the All-In Summit in terms of what we have to do.
I mean, there's no question that AI is going to create a huge need for power over the next five or 10 years.
I think on a five to 10-year timeframe, the answer is probably nuclear, or at least that's a big part of it.
But nuclear takes at least five years.
Within the next five years, it's probably gas, you know, natural gas.
But the issue there is there's a huge backlog for gas turbines, like basically the engines that burn the gas to create power.
And there's like a two to three year backlog for those to spin those up.
So the question is: what do you do in the next few years?
And I think Chris Wright talked to this, and I've heard this from other energy executives, which is we just need to squeeze more out of the grid.
If we were to shed just 40 hours a year of peak to say backup generators, diesels, things like that, you could get an extra 80 gigawatts out of the grid.
This is what one energy executive told me.
The reason is because they build the grid and they have regulations on it based on the peak, right?
Which is basically the coldest day in winter or the hottest day in summer.
And the same way that you, you know, you build your church for Easter Sunday, the rest of the year it runs at 50%.
Same thing with the grid.
And so if they could just reduce the peak 40 hours, if they could shed that load to backup, to generators, to diesel, things like that, then they could run the grid to squeeze an extra 80 gigawatts out of it.
And I think that's the bridge over the next few years that we need to then get a lot more gas and then eventually some nuclear as well.
But unless you want to keep talking about electricity, I think there's some other things to talk about on the open source because I think it's a pretty interesting topic, actually.
And if we can just go back to it, I was just trying to paint the case that my economic model for going to open source is better, because I can't pay $3 an output token and then also pay for all this.
Yeah, actually, Chamath, I want to ask you: when you're running Kimi or something like that, I think it would be good to just explain to the audience how this works, because I think there's a lot of confusion
about what it means to be an open source model.
A lot of people think that when a Chinese company publishes one of these models, it's still somehow theirs.
No.
But the reality is, once it's published, it's no longer theirs.
It belongs to anyone who wants to take that code.
And you're not running that on a Chinese cloud or something like that.
The data is not going back to China.
No.
You're taking that model and you're running it on your own infrastructure.
Right.
Can you just explain this?
Yeah.
So when I first started 8090, my only solution was Bedrock, which is a service that Amazon provides that allows you to essentially get inference as a service, right?
So as we are building our product and we need inference and we need inference tokens, Bedrock basically handles everything.
So it's what AWS is, but for this vertical of AI, right?
So they have the servers.
These are in American data centers.
They're managed by Americans.
And what they do is they take a handful of models and they make sure that they can support usage of those models.
That was how we started.
But as with everything, we have to manage our costs and our operating profile.
And so we're always looking for: are there other models and other places other than Amazon that can service our needs?
Because in fairness, Amazon is very expensive.
So a different company that I helped get off the ground, Groq with a Q,
they have a cloud.
And what they've been doing is they've been working initially with Llama.
Then they worked with OpenAI to bring their open source model, but they also brought online a couple of these Chinese models.
And what they do, exactly as you said, Sax, is they take the source code, they basically implement that.
They fork it.
They fork it.
And now it's implemented domestically
on American soil by Americans inside of an American data center.
So China gave us kind of the way, the roadmap, if you will, the architectural plans, but we, as in the American company, in this case Groq, built the house and then launched it.
And so now we, as 8090, basically made a cost decision to move to this open source model because it was just materially cheaper.
Right.
And what Groq with a Q will give you, you're the application company, 8090.
They're like Amazon for us.
They'll give you an API.
Exactly.
So the same way, if you want to use a closed model like OpenAI or ChatGPT, they'll give you an API.
You submit prompts.
They give you answers, basically tokens in, tokens out.
What Groq does is they will take this open source model, run it on their own infrastructure, and then give you the API so that you can get tokens in and tokens out through their API.
So for me, as a consumer, it reduces us to a pure economic decision.
Where is it cheaper?
And it's not dissimilar to the last generation of the internet.
You'd run on AWS, but then you'd bid it against GCP.
You'd bring in Azure.
You'd say, who is cheaper?
Because ultimately, you're running a database.
You're running, I don't know, pick, pick your service, Snowflake.
It didn't really matter where it was.
You were just really trying to find the cheapest vendor.
Right.
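The tokens-in, tokens-out abstraction being described is why swapping vendors can reduce to a pricing decision: most hosts expose an OpenAI-style chat-completions endpoint, so switching is often just a base-URL and model-name change. This is a hedged sketch; the endpoints and model names in the comments are illustrative assumptions, not verified values:

```python
# Minimal sketch of vendor-agnostic inference: the same request shape works
# against any OpenAI-compatible endpoint, so swapping providers is a config
# change. URLs and model names in the comments are illustrative assumptions.
import json
import urllib.request

def build_request(base_url, api_key, model, prompt):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body.encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

def chat(base_url, api_key, model, prompt):
    # Tokens in (prompt), tokens out (completion text).
    with urllib.request.urlopen(build_request(base_url, api_key, model, prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Swapping vendors is one line of config, not a rewrite, e.g.:
# chat("https://api.groq.com/openai/v1", key, "some-open-model", "hello")
# chat("https://api.openai.com/v1", key, "some-closed-model", "hello")
```

The remaining cost of a switch, as noted above, is re-tuning prompts and pipelines for the new model, not rewiring the plumbing.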
Now, here's what's compelling about it.
So first of all, like you said, it's cheaper to just run it on your own infrastructure if you know what you're doing.
Also, enterprises like it because it's more customizable.
And there's going to be a lot of fine-tuning of these open source models for specific applications.
100%.
And enterprises frequently want to run these models on-prem in their own data centers because they want to keep their own data on their own infrastructure.
But now the challenge is you've got these models that they're no longer Chinese.
They've been forked.
It's an American company, but they originated in China.
That's right.
And they could be running on some critical infrastructure.
And
that does raise issues.
I mean,
what is Groq doing, I guess, to test whether these models are safe, whether they can be backdoored?
I mean, how do they think about that?
They have an entire pipeline of stuff that they do.
The details of which I don't exactly know because I've not asked exactly what they run through.
But they're red-teaming things.
They go through an incredibly rigorous process.
They basically do like safety testing to make sure.
Absolutely.
So, I mean, because a lot of people think that if you run a Chinese model, the data must be going back to China, but it's not if it's being run on your own infrastructure.
I think the issue is more theoretical: that, like, could a Chinese model somehow be backdoored with an exploit or vulnerability somehow?
Well, if you take a compiled version, sure.
But if you just take the open source and you do it yourself, no.
Right.
Well, that's the thing.
So, I mean, and if someone did discover a vulnerability, it would get widely shared in the community very, very quickly.
And then I would say that.
I think at this point, you can expect that every single major company that is in security, that is in a cloud vendor, and also every single major model maker is trying to prove and invalidate how the other models are inferior or bad in some way.
And so that's where the competitive cycle, I think, is really valuable because you do have the best and the brightest computer scientists.
Like, you know, yesterday, a certain person,
he's Italian, that's how I know,
the leading security guy at one of these model makers, I was just talking to him.
He's in charge of this security stuff.
They're hammering everything to try to figure out whether there's a vulnerability because it slows these other folks down.
So that made me feel quite positive that we haven't seen anything yet on any of these models, which is to say that generally everybody has actually been a pretty good actor so far.
The other piece to this puzzle, Sachs, is there's a lot of crypto distributed projects.
The one I've been working on is BitTensor and Tau.
I think you've also done a deep dive on this, Jamath.
And I'm a partner in
an emerging crypto fund called Stillcore Cap, and we're buying Tau and we're looking at BitTensor and all of these subnets that are being made to do distributed computing.
And this is a big push for Apple as well.
A lot of these M4 Mac minis you've seen out there, their plan is to put all of these LLM stacks on people's personal computers and then distribute them and have this, like, SETI@home with an incentive layer.
And I think that's going to be a big part of this.
People are not going to want their AI jobs to go to the cloud necessarily.
They might want to do it locally.
And I think that's where the phones and all this silicon is going with, you know, Apple's big focus on it.
It's going to be
a brave new world.
Yeah, you bring up an interesting point.
You know, in the early years of this AI revolution, I'm talking about like 2023, 2024.
I mean, it's just started in the last three years.
There was this analogy that AI was like nuclear weapons.
I mean, you hear the doomer crowd, the safety advocates saying this, that like AI was this really threatening technology.
And they would even say things like GPUs are like plutonium, you know, things like that.
And I think that model of the world is just wrong, right?
Because what we're seeing is, and Jensen actually had a pretty good line about this.
He says, nobody needs nuclear weapons.
Everyone needs AI.
And it's true, like every consumer, every business is going to want to run AI.
A lot of them are going to want to run it on their own infrastructure.
Consumers are going to want to run it on their own phone.
You're going to have an AI that's highly personalized to you.
And so everyone's going to have AI.
It's not like a nuclear weapon where we want to stop all proliferation.
AI is first and foremost, a consumer product that is going to proliferate.
And so the question is,
bearing that in mind, how do you then create an appropriate response for the national security risk?
But this idea that we're just going to stop AI and only have two or three companies who have it, which I think was the view a few years ago among policymakers, is just ridiculous.
Yeah, they were thinking in very centralized terms.
And I think what we're seeing now is, regardless of what certain policymakers might want, it's already highly decentralized, right?
You've got five major American closed-source companies.
You've got eight major Chinese models.
And then you've got everything that's happening with startups.
So this is going to be highly decentralized.
And verticalized, right?
There are all the Hugging Face models.
There's specific ones on images, specific ones for video.
Like it's going to be super fragmented.
The vast majority of this activity is benign.
I mean, that's the thing.
These are business solutions.
These are consumer products.
These are viral videos.
Most of the stuff does not rise to the level of a nuclear weapon or something like that.
This is a good chance for us maybe to talk about AI regulation.
There's a lot of, and maybe we'll get to Wikipedia as well, but there's a lot of states that are starting to look into regulating AI.
California SB 53, the Transparency in Frontier Artificial Intelligence Act, is working through the system.
It's going to serve as a template possibly for other states.
It was introduced in January as an alternative to the more sweeping bill, SB 1047.
That earlier bill would have required AI developers to conduct extensive safety tests before rolling out their models.
It got a lot of pushback from tech, obviously, and Newsom ultimately vetoed it.
But this new law focuses only on the most advanced large frontier models that we just talked about.
And it requires companies to release a framework for knowing how they're approaching safety issues, including standards and best practices, whatever that means, and however safety is defined.
These are models, I guess, in this definition, from companies that have half a billion in annual revenue.
I don't know how they picked that out, but it requires these companies to release transparency reports before deploying.
So they're going to be like the App Store, I guess, if this gets through: approving frontier models and their updates.
Oh, that sounds great.
You got to go to the government to release a new model.
Your thoughts,
David Sachs?
Yeah,
I think it's very concerning.
There's a regulatory frenzy happening at the states right now.
Just to be very clear about what happened in California, there was an original bill, SB, was it 1047, that was incredibly obtrusive.
Newsom vetoed that, but now they've passed a new one, which is called SB 53.
And like you said, it's not as burdensome and intrusive as the previous version.
It focuses on making frontier AI models report safety risks.
They're supposed to report if they have...
Can I stop you there for a sec, Sachs?
What is the safety risk they're going to be required to report?
It's such a nebulous
term.
Safety, what?
That it's going to jump out of the computer and murder me?
Safety that it's going to give me the wrong answer?
They're supposed to report on potential catastrophic harms related to cyber attacks, bio threats, model autonomy, which is the Terminator scenario.
And they're supposed to
let the government know if there's a safety incident.
I mean, look, all these things are quite nebulous.
So it's almost like a nuclear power plant having to report if there was an incident.
Are any of these, in your mind,
thoughtful?
Jake,
let me just interrupt for a second.
I think it's the equivalent of saying, I need any factory to report to me on the risk of something like a nuclear explosion, even though the factory might not be working with nuclear material.
You see, it uses...
That's what I'm trying to get at here.
But yeah, I mean, it effectively uses terminology that makes everyone nod their head and say, oh, yeah, that makes sense.
That's a good idea.
When the reality is that the legislators have actually no concept of what they're talking about.
They have no concept of how these models are built.
They have no concept of how they're deployed.
And they're using language that they think is inevitably going to result in giving them ultimately tools and control over a private market system.
And that's fundamentally what I think a lot of this comes down to.
Think about this issue that's going on with free speech in California, this hate speech bill, SB 771, that's sitting on the governor's desk to be signed right now, where effectively the state of California's administrators have the ultimate say over what is and isn't deemed hate speech.
If you think about it, if they had this bill in Alabama during the civil rights era, there would never have been the ability to have the protests and realize the equal rights that arose from the civil rights movement, because the government would have said those are inappropriate hate speech things that you guys are saying.
And we're now putting those same tools in the hands of the legislators.
They're going to do the same thing with AI.
They're giving onerously powerful tools to the legislators to let them decide what is and isn't appropriate for private market actors when they actually have no sense and no sensibility about what they're talking about.
So, yeah, and actually, I think that's a really important point.
Just want to give you some stats on this regulatory frenzy that's happening.
So all 50 states have introduced AI bills in 2025.
There's been over a thousand bills in state legislatures.
118 AI laws have already been passed across the 50 states.
The red state proposals for AI in general have a lighter touch than the blue states, but everyone just seems to be motivated by the imperative to do something on AI, even though no one's really sure what that something should be.
Exactly.
And there's no real agreement on like what all these AI regulations are supposed to do.
So they're just making things up.
Or what the risks are.
Yeah, that's what I'm trying to get at.
So Sachs, were you going to finish the point about California?
So look, California, they've kind of gotten to this point where now it's about reporting on all these safety risks.
And if this is all it was,
then it would just be basically a bunch of red tape and it wouldn't be so bad.
The problem is that you've got to multiply this by 50 states.
You've got 50 different states, each with their own reporting regime, which is going to be a trap for startups.
They've all got to figure this out about what they're supposed to report on, what the deadlines are, who to report to.
I mean, this is like very European-style regulations, actually, maybe even worse than the EU, because the EU tried to basically harmonize to get to one authority.
We're going to have 50, they're going to have one.
But the other problem is that this is just the camel's nose under the tent.
So even in California, Scott Wiener, who's the legislator who did SB 1047, now he did this.
He's got a block of legislators, and they have 17 more AI regulation bills that they want to pass.
So this is just the beginning.
And if you want to see where this is going, okay, look at Colorado.
We should talk about this Colorado bill because this has already been passed into law.
It's called SB 24-205, Consumer Protections for Artificial Intelligence.
It was passed all the way in May of 2024.
So it was one of the first to pass, even though they didn't really know what they were trying to regulate.
No one's quite sure how to implement it.
But what the law does is it bans something they call algorithmic discrimination.
Okay.
And algorithmic discrimination is defined as unlawful differential treatment or disparate impact based on protected characteristics.
So things like age, race, sex, disability.
If any of those factors drive an AI decision and it results in a disparate impact, then both the developer of the AI model and the deployer, which means basically the business that's using it, can be in violation of this law and they can be prosecuted by the Colorado Attorney General.
Let me give you a practical application here.
So let's say that you got someone like a mortgage loan officer who's reviewing applications.
Okay.
And let's say they don't even discuss race.
It's not on the form.
Okay.
They're just using race neutral criteria like a credit rating or financial holdings, something like that.
If the result of their decision nevertheless had a disparate impact on a particular protected group, its decisions could be found to be discriminatory.
And moreover, the developer of that model could be liable, even though their model just gave an answer that under the circumstances was truthful.
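The mortgage example can be made concrete. Disparate impact is typically assessed by comparing selection rates across groups, with the EEOC's four-fifths rule as the common rule of thumb. A back-of-the-envelope sketch with made-up approval data; nothing below is drawn from the text of the Colorado statute:

```python
def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns each group's approval rate."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest.
    Under the four-fifths rule of thumb, a value below 0.8 is
    often treated as evidence of disparate impact."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: group A approved 8 of 10,
# group B approved 5 of 10, using race-neutral criteria.
loans = ([("A", True)] * 8 + [("A", False)] * 2 +
         [("B", True)] * 5 + [("B", False)] * 5)
```

On this toy data the ratio is 0.5 / 0.8 = 0.625, below the 0.8 threshold, even though no protected attribute appears anywhere in the decision criteria; that is exactly the liability scenario being described.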
The only way that I see for model developers to comply with this law is to build in a new DEI layer into the models to basically somehow prevent models from giving outputs that might have a disparate impact on protected groups.
So we're back to woke AI again.
And I think that's the whole point.
That's the whole point of this Colorado law.
Let's get Chamath in on this discussion.
I think that this is really, really dumb, what's happening.
If you have 50 sets of rules, what you will have are some conservative versions of AI.
You'll have some progressive-leaning versions of laws.
These 50 sets of laws will essentially just render this industry impotent and incapable of maximizing itself and actually doing what's necessary to drive productivity and GDP on behalf of the country.
There is no conceivable way, as Freeberg said, that anybody in Sacramento or Little Rock or, you know, name your state capital, will have the intellectual wherewithal to get to an answer as good as the federal government will, and as Sachs will, just to be totally honest with everybody.
So what should happen here is that there needs to be a complete moratorium and the federal government should be given the time to figure out what the framework should be so that there is a one size, one set of rules.
Now, if that doesn't happen and this is allowed to stand, there is a perfect example of where this has happened before, and that is in the car market.
Because in the car market, what happened was there is a complete set of rules in California
for emissions that is entirely different than the rest of the country.
And you can look and see what it did.
Now, that's just two sets of rules.
And what do you think that's going to be?
Let me finish.
Okay.
And so what these two sets of rules, going from one set of rules to two, what did it do?
It drove most of these companies toward barely breaking even or massively losing money.
It has been something that the entire industry has been fighting back on for now 10 plus years.
Now, can you imagine instead of two sets of rules, you have 50?
I think you know what the economic consequences will be.
You'll render this entire category incapable of being able to generate any positive economic output.
So, I guess the steel man, if we were to make one, is transportation, education, abortion, taxes, alcohol, cannabis, I think I mentioned.
Those are all state...
Cannabis is a poison, and it is the worst thing in the world for our children.
Okay, that's your opinion.
Great.
But should states have some...
Okay, I know your position on that.
I'm talking about the different states.
It turns kids into zombies.
Perfect.
I don't disagree with that statement.
The question I'm asking is, we let states, just to steel man this for the audience, decide how they want to execute against things like taxes, alcohol, education, abortion, transportation.
Should states have some rights here, David Freeberg?
I'm just steel manning.
If this is the most transformational technology of our lifetime, shouldn't states have a say?
Or what's the argument for states having a say? Steel man that.
It's the United States.
It's a federated republic.
I am 100% in favor.
I think what we're pointing out is the idiocy of these decisions.
Number one, number two, so the internet created a virtual network system
for media, communications, content, productivity.
So, you know, we're talking about something that stretches across the federal landscape.
What needs to happen is there needs to be federal preemption.
So the federal government, Congress, needs to pass a law that says, here are the standards that we are going to set, or here's the rules that we think are relevant for AI.
Here are the things that states can and can't do if we want this country to succeed on the opportunities and advantages that will arise from AI.
The second thing I'll say is that many of the laws being drafted by these state legislators are regulatory oversight laws, not laws that define a new civil or criminal penalty for something you did that caused harm.
They are specifically written in such a way that they say, we need to have oversight, we need to have review, we need to have control over your systems because we get to review them.
They don't say, for example, if your AI kills someone, you are going to jail.
That is what they should say.
And in fact, one could argue that much of the civil and criminal statutes that already exist in the states cover much of the harm that is already being talked about as the potential safety risks associated with AI.
You don't actually need more because at the end of the day, if the AI system, the producer of the tool, the user of the tool, causes harm to someone or something or some business, there is already statute to protect against that harm.
The statute that's being drafted is all about oversight.
It is about giving the government the regulatory control, the ability to go in and interrogate and investigate and create approval systems on whether or not what you're creating as a private market business or citizen is appropriate to be used.
And it is one of many points of overreach that this federated republic has been able to withstand historically.
And after 250 years,
the day may be up.
So, Sachs, in the case of a large language model being constructed in a non-thoughtful way, so that it could be used to do cyber attacks and dox people or, I don't know, be used for impersonation, the law should be able to...
I'm trying to think of a scenario here, beyond the security things they cite, that would be concerning and that the law should cover.
I don't know, if OpenAI allowed their tool to go steal credit cards, that's already illegal, right?
It's already illegal to conduct a cyber attack.
And if you manage to take an AI model and use it as a tool to perform a cyber attack, that's still going to be illegal.
Same thing in Colorado.
They've got this bill that they want to outlaw algorithmic discrimination, but discrimination is already a violation of the law.
So what they're doing there is they're not just going after the business that's performing discrimination.
That's already illegal.
What they want to do is get into the tool itself.
And they want to make the developer liable if their model creates an output that supposedly ends up creating a disparate impact in a decision.
And imagine if we did this with the internet.
Imagine if we went back to the start of the internet and we said, hey, if someone uses the internet to do something bad, therefore the government needs to approve everything that's done on the internet.
I mean, we could do it.
You can talk about mobile communications.
You can say, okay, Verizon's responsible if people use it in a terrorist attack.
Verizon's not responsible if people use it to coordinate a bank robbery.
That's so obvious.
So, yeah, this does seem like it's overreach.
Sax, what is the situation on Capitol Hill and having a conversation about creating federal preemption, passing a bill that says the federal government's going to set standards around AI utilization that states cannot kind of intervene on and creating a mechanism that allows this market to develop and allows things to prosper?
Well, here's the situation is in the Big Beautiful bill, there was a federal moratorium on state AI regulation.
And I think it was well-intentioned and well-motivated by the fact that we do see this huge knee-jerk reaction to state legislatures wanting to do something without knowing what it is they want to do.
However, there was not enough Republican support.
There wasn't enough Republican or Democrat support for it.
And I think that part of the reason why Republicans in particular have been opposed is just because there's so much anger at the big tech companies right now for all the censorship that happened, especially during COVID, but even before.
And you still see it.
You saw it with this Wikipedia news where they're banning all conservative publications from being sources.
There's just a lot of anger towards the big tech companies and tech bros.
And basically, there's a lot of Republicans who don't want to get on board with anything that is perceived as helping tech.
Now, the reality is, who does that ultimately benefit?
I mean, ultimately, it benefits the blue states who are in the lead on this type of regulation.
It's Gavin Newsom who just signed this new bill.
It's, you know, again, it's Jared Polis in Colorado who ultimately signed this Colorado law.
And if there is no federal standard, what you're going to see is that the blue states will drive this ban on quote-unquote algorithmic discrimination, which will lead to DEI being promoted in models, which is what the Biden administration wanted.
You will see the return of woke AI at the state level.
It's not something any Republican should want.
I mean, I understand the justifiable anger at these tech companies because their behavior in the past has been really bad towards conservatives.
I mean, they did engage in a lot of censorship, shadow banning, demonetization, debanking, all that kind of stuff.
So I get it, but we have to look at what the results are going to be.
And a single federal standard is the best way to make sure that we do not have woke AI, that we do not have insanely burdensome regulations that allow China to basically get ahead of us in this AI race.
And it's to ensure that we actually have truthful, unbiased AI instead of highly ideological AI.
Do you think you can get it done?
Let me go to polymarket.
The U.S.
enacts AI safety bill in 2025.
Not getting done this year.
2025.
Okay, well, here's the good news.
It doesn't really matter what I think.
The important thing is what President Trump thinks.
And in his July 23rd speech on AI, he was really clear that there needs to be a single national standard for AI.
He said it was impractical.
It doesn't make sense to have 50 different regulatory regimes and that that could cost us the AI race.
And he would like there to be a single federal standard, just like he promoted for vehicle emissions, because again, we didn't have a federal standard there.
And then it was California taking the lead.
And then the blue states set the standards.
President Trump didn't think that made sense for California to be setting the rules for the whole country.
So the feds preempted that.
And I think we should do the same thing on AI.
That's what the president basically said in his speech.
So I think the administration ultimately will support this.
And
I think more Republicans will come on board as they realize what the blue states are doing here is not helpful for conservatives.
It's not helpful for having an unbiased information environment.
I'm torn on this one.
I moved to the great state of Texas to have certain freedoms that we have here that we don't have in other states.
And I kind of like the idea of states having certain rights, but I don't like the way these laws are being written.
So I remain torn, and the devil's going to be in the details on this one.
Chamath had to bounce.
Well, do you like the Colorado law?
Would you like to have...
No, of course not.
So it's how these laws are executed that are my concern.
And I had this concern with gun rights in California.
Like, you should have the right to own a gun.
And then they're just like, well, you can't have a gun.
And it's like, okay, well,
you know, and then the states have to go back and forth in these lawsuits to see can New York City, San Francisco ban guns?
And one of the reasons crime is out of control in some of these places is because homeowners can't have guns and the stand-your-ground laws, et cetera, et cetera.
And one of the nice things about this country is you can pick a state where, hey, I want to live in a state where abortion's legal.
I don't want to live in a state where abortion is legal.
I want to live in a state without taxes, state taxes, ones with taxes.
You get to choose.
It's one of the powerful things, and we get to debate these things in real time.
So I do have a concern of centralized government and overreaching federal governments, especially with the way executive power is being deployed these days from Obama to Biden and to Trump.
This is too much executive power in my mind.
So I have concerns on both sides of it, but it's
a devil's in the details of the execution, and I trust you to come up with something good as our civil servant.
So come up with something good, Sachs.
Well, we will.
But
just to go back to one of your points on states' rights, look, there's a commerce clause in the Constitution, and the reason that exists is to create a seamless national market economy.
One of the reasons why the U.S.
has such a strong economy, why it's the number one economy in the world, is because we have a single national economy, which is the largest market for products.
Imagine if we had 50 separate markets, each with their own rules and regulations, and then doing business in the U.S.
would be like Europe.
Remember, one of the reasons why the U.S.
dominated the internet in the 90s is because if you launched a startup in America and you won the American market, you were basically right there in terms of winning the global market.
Whereas if you were in a European country and you won your local country, whether it was the UK or the Netherlands or France or something, you would have just won a small part of Europe.
And then you would have to go figure out all the rules and regulations to get into just the other 30 European countries, never mind the rest of the world.
It's that seamless national market that's given our companies the scale they need to then dominate across the world.
And if you restrict that by making every state have different laws for every product, we're going to lose that massive advantage that we have.
Here's the thing.
You know, I look at the car standards which Chamath brought up, Freeberg, and
Trump, I guess, doesn't want to have California having their own car standards.
That got rid of 70% of the pollution in California.
I was in favor of that.
I wanted to see higher standards, not lower standards, because I don't want to pollute.
And the smog over California was just, especially Los Angeles, was insufferable at times.
Those standards, which led the nation, which have led the world, did they add extra cost?
Of course.
But it made California a great place to live because it's car culture there and people were dying and taking years off their lives from the smog.
So that's an example of it, I think, working really well.
And I am for cannabis regulation and for it being legal.
And California led the country in that.
Whereas other states want to ban cannabis and they don't want to have higher standards for pollution.
I like the fact that California led in those two ways.
Now, it's all in the execution, of course.
And so, the problem is that because California is such a big market, those vehicle emission standards that may or may not have been right for California apply to every other state because the car companies can't manufacture different models for different states, nor should they have to.
So, they did, though.
So, practically, they did produce different models for different states, but yeah, it'd be different.
You want to have different AI models for every state?
You want to have a DEI model for Colorado?
You want to have...
In the case of cars, I do like the fact that California did push the car companies to make cleaner cars.
Now, in the case of AI, that's why I was asking you which safety concerns you have, because I'm trying to find a safety concern that we can all say is a legit concern for AI, and we can't come up with one.
So that's the interesting part about this is like they're obviously overreaching laws right now because we can't come up with something.
where AI is going to jump out of the computer and do something in the real world that regular laws don't account for.
We can't come up with an example here, and we're deep in this industry.
Can you come up with a single example of AI
doing something bad in the world that we should be concerned about that isn't covered by existing laws?
I can't.
Somebody in the audience figures that out, please email me.
Another amazing episode of the All-In Podcast.
Great to see you, Chamath, who had to jump, David Freeberg, and of course, my bestie.
My bestie, David Sachs, our czar, getting it done in DC for the country.
Well done, and we'll see you all next time on the All-In Podcast.
Bye-bye.
Rainman David Sachs.
And it said, we open source it to the fans and they've just gone crazy with it.
Love you, S.
I'm the queen of quinoa.
Besties are gone.
That is my dog taking a notice in your driveway.
Oh, man.
My appetash room meet me at what it's like.
We should all just get a room and just have one big huge orgy because they're all just useless.
It's like this like sexual tension that we just need to release somehow.
What you're about
to be your feet.
We need to get merch.
I'm going all in.
I'm going all in.