Science vs. Silicon Valley with Adam Becker
In this conversation, Kara sits down with her “soulmate” to dissect and debunk the narratives that undergird the less-than-benevolent Big Tech agenda and uphold the status quo. They also discuss why some ideas, like Musk’s dream of colonizing Mars, are scientifically impossible; the fallacy of effective altruism; the probability of existential threats against humanity; and how all of these factors add up to more power and more control for the techno-oligarchy.
Questions? Comments? Email us at on@voxmedia.com or find us on Instagram, TikTok, and Bluesky @onwithkaraswisher.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Listen and follow along
Transcript
And my video looks a little tilted.
Only I need to look prettier.
Hi, everyone, from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher, and I'm Kara Swisher.
My guest today is Adam Becker, an astrophysicist and journalist, and the author of More Everything Forever.
In it, he argues that Silicon Valley's biggest fantasies, from colonizing Mars to building godlike AIs, aren't just far-fetched.
They're a convenient cover for a racist, authoritarian power grab.
Oh, just the kind of guy I like to talk to.
Adam doesn't pull any punches, of course, neither do I.
And as a PhD astrophysicist, he actually knows what he's talking about when it comes to the science fiction tale Silicon Valley has been spinning.
I'm excited to talk to him because this is my wheelhouse.
I've been talking about these issues forever.
And a lot of this is nonsense.
Many years ago, I actually interviewed an astrobiologist who was telling me how ridiculous it was to want to live on Mars because it's miserable and we will die as small, stupid trolls.
And instead, they've decided to become small, stupid trolls on Earth all by themselves.
I just think this is critically important to keep being reminded.
These people do not have all the answers, and Adam does a great job in doing that in this book.
Our expert question comes from journalist and science fiction writer Cory Doctorow.
This is a smart one, so stick around.
Adam, thanks for coming on on.
I appreciate you being here.
Oh, thanks for having me.
I feel like we're soulmates.
I had to have you.
I, you know, I read your book and I loved it.
And, you know, something I've been talking about a lot.
And you actually put pen to paper and articulated everything I feel about some people, you know, the Silicon Valley,
largely men.
But let's start talking about this because your book, More Everything Forever, and I'll read the subhead, AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, kind of says it all.
That's kind of pretty much been my last 30 years.
So let's start with Silicon Valley has so many myths.
And one of my first stories when I got there was the lies Silicon Valley tells itself, including we're all equal, equal,
all we care about is community, we're here to help humanity, etc.
But they perpetuate this myth that's about liberty, science, protecting humanity.
And in your book, More Everything Forever, it's essentially a counter-narrative
that really does make the case that a lot of these leading tech billionaires, people like Elon Musk, Sam Altman, Marc Andreessen, Peter Thiel, and Jeff Bezos, are scientifically illiterate, wannabe authoritarians who will lead us to environmental collapse unless we stop them.
So talk about that messaging and the narrative.
I never believed it for a minute, but it was amazing how they stuck to it right from the beginning and probably believed it themselves on some level.
Yeah.
I mean, look, they really think that because they have more money than anybody else in the history of humanity, that means that they are the smartest people in the history of humanity. And that's just not true, right? You know, they are not experts on everything.
They are arguably not experts on anything other than how to make a billion dollars.
But they reject the idea of independent expertise, that people might know more than they do about science and technology because they think they've drunk their own Kool-Aid.
They're high on their own supply.
So where is the source of this?
Because one of the things that I noticed from the beginning is even when they weren't billionaires, they were like this.
This did not come from the money. This came from an idea. You know, I always say to people, it's not what people lie to you about, it's what they lie to themselves about. What is your feeling of the origin story here?
I think that there was a genuine desire to make money, and also, like, you know, these people are not all the same, right?
Some of them are clearly just cynical.
Some of them are true believers, right?
But I think there was an idea that came partly from science fiction and partly from the sort of like weird, you know, Californian mix of counterculture and libertarian ethos that there was like a happy alignment between the desire to make lots of money through technology and to save humanity through technology.
That there was a way.
to both make a lot of money and make the world a much, much better place by bringing about this sort of inevitable science fictional future.
But science fiction is at the heart of it.
It really is.
Exactly.
Yeah, yeah, yeah, yeah.
Science fiction is definitely where the ideas come from, right?
It's where Musk gets the idea that we need to go to Mars.
It's where Bezos gets the idea that we need to go out into space.
It's where Altman gets the idea that superintelligent AI is inevitable.
So let's talk about the worldview that underpins these techno-utopian dreams, and we'll dive into the actual plans.
You wrote that Silicon Valley is awash in what you call an ideology of technical salvation, which is both sprawling and ill-defined.
So define it for us.
What's the ideological through line that connects these disparate companies and personalities?
Yeah, I mean, basically, there's this belief that perpetual growth is possible and that through perpetual growth, you know, they will be able to solve all problems with technology and transcend all limits.
And those three things, the growth, the reduction of all problems to technological problems, including like political problems, social problems, you know, problems that are sort of inherently non-technical still supposedly could be solved with technology.
And, you know, I'm still waiting for somebody to explain to me, you know, okay, how do you solve, you know, the crisis in the Middle East with technology?
Like, that's not.
Right.
Or poverty or anything else.
Or poverty or, yeah, inequality.
One of the things is that a lot of observers have described it, and I have described it, as libertarian lite, because I don't even think they're full libertarians; it's mostly "leave me alone."
And I'll never forget Bill Gates basically saying that in the 90s, like just leave me alone.
I think that's what I got.
I know better.
Mostly leave me alone.
But you're describing something much more elaborate and really insidious in a lot of ways.
Yeah, yeah.
It's like taking that leave-me-alone idea and mixing it with this belief that science fiction is a roadmap, and that you really can use technology to solve all of these problems and then transcend all of these limits, right?
Transcend mortality, transcend our existence here on Earth by going out into space, transcend conventional morality and legal limits, right?
Leave me alone.
I'm going to go to space.
I'm going to do whatever I want.
I'm going to live forever.
And anybody who comes with me will also live forever.
Let's get to these grand schemes themselves.
We'll start with going to Mars since it's easy to understand.
Elon Musk has built a whole personality around the idea that humans must become interplanetary in order to survive in the long term.
Broadly, probably not.
The way they would put it is there's a non-zero chance we're going to get hit by an asteroid.
That's essentially their argument, or the Earth is going to collapse and we need to keep humanity going.
That's their basic argument.
Why do you find the idea of occupying Mars implausible?
And if so, if it's so unrealistic, why do tech billionaires believe that going to Mars is not only doable, but a moral imperative?
You know, Musk wants us to go to Mars as a backup for humanity in case an asteroid hits Earth.
Mars gets hit by more asteroids than Earth does because it's closer to the asteroid belt.
It's a terrible place, right?
The radiation is too high, the gravity is too low, there's no air, and the dirt is made of poison.
And that's not even like a full list of all of the problems with Mars.
You know, if you were on the surface of Mars without a spacesuit, you would die almost instantly.
You know,
you would asphyxiate as the saliva on your tongue boiled off because the air pressure is less than 1% that of Earth and there's no oxygen.
And if you were in a spacesuit hanging out on the surface of Mars, you would still die, like assuming you had all the food and water you needed, you'd still die in a few years because the radiation levels are way, way, way too high.
Because the things that protect us from radiation here on Earth, the Earth's magnetosphere and atmosphere, Mars doesn't have those.
And so you'd have to live underground in pressurized tunnels, somehow keep all of that toxic dust out of your habitat.
Musk wants to terraform Mars.
We don't have the technology to do that.
The schemes that he has proposed for doing it absolutely do not work.
He's been told that over and over again, and he just denies it.
Let me give you a pushback.
What if he's Christopher Columbus, right?
Don't go, you know, you're going to fall off this thing of the Earth.
But, you know, that I've heard that from them.
Yeah, absolutely.
Okay.
So, so a couple of things.
First of all, like the myth of Christopher Columbus, you know, proving the earth is round, that's a myth, right?
That's correct.
That's what I wanted you to say.
Yeah, go ahead.
Yeah, exactly.
Yeah.
People at the time knew that the earth was round.
And the reason people were pushing back on Columbus's scheme was not that they thought the earth was flat and that he'd fall off, but that they actually knew how big the earth was, because that's also something we've known for a very long time.
And so they said: you're going to starve if you go that way. You're not bringing enough provisions; you are going to starve before you get to Asia. And he would have starved if the Americas had not been there. But the Europeans didn't know that the Americas were there, right? They thought it was a big ocean. Columbus had an inaccurate estimate of how large the earth was. He got very, very lucky that the Americas were there.
And then, you know, went and killed off an enormous number of people, to the point where, like, even by the European standards of his time, people thought that he was being incredibly brutal.
And so, you know,
you could say that the only thing that Musk and Columbus have in common is that they're both horribly racist.
And also, with all of the difficulties that Columbus faced, what he wanted to do was still much, much, much, much easier than what Musk is trying to do.
Than going to Mars.
Musk doesn't really know anything about space.
He doesn't know anything about Mars.
If he did, he would know that everything he has said about Mars is a complete fantasy.
It has to do with H.G. Wells and everything else.
So let's go to AGI, artificial general intelligence.
It's an amorphous concept that more or less means reaching the point where AI can outperform humans at any task.
Doomers believe AI alignment is the single most important issue facing humanity.
If we achieve AGI and its goals aren't aligned with ours, it will kill us. If you ask the Zoomers, like Sam Altman, AGI will essentially solve all of humanity's problems.
First, explain AGI and the idea of singularity becoming foundational in the techno-utopian projects.
And lately, they've been doing it a lot.
They seem to be like on an extra dose of ketamine because they've just been going on and on about AGI recently.
And second, why are you skeptical of the entire premise behind AI?
You don't even think we should call it intelligence.
Yeah, yeah.
I mean, okay, so
AGI, artificial general intelligence, is notoriously difficult to define, which is part of the problem, right?
The sort of vague definition that's usually given is, you know, an AI that can do everything that a human can do, or that has human-level intelligence.
I think the real definition, the true definition is AI like we have in science fiction.
Which would be Jarvis or whatever it happens to be.
Yeah, Jarvis, Commander Data, HAL, whatever.
You know, like if you take a look at the OpenAI charter,
they have a definition of AGI in there.
And the definition that they use is something that can reproduce any economically productive activity that humans engage in at a human level.
And like, okay, first of all, that's still pretty vague.
And second, economically productive?
Why is that the measure?
Like, there's so many important things that we do that are not economically productive.
Like, I don't know, you know, having a long conversation with a friend.
But the dream is still this dream of AGI and singularity, the idea that once you get to AGI, it will then be able to design an even better and smarter, more intelligent AI, and then that will design an even smarter one and so on and so forth in short order.
And so once you get to AGI, you very quickly get to super intelligent AGI that is smarter than all of humanity combined.
And I'm skeptical of this in part because there's no sign that anything like that is on the horizon.
You know, these generative AI tools are interesting.
Yeah, they're interesting.
They can do some interesting things, but they have so far proven to be pretty bad at almost everything that people have tried to sell them to us for.
Some things they're good at.
Yeah, some things they're good at.
Yeah.
The easy stuff is certainly, they're certainly better.
It's like a mimeograph machine versus a computer thing.
It's just like, that's better.
The computer thing is better than a mimeograph machine.
Yeah.
That to me is the advances, the so-called advances.
Yeah, I mean, like, it's better at stringing together coherent sentences, and it can be useful for solving certain well-defined scientific problems like protein folding.
It can make things faster.
There are positive things, but what you're talking about is a super intelligence.
Yeah.
Based on our intelligence, it just becomes super. You start with our intelligence, meaning our intelligence is going to make a more intelligent intelligence, right?
So they're starting with us, not something else.
Yeah, yeah.
They want it to be as smart as we are and then, you know, move beyond.
Right.
And like, it's not, it's clearly much worse than humans at almost everything right now.
Does it have to be?
Is it, you know... because look, the early internet was pretty glum, and then it was okay, and then it was better and better.
Yeah, but that was mostly about, like, people putting stuff on the internet, about people learning how to use the internet better, and, you know, also the continuation of Moore's Law, right? You know, the continued increase in power of computers, and cheaper computers, made it possible to put more computationally intensive stuff on the internet, like video, right?
And that's part of what made it better.
Moore's Law is over.
The chips are not going to be getting appreciably smaller and faster ever, because we've already hit, like, the atomic limits.
You can't make the transistors really much smaller than they already are.
And this is exactly what Gordon Moore said was going to happen.
He said Moore's Law is going to end sometime in the 2020s, and here we are.
So, when you think about AGI, where do you put it right now as a tool, right?
A possibly better version of the internet, a more steroid version of the internet, subject to the abuse and subject to good stuff.
Yeah, I think that the AI stuff that we have right now is an interesting tool that was built in ways that are seriously troubling and that has an enormous carbon footprint, an enormous human cost to the training.
They stole a lot of content in order to train them up in the first place.
But even if you put all of that stuff aside, you are left with something kind of interesting that can make certain tasks easier, like, say, writing code.
It is good for that.
We'll be back in a minute.
So one of the things you write in the book, quote, lurking underneath all the dreams and desires and resentment of the tech billionaires lies a fear of death and a final loss of control.
So you've latched on the idea called transhumanism, a sort of secular religion that says we can transcend our biological bodies and upload our consciousness into the AI.
One of the people working on this is Sam Altman.
There's many others.
There's lots of them doing it.
And you can see it physically manifest in someone like Jeff Bezos, for example.
How seriously do tech billionaires actually take this idea?
And how does it shape the political and moral assumptions they make?
They do seem to take it very, very seriously.
This is, I think, a lot of the idea, not just behind the AI companies, but behind companies like Neuralink.
This is Musk's Neuralink.
Yeah, Musk's company, Neuralink.
But there's others.
There's others.
There are, but that's like the most notorious one.
The idea there is to, you know, bridge the gap between computers and the brain.
This is also part of why I am so skeptical about the idea of AGI.
The brain is not a computer.
You know, a lot of this stuff is premised on the idea that the brain is a kind of computer.
And it's not.
It's just, it's not.
It's an evolved organ.
But I think that there is like a real faith in the idea that you can transcend these biological limits, which is like the main project of transhumanism, the idea that we don't have to be confined to our bodies and their limits as they are now.
That we can upload our minds into computers or make ourselves into cybernetic organisms and greatly extend human lifespan, go out into the cosmos and colonize the universe.
And all of that's just pure fantasy.
What's one that you're like, okay, this is interesting?
This is some, what they're working on?
So I think that basic brain-computer interfaces actually kind of are interesting, in that, like, they could in theory allow people to regain capabilities that they lost due to some sort of accident or injury. Like, you know, if they can't move their legs, a brain-computer interface could maybe let them control a wheelchair more efficiently or something like that. Or if they're quadriplegic, a brain-computer interface might allow them to control a substitute for their hands or something like that. There is some work in there that has shown some promise.
I think that's cool, right?
You know, and I think that that's useful.
The problem is then taking that step and saying, okay, and then once we do that, we'll be able to, you know, upload the entire brain into a computer.
That's just nonsense.
Right.
And it's not going to happen.
Now, effective altruism is the idea that a good way to make the world a better place is to make a lot of money and give it away to worthy causes.
And long-termism is the idea that we have a moral obligation to future generations.
Both seem fairly benign, not like the grandiose plans we've been discussing, but you argue they've created a toxic self-serving philosophy that justifies extreme inequality.
Walk us through your reasoning.
Yeah.
So first of all, yeah, it sounds good.
We should care about future generations.
We should try to put more money toward worthy causes.
But the devil always is in the details, right?
First of all, relying on philanthropy has its own problems.
Billionaire philanthropy is democratically unaccountable.
It's an exercise of power.
It would be much, much better if we could fund worthy causes through some democratically accountable means, like government funding.
And that way, we can all collectively make decisions together about where the money should go.
Of course, we've seen that Musk doesn't care about that and has cut USAID.
He doesn't give anything away.
You don't have to worry about that.
Yeah, of course, he doesn't care.
Yeah.
But on the other hand, I just described effective altruism, right?
So what's the problem?
The problem is that, first of all, there are some problems that you can't just solve by throwing money at them, right?
If you want to create like systemic change and address problems like, oh, I don't know, massive wealth inequality,
you can't just throw money at that problem.
You have to like commit to systemic change in some way.
But the other thing is
there is a utilitarian philosophy that comes along with effective altruism.
The idea that what we need to do is make the most happiness and reduce the suffering the most in the world.
And that, you know, this is something that can be quantified.
And this leads the effective altruists, and especially this influential subgroup within that movement, the long-termists, to the idea that what we really need to do is ensure that there are as many people in the future as possible living lives that are at least barely worth living.
And so this creates what one of the leaders of that movement, Will MacAskill, called a moral case for space settlement, which, again, is nonsense.
That's not happening.
And it also leads them to prioritize what they call existential threats to humanity and human civilization over other pressing problems.
Then you get into questions like, okay, what counts as an existential threat and which existential threats are more pressing?
And they have a very bad track record of answering these questions well.
Give me an example.
Yeah, yeah.
An example: Toby Ord is a leader in this effective altruist movement who has pushed long-termism. And he came up with estimates of the probability of different existential threats causing either the extinction of humanity or unrecoverable collapse of human civilization in the next hundred years.
And you know, if you asked me to make a list like that, or if you asked, I think, most experts in the subject to make a list like that, top of the list would be things like global warming, nuclear war, right?
Maybe a pandemic.
And Ord does rate pandemics pretty highly, especially an engineered pandemic.
And that seems reasonable.
But at the top of his list is the threat of a super intelligent AGI wiping out humanity.
And he rates that as 50 times more likely than collapse or extinction from climate change and nuclear war combined.
And when I asked him why, his answer was essentially, oh, I made those numbers up.
It was my best guess.
Right.
And the man is an Oxford philosopher, right?
That gives him a platform, right?
Power and influence.
He has been advising UK parliament on AI issues.
And I think it's really irresponsible for him and others in that movement to make these claims based on very, very little.
No, Adam, they make a lot of things up.
That's been my history with them.
So let me pull something up you wrote.
You can read it out loud.
Then I'll ask you a question about it.
Sure.
Silicon Valley's heartless, baseless, and foolish obsessions with escaping death, building AI tyrants, and creating limitless growth are about oligarchic power, not preparing for the future.
The giants of Silicon Valley claim that their ideas are based on science, but the reality is darker.
They come from a jumbled mix of shallow futurism and racist pseudoscience.
How did eugenics end up driving so many of their grand schemes?
And do they really grasp how deeply racism underpins their plans?
Because most of them would strongly reject the notion that they're racist.
Yeah, well, but I think most racists will strongly reject the idea that they're racist, right?
It doesn't mean they're not racist.
I don't know if, say, Marc Andreessen, just to pick one of these guys, understands how deeply enmeshed his worldview is with eugenics and racism, but it is.
Explain that.
Give me an example.
Yeah.
So,
for example, just the idea of intelligence.
Just the markers that they use for intelligence, the idea that IQ is a good measure of intelligence, which you'll see over and over again in the writings of these billionaires and the subcultures that they fund.
You know, IQ is not a measure of inherent intelligence.
There is not, as far as we know, a single number that you can call, you know, intelligence.
And yet, you know, the notion that IQ is really, like, deeply important is a racist notion, because IQ is, you know, not actually measuring intelligence.
It's been shown over and over again to have cultural biases.
Another example, and maybe this is a more immediate and direct one: if you go and look at, like, Musk's plans for Mars, he talks about backing up humanity on Mars.
Like who makes the decision about like who gets to go to Mars?
Who gets to decide who is worthy of going to space?
What cultures and ethnicities get to be backed up on Mars?
The space program historically has excluded a lot of people and, you know, has favored people who look like me.
Seems to be mighty white, is what you're saying.
Yeah.
Talk a little bit about how they reject the idea that they're racist. The claim that they're a meritocracy is typical of their arguments.
Yeah, exactly.
But then, you know, you come back to the question: okay,
you're a meritocracy.
How are you measuring merit?
I think that they believe
that being racist means that you want to be mean to people of a different color to their face by like using slurs.
And like that's not what racism is, right?
Racism is when you reinforce a system of repression against people of a certain race.
And like that's what these guys are doing very explicitly.
So let's talk about the consequences, the real-world consequences of these fantasies.
Google just released its yearly environmental report.
It says that emissions have gone up 50% since 2019.
A separate report by an advocacy group actually found Google's emissions had increased by 65% during the same period.
And Google reports its electricity consumption from data centers has doubled since 2020.
AI is clearly using a tremendous amount of energy.
They're obviously talking about using nuclear facilities and everything else.
But if you ask some AI proponents, they say artificial intelligence will come up with a solution to global warming.
You say global warming requires social and political solutions, not technical solutions.
There is probably a combination of these things, but talk a little bit about that, the energy usage, because it's off the charts at this point.
Yeah, it is.
I mean, just the amount of energy needed to run these generative AI systems is truly enormous. Like, just one statistic: with the AI-powered Google search, one search query takes 10 times as much energy to answer now as it did before they integrated generative AI into the search results.
Gemini.
And I think most people are annoyed that they did that.
It doesn't make the search better.
It made it worse.
We all want old Google back.
And so they are expending 10 times as much energy to make their product worse.
So here's something that's not in my book because it happened too late for me to put it into my book.
Eric Schmidt, you know, former CEO of Google, tech venture capitalist, billionaire.
He said, in I think October, that we're not going to meet our climate goals anyway, so we should use more energy and more resources and pour them into AI, so we can get to superintelligent AGI.
And then that will tell us how to solve global warming.
You left out the faster than the Chinese.
That's usually stuck in there somewhere.
Right.
Yes.
There's the faster than the Chinese part as well.
Yeah.
So essentially, it's a problem we're never going to solve, and therefore, we should just use more energy to find a solution to the problem technologically.
That's the circular logic.
It's a circular logic.
And we don't need much more by way of technological solutions to solving global warming.
At this point, the primary barriers to solving the climate crisis are social and political, not technological, right?
Like we have cheap, clean energy.
We just need to, you know, get through the various barriers to deploying it.
And a lot of those have to do with government subsidies and interest groups and whatnot.
And, you know, that's not a technological problem.
That's a problem of persuasion and politics.
Yeah, I'm not surprised he said that.
My favorite nickname for him is that fucking guy.
We used to have a thing at Code where he would say crazy stuff and we had a ball gag, you know, a red, you know, a ball gag.
You ever seen them?
They're sexual.
But every time he said something dumb, we put the picture of him with a ball gag and then whatever he said.
And we'd say, that fucking guy talked again.
So we had to put the ball gag on him.
I mean, look, you know, I had an idea in my head of what my book was actually titled, rather than More Everything Forever.
I thought of it as these fucking people.
These fucking people.
Yeah.
So.
It would have been a good title. Maybe that's my next book.
So the burn book is kind of these fucking people.
Yeah, exactly.
So at its core, the book is saying that the More Everything Forever mentality leads to less real life for regular people.
But AGI, colonizing Mars, and transhumanism seem so far off, in fact, that it's not obvious how these far-off projects really do affect the public.
So, does wasting money and energy on them simply exacerbate existing problems like racism, income inequality, global warming, or do they create new intractable problems?
I mean,
I think that it's a little of both, right?
Like, I think
that
certainly these, you know, high-flying ideas that don't work have created cover for these billionaires to amass more power and wealth.
And that's, you know, not just exacerbated existing problems, but sort of like the amount of power and wealth that they have at this point is so extreme that I would argue it's created like a sort of new kind of problem, right?
Just because it's such an extreme concentration of wealth and power.
And so, you know, they are able to, like, openly support fascism and, you know, still
go about doing their business in ways that
would have previously been unthinkable if they'd taken those stances even just a few years ago.
So as we discussed, you think tech billionaires have an authoritarian worldview.
They do.
And many of them have embraced President Donald Trump, who seems like an aspiring authoritarian at the very least.
Do you see Trump and the tech industry moving beyond standard Republican deregulation and working together on an actual authoritarian project?
It's unclear because of the breaks that are happening rather quickly.
Yeah.
What would it look like if that was the case?
I think it's over at the moment.
I think it's over already, personally, because they've squeezed the lemon as much as they can on some levels.
But do you see it continuing?
And who is the more dangerous authoritarian group, the tech people or Donald Trump?
Yeah, it's a good question.
Which one's more dangerous?
I don't know.
They're dangerous in different ways, right?
Like Trump is dangerous in all of these very obvious ways of like eroding and destroying confidence in democracy, democratic institutions, guardrails, eating an entire political party.
But the tech billionaires are going to be with us for longer.
And not just because they're younger, but because they're unelected, right?
And like, yes, Trump is trying to, you know, transform America into an authoritarian state and he may succeed.
You know, he's already succeeded in a lot of ways that are horrifying.
But ultimately, there is hope that he can be stopped or those changes can be halted and reversed through organizing and at the ballot box.
Doing that with the billionaires is a lot harder.
So I feel like the tech billionaires, like if I had to pick one, are the bigger problem because they're going to be with us for longer.
We'll be back in a minute.
In the time it takes us to say we're using Folgers Instant Coffee, seamlessly blended with water and ice, a splash of whatever kind of milk is your thing, and gotta get that caramel drizzle.
All to make a toasty, roasty, caramel iced coffee.
You could be enjoying it.
Every damn
sip of it.
Damn right.
It's Folgers Instant.
Are you ready to dairy free your mind?
This summer, melt away your dairy-free expectations with So Delicious Dairy-Free Frozen Desserts.
Enjoy mind-blowing flavors like salted caramel cluster, chocolate cookies and cream, cookie dough, and more.
For over 35 years, So Delicious has been cranking up the flavor with show-stopping products that are 100% dairy-free, certified vegan by Vegan Action, and are so unbelievably creamy, your taste buds will do a double take.
Dairy-free your mind.
Visit sodeliciousdairyfree.com.
Running a business comes with a lot of what-ifs.
But luckily, there's a simple answer to them: Shopify.
It's the commerce platform behind millions of businesses, including Thrive Cosmetics and Momofuku, and it'll help you with everything you need.
From website design and marketing to boosting sales and expanding operations, Shopify can get the job done and make your dream a reality.
Turn those what-ifs into why-nots.
Sign up for your $1-per-month trial at shopify.com/specialoffer.
So every interview, we get an expert to send us a question for our guests.
Now let's hear yours.
Hi, I'm Cory Doctorow.
The big question I would ask is that sometimes technical breakthroughs really do change the game, whether that's antibiotics or packet switching or other more modern inventions.
Obviously, everyone who comes up with a technical idea wants to market it as one of these game changers and not some little incremental effect.
I guess what I would ask is, how do we know when someone has got one of these big game-changing ideas?
And how do we know when they're just tinkering in the margins?
And how do we assess those claims?
Yeah.
No, that's a really good question.
And thank you to Cory for asking it.
Part of why that's a good question is the real honest answer has to be, we can never be completely sure.
But there are some signs, right?
And to me, the most reliable answer to that question, which is not always going to be right, but is often right, is this:
The real breakthroughs tend not to be hyped right out of the gate.
They tend to be, hey, we might have something interesting here.
You know, we've got this very interesting looking result in this Petri dish, and we're not sure.
but it seems like it may be killing off bacteria.
We've got this interesting result with silicon, where it seems like you might be able to use it as a semiconductor, but we're really, we're not sure.
Of course, there are
examples of real game-changing technologies that were hyped straight out of the gate.
Electricity.
Yeah.
Although, even electricity, it took a while to develop, right?
You know, you can't use this as a hard and fast rule.
But the other thing is,
I would say
that the most reliable guide is also the hardest thing to do, right?
The hard answer to the hard question is you look at it skeptically.
Always, yeah.
And you say, okay,
sure.
Can it really do this?
Are we sure?
And, you know, with something like electricity, the answer was relatively clear early on.
Oh, yeah, this is actually extremely promising.
And, you know, the same thing with, say, nuclear power.
Whereas with a lot of these technologies that I've been ragging on, a skeptical look makes them look less likely, not more likely.
Less than interesting.
Now, you've also said, quote, we don't need more Elon Musk.
We need at least one fewer Elon Musk, which is funny.
Putting aside the last few years and his descent into far-right politics for the moment, don't we also need these creative geniuses who push the boundaries of what's possible?
The risk-takers in fields like electric vehicles, reusable rockets, satellite internet, the 21st-century equivalents of Thomas Edison, Nikola Tesla, Henry Ford, Alexander Graham Bell, some of them are obviously deeply flawed.
Um, Henry Ford principal among them.
There's a point where we do need those inventions.
And I would say Musk really does get credit for pushing forward electric vehicles.
He didn't invent it, but he, same thing with Steve Jobs, right?
Very much pushing forward, not the inventor, yet critical.
Would you rather live in a world without these inventions?
I'm playing Devil's Advocate here.
I just don't think that that's the choice that we're facing.
Okay.
Like you said, Musk didn't invent the electric car.
Sure,
he pushed it forward, but Tesla existed before he came along.
There are other electric car companies.
It was the kind of thing that was going to happen with or without him.
I would argue that for most of these tech billionaires, really all of them, insofar as they themselves created these innovations at all, rather than, you know, just being the person at the helm of a company that did, those were things that would have happened without them.
Inevitable.
Although I would say, without Elon, Tesla was going to be another traffic accident.
That's probably right.
Yeah.
And it was a question of someone who could push it through with such risk-taking.
Risk-taking is one of his best qualities, actually.
Sure.
But there's a difference between risk-taking and recklessness.
Right.
Correct.
And he's crossed over.
Yeah, most certainly.
I also think that,
yeah, okay, maybe Tesla would have gone down without him, but there are ways of pushing that kind of technology without being the kind of
monster that Musk is.
Yeah.
They often come hand in glove, unfortunately.
So toward the end of the book, you write, for example, the fact that our society allows the existence of billionaires is a fundamental problem at the core of this book, and you propose a 100% wealth tax on personal net worth over $500 million.
Now, Mamdani just noted this, and everyone lost their ever-loving minds.
Is the real problem Silicon Valley's ideology of technological salvation, or is it capitalism itself?
If tech weren't the dominant industry right now, if it were agriculture, oil, all big industries, shipbuilding, coal, name any of them, would we be dealing with the same core issues of exploitation?
We did, I think, in each of these areas for a long time.
This is not new; it could be just a different twist.
Yeah.
And talk about the not having billionaires.
Yeah, no, I think that there is something in common here with, you know, all of these other industries that have dominated society at various times.
I do think that there is something sort of unique about the kinds of narratives that are spun by the tech industry.
I don't seem to recall, like, you know, the 1980s Masters of the Universe on
Wall Street and in the financial industry claiming that what they were doing was bringing about a permanent utopia for humanity.
No, they didn't.
No.
Yeah.
And while those sorts of billionaires and other billionaires in other industries have often had really weird ideas, they have been not of the same kind.
Of the religious kind.
Yeah, exactly.
They're very religious in a weird way.
Yeah, I mean, but not having billionaires.
Look, you know, I think that a lot of what's happened in this country over the last,
at this point, 10 years, has shown, like, the kinds of risks that we as a society take by having billionaires.
By allowing that kind of concentration of wealth, it erodes the democratic fabric of the country.
And at this point, our democracy is in mortal danger and may already be lost.
Okay.
And that's awful.
So you imagine that passing?
No, no.
Yeah.
I mean, I think everyone wants to be a billionaire.
I think that everyone wants to be Superman, too.
Right.
But nobody actually thinks that they're going to gain superpowers.
Right.
Right.
And if everyone's Superman, no one's a Superman.
Right.
So the book is full of real-life characters who exude hubris, powerful men who want to summon godlike powers to create new worlds and escape death.
And it's all inevitably doomed.
It's right out of Greek tragedy or myth, essentially.
In this metaphor, the tech billionaires are like Icarus.
They've gotten humanity strapped to their backs, so if they fly too close to the sun, we'll all go down with them.
So let's give us reasons for hope and optimism.
I hate to say that to you because it's not an optimistic book, I would say.
But what is your most optimistic?
Is it that we're on to them, or that they will die?
Or how do we build safeguards?
So there's a better ending to this story.
I mean, I think that the fact that we are onto them is actually really important and optimistic, right?
You know, like
there was for a long time a narrative about
Musk, like a story about Musk in our society, that he was this great genius who was going to save us all.
And I never bought that, but a lot of people did.
And for a long time, people were confused by the fact that I didn't like Elon Musk.
And they would say, but Adam, you know, you're an astrophysicist.
You like space.
Why don't you like Elon Musk?
He loves space too.
And I would say, no, I really don't think he does, but
not the real stuff.
He has this fantasy.
You know, the fact that now there is widespread
distrust and distaste for Musk and
most tech billionaires, I think, is actually very hopeful because that's the first step that we need to, you know, make the changes that we have to make if we are going to, you know, save our democracy and safeguard it from these tech oligarchs.
When people ask me, what hope is there?
The answer that I generally give is like we have to organize against these people.
And part of the reason that answer always feels kind of unsatisfying, I think, is it sounds boring.
It sounds unsexy.
And it's like this boring, unsexy solution to a big looming problem that feels larger than life and intractable.
But I think that like the history of
politics and the history of humanity has shown that often it is boring, unsexy solutions that win out and actually solve our greatest problems, right?
Because there is, for example, something very boring and unsexy about developing a vaccine, right?
There's something very boring and unsexy about like doing the administrative work you have to do to build like a healthy welfare state.
There is something boring and unsexy about, like, building a better computer.
And yet these things can solve real problems and have solved real problems.
Right.
That's a very good answer.
Thank you.
It's a very difficult thing because, as you note, the money and the power, because they go on and they never stop and they never change, which is real, and they get worse in many ways.
Musk is the perfect example of that.
But the others are, I think, more dangerous.
I think Musk is just more troubled and has other issues going on.
But someone like a Bezos or
Zuckerberg in particular, I called him the world's most dangerous man for a reason.
And I stand by that to this day.
And part of it is ignorance, which is really difficult, you know, ignorance and ineffectiveness and lack of expertise.
And I think, you know, on some level, you appreciate, you know, if you're very wealthy, you give away money, but in a lot of ways, it comes with such a price.
And the learning curve for them is so high.
At one point, I wrote a piece called The Expensive Education of Mark Zuckerberg, and I meant at our expense, not his.
And I think that's where we are, unfortunately.
But I agree with you when being onto them is the beginning of the steps of doing so.
Even if you have hope that some of their things can help us all in some way.
Anyway, I really appreciate it, Adam.
This is a great book and everybody should read it.
It's called More Everything Forever.
And it's, you don't want more.
You don't want everything and you don't want it forever.
But I appreciate it.
Thank you, Kara.
On with Kara Swisher is produced by Christian Castor-Rousselle, Kateri Yoka, Megan Burney, Allison Rogers, and Kaylin Lynch.
Nishat Kurwa is Vox Media's executive producer of podcasts.
Special thanks to Skylar Mitchell.
Our engineers are Rick Kwan and Fernando Aruda, and our theme music is by Trackademics.
If you're already following the show, you get a healthy welfare state.
If not, you're a small, stupid troll.
Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
And don't forget to follow us on Instagram, TikTok, and YouTube at On with Kara Swisher.
We'll be back on Thursday with more.