The Jordan B. Peterson Podcast

515. Ethics, Power, and Progress: Shaping AI for a Better Tomorrow | Marc Andreessen

January 16, 2025 1h 42m Episode 515
Dr. Jordan B. Peterson sits down with entrepreneur and software pioneer, Marc Andreessen. They discuss the timeline of the woke institutional takeover, the ruinous effects it has had on Western ideology and business, the ways in which AI will shape society, and the immense responsibility we have to instill the future with an ethos and morality that serves human flourishing. Marc Andreessen is a cofounder and general partner at the venture capital firm Andreessen Horowitz. He is an innovator and creator, one of the few to pioneer a software category used by more than a billion people and one of the few to establish multiple billion-dollar companies. Marc co-created the highly influential Mosaic internet browser and co-founded Netscape, which later sold to AOL for $4.2 billion. He also co-founded Loudcloud, which, as Opsware, sold to Hewlett-Packard for $1.6 billion. He later served on the board of Hewlett-Packard from 2008 to 2018. Marc holds a B.S. in computer science from the University of Illinois at Urbana-Champaign. Marc serves on the board of the following Andreessen Horowitz portfolio companies: Applied Intuition, Carta, Coinbase, Dialpad, Flow, Golden, Honor, OpenGov, Samsara, Simple Things, and TipTop Labs. He is also on the board of Meta. This episode was filmed on December 18th, 2024. | Links | For Marc Andreessen: On X https://x.com/pmarca?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor Substack https://pmarca.substack.com/ “The Techno-Optimist Manifesto” (Book) https://a16z.com/the-techno-optimist-manifesto/


Full Transcript

This movement, you know, that we now call Wokeness, it hijacked what I would, you know, call sort of, at the time, you know, bog-standard progressivism. But, you know, it turned out what we were dealing with was something that was far more aggressive.
You're pouring cultural acid on your company and the entire thing is devolving into complete chaos. It's also, I think, the case that the new communication technologies have also enabled reputation savagers in a way that we haven't seen before.
The single biggest fight is going to be over what are the values of the AIs. That fight, I think, is going to be a million times bigger and more intense and more important than the social media censorship fight. As you know, out of the gate, this is going very poorly.

Stop there for just a sec, because we should delve into that. That's a terrible thing.

Hello, everybody.
So I had the opportunity to talk to Marc Andreessen today, and Marc has been quite visible on the podcast circuit as of late. And part of the reason for that is that he's part of a swing within the tech community back towards the center.
And even more particularly, under the current conditions, toward the novel and emerging players in the Trump administration. Now, Marc is a key tech visionary.
He developed Mosaic and Netscape, and they really laid the groundwork for the web as we know it. And Marc has been an investor in Silicon Valley circles for 20 years and is as plugged into the tech scene as anyone in the world.
And the fact that he's decided to speak publicly, for example, about such issues as government-tech collusion, and that he's turned his attention away from the Democrats, which is the traditional party, let's say, of the tech visionaries, who are all characterized by the high openness that tends to make people liberal. The fact that Marc has pivoted is, what would you say, an important event, maybe as important as Musk aligning with Trump.
And so I wanted to talk to Marc about his vision of the future. He laid out a manifesto a while back called the Techno-Optimist Manifesto, which bears some clear resemblance to the Alliance for Responsible Citizenship policy platform, that's ARC, which is an enterprise that I'm deeply involved in.
And so I wanted to talk to him about the overlap between our visions of the future and about the twists and turns of the tech world in relationship to their political allegiance and the transformations there that have occurred, and also about the problem of AI alignment, so to speak. How do we make sure that these hyper-intelligent systems that the techno-utopians are creating don't turn into like cataclysmic, apocalyptic, totalitarian monsters? How do we align them with proper human interests? And what are those proper human interests? And how is that determined? And so we talk about all that and a whole lot more.
And so join us as we have the opportunity and privilege to speak with Marc Andreessen. So Marc, I thought I would talk to you today about an overlap in two of our projects, let's say, and we could investigate that.
There should be all sorts of ideas that spring off that. So I was reviewing your techno-optimist manifesto, and I have some questions about that and some concerns.
And I wanted to contrast that and compare it with our ARC project in the UK, because I think we're pulling in the same direction. And I'm curious about why that is and what that might mean practically.
And I also thought that would give us a springboard off which we could leap in relationship to, well, to the ideas you're developing. So there's a lot of that manifesto that for whatever it's worth, I agreed with.
And I don't regard that as particularly, what would you say, important in and of itself. But I did find the overlap between what you had been suggesting and the ideas that we've been working on for this Alliance for Responsible Citizenship in the UK quite striking.
And so I'd like to highlight some similarities and then I'd like to push you a bit on some of the issues that I think might need further clarification. That's probably the right way to think about it.
So for this ARC group, we set it up as, what would you say, a visionary alternative to the Malthusian doomsaying of the climate hysterics and the centralized planners, because that's just going nowhere. You can see what's happening to Europe.
You see what's happening to the UK. Energy prices in the UK are five times as high as they are in the United States.
That's obviously not sustainable. The same thing is the case in Germany.
Plus, not only are they expensive, they're also unreliable, which is a very bad combination. You add to that the fact, too, that Germany's become increasingly dependent on markets that are served by totalitarian dictatorships, essentially.
And that also seems like a bad plan. So one of our platforms is that we should be working locally, nationally, and internationally to do everything possible to drive down the cost of energy and to make it as reliable as possible, predicated on the idea that there's really no difference between energy and work.
And if you make energy inexpensive, then poor people don't die, because any increase in energy costs immediately demolishes the poorest subset of the population.

And that's self-evident as far as I'm concerned. And so that's certainly an overlap with the ethos that you put forward in your manifesto.
You predicated your work on a vision of abundance and pointed to, I noticed, for example, you quoted Marian Tupy, who works with Human Progress and has outlined quite nicely the manner in which, over the last 30 years, especially since the fall of the Berlin Wall, people have been thriving on the economic front, globally speaking, like never before. We've virtually eradicated absolute poverty, and we have a good crack at eradicating it completely in the next couple of decades if we don't do anything, you know, criminally insane.
And so you see a vision of the future where there's more than enough for everyone. It's not a zero-sum game.
You're not a fan of the Malthusian proposition that there's limited resources and that we're facing either, what would you say, a future of ecological collapse or economic scarcity or maybe both. And so the difference, I guess, one of the differences I wanted to delve into is you put a lot of stress on the technological vision.
And I think there's something in that that's insufficient. And this is one of the things I wanted to grapple with you about.
Because, you know, there's a theme that you see, a literary theme. There's two literary themes that are in conflict here, and they're relevant because they're stories of the psyche and of society in the broadest possible sense.
You have the vision of technological abundance and plenty that's a consequence of the technological and intellectual striving of mankind. But you also have juxtaposed against that the vision of the intellect as a Luciferian force and the possibility of a technology-led dystopia and catastrophe, right? And it seems to hinge on something like how the intellect is conceptualized in the deepest level of society's narrative framing. So if the intellect is put at the highest place, then it becomes Luciferian and leads to a kind of dystopia.
It's like the all-seeing eye of Sauron in the Lord of the Rings cycle. And I see exactly that sort of thing emerging in places like China.
And it does seem to me that that technological vision, if it's not encapsulated in the proper underlying narrative, threatens us with an intellectualized dystopia that's equiprobable with the abundant outcome that you described. Now, one of the things we're doing at ARC is to try to work out what that underlying narrative should be so that that technological enterprise can be encapsulated with it and remain non-dystopian.
I think it's an analog of the alignment problem in AI. You know, you can say, well, how do you get these large language model systems to adopt values that are commensurate with human flourishing? That's the same problem you have when you're educating kids, by the way.
And how do you ensure that the technological enterprise as such is aligned with the underlying principles that you espouse of, say, free, distributed markets and human freedom in the classic Western sense? And I didn't see that specifically addressed in your manifesto.
And so I'm curious about, with all the technological optimism that you're putting forward, which is something that, well, why else? Why would you have a vision other than that when we could make the world an abundant place? But there is this dystopian side that can't be ignored. And, you know, there's 700 million closed circuit television cameras in China, and they monitor every damn thing their citizens do.
And we could slide into that as easily as we did when we copied the Chinese in their response to the so-called pandemic. So I'd like to hear your thoughts about that.
Sure. So first, thanks for having me, and it's great to see you.
I'm very influenced on this by Thomas Sowell, who wrote this great book called A Conflict of Visions, and he talks about how fundamentally there are two classes of visions of the future. He calls them the unconstrained vision and the constrained vision.
And the unconstrained vision is the sweeping, transformational, discontinuous social change. We're going to make the new man.
We're going to make the new society. We're going to have, you know, Pol Pot in Cambodia.
We're going to declare year zero. Everything that came before is irrelevant.
It's a new era. Lenin, you know, basically every revolutionary, right, wants to, you know, completely radically transform everything.
And how can you not? Because the current system is unjust and we need to achieve total justice and so forth. And so, the unconstrained vision, you know, it's classically the vision of totalitarians.
It sells itself as creating utopia. As you well know, it tends to produce hell.
In contrast, you know, he said that the constrained vision is one in which, you know, you realize that man has fallen and that we are imperfect and that, you know, things are always going to be some level of mess, but it can be a slightly better mess than it is today. We can improve on the margin.
Things can be better. People can live better lives.
They can take better care of, you know, their families. Their countries can get richer.
They can become, you know, they can have more abundance and progress on the margin. And of course, the unconstrained vision is very compatible with totalitarianism.
The Chinese Communist Party for sure has an unconstrained vision, as the Bolsheviks did before them, and the Nazis and other totalitarian movements. The constrained vision is very consistent, I think, with, you know, the long-run Western ideals of liberty and freedom and free markets.
And so one of the things I do try to say in the manifesto is I'm not a utopian. And I think utopian dreams turn into dystopia.
I think that's what you get. I think history is quite clear on that.
And then to your point on technology, I would just map that straight onto that, which is yes, 100% technology can be a tool that revolutionaries can use to try to achieve utopia slash dystopia. And for sure, the Chinese Communist Party is trying to do that.
And there are forces, by the way, in the US that also for sure want to do that. But technology is also completely perfectly compatible with the constrained vision and change on the margin and improvement on the margin, which is where I am.
I think that is 100% a human issue and a social and political issue, not a technological issue, right? Right, right. Yes, exactly.
Right. So this is sort of a little bit of the running joke right now in AI alignment.
There's this super genius of AI alignment, this guy, Roko, who's famous for this thing called Roko's Basilisk in AI alignment. So Roko's Basilisk is: you better say nice things about the AI now, even though the AI doesn't exist yet, because when it wakes up and sees what you wrote, it's going to judge you and find you wanting, right? And so he's sort of this famous guy in that field. And what he actually says now is basically it turns out the AI alignment problem is not a problem of aligning the AI. It's a problem of aligning the humans.
Right. It's a problem of aligning the humans and how we're going to use the AI.
Right. Precisely to your point.
Yes. Right.
Right. And that is one of the very big questions.
There's another book I'd really recommend on this directly to your point. Peter Huber wrote this book called Orwell's Revenge.
And famously in 1984, as you mentioned, there's this concept of the telescreen, which is basically the one-way propaganda broadcast device that goes into everybody's house from the government, top down, and then has cameras in it so the government can observe everything that the citizens do. And that is what happens in these totalitarian societies.

They implement systems like that. In the book, Orwell's Revenge, he does this thing where he tweaks the telescreen and he makes it two-way instead of one-way. And so the revolutionaries, the sort of resistance force to the totalitarian government, give it the ability to let people upload as well as download. And so all of a sudden, people can actually express themselves, they can express their views, they can organize.
And of course, then based on that, they can then use that technology to basically rise up against the totalitarian government and achieve a better society. You know, look, as you mentioned earlier, the ability to do two-way, universal two-way communication also lets you create, you know, the sort of mob effect that we were talking about and this sort of personal destruction engine.
And so there's two sides to that also. But it is the case that you can squint at a lot of this technology one way and see it as an instrument of totalitarian oppression, and you can squint at it another way and see it as an instrument of individual liberation.
I think, for sure, how you design the technology matters a lot. But I at least believe the big-picture questions are all the human questions and the social and political questions.
And they need to be confronted directly as such. And we need to confront them directly for that reason.
Right. So these are human questions, ultimately, not technological questions.
Success in business isn't just about offering an amazing product or service, though that's certainly essential. What truly sets thriving companies apart is having powerful, reliable tools working behind the scenes to streamline every aspect of the selling process.
These are the systems that turn the complex challenge of reaching customers and processing sales into something that feels effortless and natural. That's exactly where Shopify enters the picture, transforming the way businesses operate in the digital age.
Nobody does selling better than Shopify. They're home to the number one checkout on the planet.
And here's the game changer. With ShopPay, they're boosting conversions up to 50%.
That means fewer abandoned carts and more sales going to your bottom line. In today's world, your business needs to be everywhere your customers are, whether that's scrolling through social media, shopping online, or walking into a physical store.
Shopify powers it all, seamlessly connecting your business across the web, your store, customer feeds, and everywhere in between. And here's the truth.
Businesses that sell more sell on Shopify. Join over 2 million entrepreneurs who have already discovered the power of unified commerce with Shopify's all-in-one platform.
Upgrade your business to the same checkout we use with Shopify. Sign up for your $1 per month trial period at shopify.com slash jbp, all lowercase.
Head to shopify.com slash jbp to upgrade your selling today. That's shopify.com slash jbp.
Okay, okay, so that's very interesting, because that's exactly what we concluded at ARC. So one of the streams that we've been developing is the Better Story stream, because it's predicated on the idea, which I think you're alluding to now, that the technological enterprise has to be nested inside a set of propositions that aren't in themselves part and parcel of the technological enterprise, right? And then the question is, what are they? So let me outline for a minute or two some of the thoughts I've had in that matter, because I think there's something crucial here that's also relevant to the problem of alignment.

So, like you said, the problem with regard to AI might be the problem that human beings have, which is that we're not aligned, so to speak. And so why would we expect the AIs to be? And I think that's a perfectly reasonable criticism.
I mean, part of the reason that we educate young people so intensely, especially those who'll be in leadership positions, is because we want to solve the alignment problem. That's part of what you do when you socialize young people.
Now, the way we've done that for the entire history of the productive West, let's say, is to ground young people who are smart and who are likely to be leaders in something approximating the religious and humanist, religious slash humanist slash enlightenment tradition. It's part of that golden thread.
Now, part of the problem, I would say, with the large language model systems is that they're hyper-trained on, they're like populists in a sense. They're hyper-trained on the over-proliferation of nonsense that characterizes the present.
And the problem with the present is that time hasn't had a chance to winnow out the wheat from the chaff. Now, what we did with young people is we referred them to the classic works of the past, right? That would be the Western canon whose supremacy has been challenged so successfully by the postmodern nihilists.
We said, well, you have to read these great books from the past. And the core of that would be the Bible.
And then you'd have all the, what, the poets and dramatists whose works are grounded in the biblical tradition that are like secondary offshoots of that fundamental narrative. That'd be people like Dante and Shakespeare and Goethe and Dostoevsky.

And we can imagine that those more core ideas constitute a web of associated ideas that all other ideas would then slot into. You know, you could make the case technically, I think, that these great works in the past are mapping the most fundamental relationships between ideas that can possibly be mapped in a manner that is sustainable and productive across the longest possible imaginable span of time.
And that's different than the proliferation of a multiplicity of ideas that characterize the present. Now, that doesn't mean we know how to weight. So if you're going to design a large language model, you might want to weight the works of Shakespeare 10,000 times per word as crucial as, what would you say, the archives of the New York Times for the last five years. It's something like that.
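(A minimal sketch of what that kind of per-source weighting could look like in practice, assuming a toy two-source corpus; the source names and the 10,000-to-1 ratio are taken from the example above as illustrative stand-ins, not anyone's actual training recipe.)

```python
# Toy sketch of per-source weighting when sampling training documents.
import random

random.seed(42)

# Hypothetical corpus: source name -> placeholder documents.
corpus = {
    "shakespeare": ["sonnet 18 ...", "hamlet, act 1 ..."],
    "nyt_archive": ["op-ed ...", "news item ...", "op-ed ..."],
}

# Per-document sampling weights: the 10,000-to-1 ratio from the
# conversation. Whatever ratio you pick, this is where the judgment
# about canon versus present gets encoded.
weights = {"shakespeare": 10_000, "nyt_archive": 1}

def sample_document() -> str:
    sources = list(corpus)
    # Weight each source by (per-document weight x document count) so
    # the favoring applies per document, not per source as a whole.
    totals = [weights[s] * len(corpus[s]) for s in sources]
    source = random.choices(sources, weights=totals, k=1)[0]
    return random.choice(corpus[source])

# At these weights, nearly every draw comes from the canon.
batch = [sample_document() for _ in range(10)]
print(batch)
```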
Like, there's an insistence in the mythological tradition that people have two fundamental poles of orientation. One is heavenward or towards the depths.
You can use either analogy, and that's the orientation towards the divine or the transcendent or the most foundational. And then the other avenue of orientation is social.
That'd be, you know, the reciprocal relationship that exists between you and I and all the other people that we know. And if you're only weighted by the personal and the social, then you tilt towards the mad mob populism that could characterize societies when they go off kilter.
You need another axis of orientation to make things fundamental. Now, I just want to add one more thing to this that's very much worth thinking about.
So the postmodernists discovered, this is partly why we have this culture war, the postmodernists discovered that we see the world through a story. And they're right about that because what they figured out, and they weren't the only ones, but they did figure it out, was that we don't just see facts, we see weighted facts.
And the weighting system, a description of someone's weighting system for facts, is a story. That's what a story is, technically. You know, it's the prioritization of facts that directs your attention. That's what you see portrayed in a characterization on screen. Okay, now, the postmodernists figured out that we see the world through a story, but then they made a dreadful mistake, which was a consequence of their Marxism. They said that the story that we see the world through is one of power and that there is no other story than power and that the dynamic in society is nothing but the competition between different groups or individuals striving for power.
And I don't mean competence, I mean the ability to use compulsion and force, right? It's like involuntary submission. I'm more powerful than you if I can make you submit involuntarily.
Now, the biblical canon has an alternative proposition that's nested inside of it, which is that the basis of individual stability and societal stability and productivity is voluntary self-sacrifice, not power. And those two ethos, they are 100% opposed, right? You couldn't get to visions that are more disparate than those two.
Now, the power narrative dominates the university, and it's driving the sorts of pathologies that you described as having flowed out, let's say, into the tech world and then into the media world and into the corporate world beyond that. One of the things we're doing at ARC is trying to establish the structure of the underlying narrative, which is a sacrificial narrative, that would properly ground, for example, the technological enterprise so that it wouldn't become dystopian.
And you alluded to that when you pointed to the fact that there has to be something outside the technological enterprise to stabilize it. You alluded to, for example, a more fundamental ethos of reciprocity when you said that one form of combating the proclivity for top-down force, for example, in this one-way information pipeline is to make it two-way.
Right? Well, you're pointing there to something like, see, reciprocity is a form of repetitive self-sacrifice. Like, if we're taking turns in a conversation, I have to sacrifice my turn to you and vice versa, right? And that makes for a balanced dynamic.
And so, anyways, one of the problems we're trying to solve with this ARC enterprise is to thoroughly evaluate the structure of that underlying narrative. And we could really use some engineers to help because the large language models are going to be able to flesh out this domain properly because they do map meaning in a way that we haven't been able to manage technically before.
So I think the single biggest fight that has ever happened over technology, and there have been many of those fights over the course of the last, you know, especially 500 years, the single biggest fight is going to be over what are the values of the AIs. To your points, like what will the AIs tell you when you ask them anything that involves values, social organization, politics, philosophy, religion.

That fight, I think, is going to be a million times bigger and more intense and more important than the social media censorship fight.

And I don't say that lightly, because the social media censorship fight has been extremely important, but AI is going to be much more important, because AI is such a powerful technology that I think it's going to be the control layer for everything else. And so I think the way that you talk to your car and your house and the way that you organize your ideas, the way you learn, the way your kids learn, the way the healthcare system works, the way the government works, how government policies are implemented, AI will end up being the front end on all those things.
And so the value system in the AIs is going to be, you know, maybe the most important set of technological questions we've ever faced. As you know, out of the gate, this is going very poorly.
Yes. Right? And there's this question hanging over the field right now, you know, which you could sort of summarize as: why are the AIs woke? You know, why do the big lab AIs coming out of the major AI companies, why do they come out with the philosophy of a, you know, 21-year-old sociology undergrad at Oberlin College, you know, with blue hair, who's like completely emotionally activated, right? And you can see many examples of this: people, you know, have posted queries online that show that, or you can run your own experiments. And, you know, they basically have the fullest, you know, sort of version of this kind of fundamentalist, emotional, you know, kind of, you know, sort of far-progressive, absolutist wokeness coded into them.

You said up front that the presumption, you know, must be that they're just getting trained on, you know, more recent bad data versus older, you know, good data. There is some of that, but I will tell you that there is a bigger issue than that, which is these things are being specifically trained by their owners to be this way.
Yeah, yeah. Okay, so there's, okay, so let's take that apart because that's very, very important.
Okay, so like I played with Grok a lot and with ChatGPT. I've used these systems extensively and they're very useful, although they lie all the time.
Now, you can see this double effect that you described, which is that there is conscious manipulation of the learning process in an ideological direction, which is, I think, absolutely ethically unforgivable.
It even violates the spirit of the learning that these systems are predicated on. It's like, we're going to train these systems to analyze the patterns of interconnections between the entire body of ideas in the corpus of human knowledge, and then we're going to take our shallow conscious understanding and paint an overlay on top of that.
That is so intellectually arrogant that it's Luciferian in its presumption. It's appalling.
But even Grok is pretty damn woke. And I know that it hasn't been messed with at that level of painting over the rot, let's say.
And so, I think we've already described, at least implicitly, why there would be that conscious manipulation. But what's your understanding of the training data problem? And I can talk to you about some AI systems that we've developed that don't seem to have that problem and why they don't have that problem, because it's crucially important, as you already pointed out, to get this right.
And I think that, I actually think that to some degree, psychologists, at least some of them, have figured out how to get this right. Like it's a minority of psychologists and it isn't well known, but the alignment problem is something that the deeper psychoanalytic theorists have been working on for about 100 years.
And some of them got that because they were trying to align the psyche in a healthy direction. You know, it's the same bloody problem fundamentally.
And there were people who really made progress in that direction. Now, they aren't the people who had the most influence as academics in the universities, because they got captured by, you know, Michel Foucault, who's a power-mad hedonist for all intents and purposes, extraordinarily brilliant, but corrupt beyond comprehension. He is the most cited academic who ever lived.
And so the whole bloody enterprise, the value enterprise in the universities got seriously warped by the postmodern Marxists in a way that is having all these cascading ramifications that we described. All right, so back to the training data.
What's your understanding of why the wokeness emerges? It's present bias to some degree, but what other contributing factors are there? Yes, I think there's a bunch of biases. So there are three off the top of my head you just get immediately.
So one is just recency bias. You know, there's just a lot more present-day material available for training than there is old material because all the present-day material is already on the internet.
Right, number one. And so that's going to be an influence.
Number two, you know, who produces content is, you know, people who are high in openness, right? The creative class that creates the content is itself biased. And then there's the English language bias, which is like almost all of the trainable data is in English.
And what isn't is in a small number of other Western languages, for the most part. And so there's some bias there.
And then frankly, there's also this selection process, which is you have to decide what goes in the training data. And so the sort of humorous version of this is two potent sources of training data could be Reddit and 4chan.
And let's say Reddit is like super far left on average and 4chan is super far right. And I bet if you look at the training data sets for a lot of these AIs, you'll find they include Reddit, but they don't include 4chan, right? Right, right, right, right.
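(A minimal sketch of how that kind of source selection quietly encodes bias, using the hypothetical Reddit-in, 4chan-out example above; the source names and record fields are illustrative assumptions, with no claim about any actual lab's training set.)

```python
# Toy sketch: listing which sources go into the training set is
# itself an editorial decision, and an invisible one.

INCLUDED_SOURCES = {"wikipedia", "books", "reddit"}  # note: no "4chan"

def keep(example: dict) -> bool:
    """Admit a training example only if its source made the list."""
    return example["source"] in INCLUDED_SOURCES

raw_stream = [
    {"source": "reddit", "text": "..."},
    {"source": "4chan", "text": "..."},
    {"source": "books", "text": "..."},
]

training_data = [ex for ex in raw_stream if keep(ex)]
print([ex["source"] for ex in training_data])  # ['reddit', 'books']
# The excluded source never votes: whatever average viewpoint it
# carried is simply absent from what the model learns.
```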
So bias gets included that way. By the way, there is a very entertaining variation of this that is playing out right now, which is, you know, these companies are increasingly being sued by copyright owners, right, for training on material that's currently copyrighted, and, you know, most specifically books.
And so there are court cases pending right now. The courts are going to have to take up this question of copyright and whether it's legal to train AIs on copyrighted data or not, and on what terms.
And sort of one of the running jokes inside the field is if those court cases come down such that these companies can't train on copyrighted material, then for example, they'll only be able to train on books published before 1923. Right.
It should be an improvement, actually. Well, imagine for a moment, if you would, training on books before 1923.
You know, the good news on that is you don't get all of the last hundred years of insanity. The bad news is, you know, people before 1923 were insane in their own ways.
Yeah, right. Well, and also, you don't have the advantage of all the technological progress.
Yeah, exactly. And so these are very deep questions.
All of these questions have to get answered. You know, Elon has talked about this; like, Grok has some of this, he's working on that.
Having said that, I will tell you most of what you see when you use these systems that will disturb you is not from any of that. Most of it is deliberate top-down coding in a much more blunt instrument way.
Let me tell you about something that could truly transform your year. If you're looking for a New Year's resolution that'll actually stick past February and genuinely enrich your soul, I want to introduce you to Hallow, the world's number one prayer app.
Imagine having over 10,000 guided prayers and meditations right at your fingertips, helping you grow closer to God every single day. One of their most popular features is the Daily Reflection, where you can join Jonathan Roumie from The Chosen as he reads the Daily Gospel, followed by illuminating insights from biblical scholar Jeff Cavins.
And if you ever want to dive deeper into scripture, you've got to check out their world-famous Bible in a Year podcast with Father Mike Schmitz. He makes even the most complex parts of the Bible feel accessible and relevant to your daily life.
Short on time, no problem. Hallow offers everything from quick daily minutes to nightly sleep prayers, and you can customize it all to fit your schedule.
They make it super easy to build a lasting prayer routine with helpful reminders and an amazing community for accountability. Start your year off right by putting God first.
And here's the best part. You can get three months of Hallow completely free by going to hallow.com slash Jordan.
That's H-A-L-L-O-W.com slash Jordan for three months free. Don't wait.
Begin your spiritual journey with Hallow today. How is that done, Marc? Like, what does that look like exactly, you know? I mean, it's really nefarious, right? Because that means that you're interacting in a manner that you can't predict with someone's a priori prejudices.
And you have no idea how you're being manipulated. It's really, really bad.
And so, first of all, why is that happening? Like, if the large language models' value is in their wisdom, and that wisdom is derived from their understanding of the deep pattern of correlations between ideas, which is like a major source of wisdom, genuinely speaking, why pervert that with an overlay of shallow ideology? And why is the ideology in the direction that it is? And then how is that gerrymandering conducted? Yes, let me start with the how. So the how is a technique; there's an acronym for it.
It's called reinforcement learning from human feedback. And so in the field, it's called RLHF.
And RLHF is basically a key step for making an AI that works and interacts with humans, which is you take a raw model, which is sort of feral and doesn't quite know how to orient to people. And then you put it in a training loop with some set of human beings who effectively socialize it.
And so, right, reinforcement learning from human feedback, the key there is human feedback, right? You put it in dialogue with human beings and you have the human beings do something very analogous to teaching a child, right? Here's how you respond. Here's how you're polite.
Here's the things you can and can't say. Here's how to word things.
Here's how to be curious. All the behaviors that you presumably want to see from something you're interacting with that is sort of a human proxy kind of form of behavior.
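(A minimal sketch of the preference-learning step inside RLHF that this describes, with toy vectors standing in for model responses and a simulated rater in place of the human trainers; every name and number here is an illustrative assumption, not any lab's actual pipeline.)

```python
# Toy sketch of RLHF's reward-modeling step. The point it illustrates:
# whatever rule the human raters apply is exactly what the reward
# model learns to reproduce, and hence what the AI gets steered toward.

import torch
import torch.nn as nn

torch.manual_seed(0)
DIM = 16  # stand-in for a model's representation of a response

# Small reward model: scores a response representation with one number.
reward_model = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Simulated rater: in a real pipeline this is the human feedback step,
# where raters pick which of two responses is "better" -- polite vs.
# rude, one politics vs. another, whatever the rating rules reward.
rater_rule = torch.randn(DIM)

def rater_prefers_first(a: torch.Tensor, b: torch.Tensor) -> bool:
    return bool((a @ rater_rule) > (b @ rater_rule))

for step in range(500):
    a, b = torch.randn(DIM), torch.randn(DIM)  # two candidate responses
    chosen, rejected = (a, b) if rater_prefers_first(a, b) else (b, a)
    # Bradley-Terry pairwise loss: push the chosen response's score
    # above the rejected response's score.
    margin = reward_model(chosen) - reward_model(rejected)
    loss = -torch.nn.functional.logsigmoid(margin).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model is then used to fine-tune the base model
# (e.g., with PPO), steering it toward whatever the raters rewarded.
```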
That is a 100% human enterprise. You have to decide what the rules are for the people who are going to be doing that work.
They're all people. And then you have to hire into those jobs.
The people going into those jobs are in many cases the same people. This will horrify you.
They're the same people who were in the trust and safety groups at the social media companies five years ago. Oh, good.
Oh, that's great. Oh, that's wonderful.
Yeah, yeah. I couldn't imagine a worse outcome than that.
Because all the people that Elon cut out of the trust and safety group at Twitter when he bought it, many of them have migrated into these trust and safety groups at these AI companies. And they're now setting these policies and doing this training.
So the terrifying thing here is that we're going to produce hyper-powerful avatars of our own flaws. Right.
And so if you're training one of these systems and you have a variety of domains of personal pathology, you're going to amplify that substantially. You're going to make these giants; like, I joke with my friend Jonathan Pageau, who's a very reliable source in such matters, that we're going to see giants walk the earth again.
I mean, that's already happening, and that's what these AI systems are. And if they're trained by people who, well, let's say are full of unexamined biases and prejudices and deep resentments, which is something that you talk about in your manifesto, resentment and arrogance being like key sins, so to speak, we're going to produce monstrous machines that have exactly those characteristics, and that is not going to be good.
And that's like, you're absolutely right to point to this, as you know, to point to this as perhaps the serious problem of our times. If we're going to generate augmented intelligence, we better not generate augmented pathological intelligence.
And if we're not very careful, we are certainly going to do that, not least because there are way more ways that a system can go wrong than there are ways that it can, you know, aim upward in an unerring direction. And so, okay, so why is it these people, this is so awful, I didn't know that, who were, say, part of the trust and safety apparatus at Twitter, who are now training the bloody AIs?
How did that horrible situation come to be? It's the same dynamic. The big AI companies have the exact same dynamic as the big social media companies, which have the exact same dynamic as the big universities, which have the exact same dynamic as the big media companies, which is, right, you have these either formal or de facto cartels.
You know, you have a small handful of companies at the commanding heights of society that hire all the smart graduates. You know, as I say, take a step back.
You don't see ideological competition between Harvard and Yale. Right.
Like you would think that you should because they should compete in the marketplace of ideas. And of course, in practice, you don't see that at all.
You see no ideological competition between The New York Times and the Washington Post. You see no ideological competition between the Ford Foundation and, you know, any of the other major foundations.
They all have the exact same politics. You see no – prior to Elon buying Twitter, you saw no ideological competition between the different social media companies.
Today you see no ideological competition among the big AI labs. Elon is the spoiler. He is coming in to do an AI. He's going to try to do in AI what he did in social media, which is create the non-woke one. But without Elon, you know, you weren't seeing that at all.
And so you have this consistent dynamic across these sectors of what appears to be a free market economy, where you end up with these cartels, where they sort of self-reinforce and self-police, and then they're policed by the government. Anyway, so I want to describe the general phenomenon because that's what's happening here.
It's the same thing that happened to the social media companies. And then this gets into policy.
It's a very serious policy issues on the government side, which is, is the government going to grant these AI companies basically protected status as some form of monopoly or cartel in return for these companies signing up for the political control that their masters in government want? Or in the alternative, is there actually going to be an open AI universe, a true open AI, like truly open, where you're going to have a multiplicity of AIs that are actually in full competition, right, competing. And then you'll have some that are woke and you'll have some that are non-woke and you'll have some trained on new material and some trained on old material and so forth and so on.
And then people can freely pick. And the thing that we're pushing for is that latter outcome.
We very specifically want government to not protect these companies, to not put them behind a regulatory wall, to not be able to control them in the way that the social media companies got controlled before Elon. We actually want full competition, and if you want your woke AI, you can have it, but there are many other choices.
Well, can you imagine developing a superintelligence that's shielded from evolutionary pressure? That is absolutely insane. That's absolutely insane.
We know that the only way that a complex system can regulate itself across time is through something like evolutionary competition. That's it.
That's the mechanism. And so if you decide what, that this AI is correct by fiat, and then you shield it from any possibility of market feedback or environmental feedback? Well, that is literally the definition of how to make something insane.
And so now you talked about in some of your recent podcasts, you talked about the fact that the Biden administration in particular, if I got this right, was conspiring behind the scenes with the tech companies to cordon off the AI systems and make them monolithic. And so, can you elaborate a little bit more on that? Yeah, so this is this whole dispute that's playing out, and this gets complicated, but to provide a high-level view.
So, this is this whole dispute over so-called AI safety, right? And so, there's this whole kind of, you might call it, concern or even panic about: are the AIs going to run out of control, are they going to kill us all? Right. By the way, are they going to be racist? You know, all these different concerns over, you know, all the different ways in which these things can go wrong.
You know, there's this attempt to impose the precautionary principle on these AIs where you have to prove that they're harmless before they're allowed to be released, which inherently gets into these political, you know, these political questions. And so, anyway, the AI safety movement conjoins a lot of these questions into kind of this overall kind of elevated level of concern.
And then basically what has been happening is the major AI labs, basically they know what the deal is. They watch what happened in social media.
They watch what happened to the companies that got out of line. They watch the pressures that came to bear.
They watch what the government did to the social media companies. They watch the censorship regime that was put in place, which was very much a political, you know, top-down censorship regime.
And basically, they went to Washington over the course of the last several years, and they essentially proposed a trade. And the trade was, we will do what you want politically.
We will come under your control voluntarily from a political standpoint, the same way the social media companies had. And in return for that, we essentially want a cartel.
We want a regulatory structure set up such that a small handful of big companies will be able to succeed in effect forever, and then new entrants will not be allowed to compete. And in Washington, they understand this because this is the classic economic concept of regulatory capture.
This is what every set of major big companies in every industry does. And so the AI companies went to Washington and they tried to do that.
And basically what was happening up until the election was the Biden administration was on board with that. And that led to the conversations that I've talked about before that we had in the spring with the Biden administration, where they told us very directly, senior officials in the administration told us very directly, look, do not attempt to, do not even bother to try to fund AI startups.
There are only going to be two or three large AI companies building two or three large AIs, and we are going to control them. We are going to set up a system in which we control them, and they are going to be, you know, they're not going to be nationalized, but they're going to be essentially de facto integrated into the government, and we are going to do whatever is required to guarantee that outcome, and it's, you know, it's the only way to get to the outcome that we will find acceptable.
Okay, okay. Well, so there's so much in there that's pathological beyond comprehension that it's difficult to even know where to start.
It's like, who the hell thinks this is a good idea? And why? Like, who are these people that feel that they're in a position to determine the face of hyperintelligence, of computational hyperintelligence? And who is it that thinks that that is something that should be regulated by a closed government-corporate cartel? Like, I don't understand that at all, Marc. I don't know if I've ever heard anybody detail out to me something that is so blatantly both malevolent and insane simultaneously.
So, like, how do you account for that? I mean, I know it shocked you. I know that's why you've been talking about it recently.
No, it should shock you because it's just beyond comprehension to me that this sort of thing can go on. And thank God you're bringing it to light.
But, like, how do you make sense of this? What's your understanding of it? Well, look, it's the same people who think that they should control the education system. Same people who think they should control the universities.
Same people who think they should control social media censorship. You know, the same people who think that they should permanently control the government and government bureaucracies.
It's this, you know, pick whatever term you want. It's this elite class, ruling class, oligarchy class.
Worshippers of power. Remember, it's one ring of power that binds all the evil rings.
Yeah, well, it's worshipers of power. And the damn postmodernists, you know, when they proclaimed that power was the only game in town, a huge part of that was both a confession and an ambition, right? If power is the only game in town, then why not be the most effective power player? The reason I'm so sensitized to this is because this is exactly what I saw happen with social media censorship.
Like, I sat in the room and watched the construction of the entire social media censorship edifice every step of the way, going all the way back to the—I was in the original discussions about what defines concepts like hate speech and misinformation. Like, I was in those meetings, and I saw the construction of the entire private sector edifice that resulted in the censorship regime that we all experienced.
And I was close in to, you know, this whole group at Stanford University that became a censorship bureau that was working on behalf of the government. I know those people.
One of the people who ran that used to work for me. I know exactly who those people are.
I know exactly how that program worked. I knew the people in government, you know, who were running things like this, you know, the so-called Global Engagement Center and all these different arms of the, you know, the government that had been imposing social media censorship.
So, you know, this is this entire complex that we kind of saw unspooled in the Twitter Files. And then we've seen it in, you know, the investigative reporting by people like, you know, Mike Benz and Michael Shellenberger and these other guys. Like, I saw that whole thing get built.
And, you know, over the course of basically 12 years, I saw that whole thing get built. And then, of course, I've been part of Elon's takeover of Twitter.
And so, I've seen, you know, what it takes to try to unwind that with what he's doing at X. And so, I feel like I saw the first movie, right? And AI, as I said, is a much more important topic, but AI is very clearly the sequel to that.
And what I'm seeing is basically the exact same pattern that I saw with that. And the people who were able to do that for social media for a long time are the same kind of people, and in many cases, literally the same people who are now trying to do that in AI.
And so, at this point, I feel like we've been warned. We've seen the first movie.
We've been warned. We've seen how bad it can get.
We need to make sure it doesn't happen again. And yeah, we need those of us in a position to be able to do something about it, need to talk about it, and need to try to prevent it.
Well, so at ARC, we're trying to formulate a set of policies that I think strike to the heart of the matter. And the heart of the matter is what story should orient us as we move forward into the future.
And we're going to discover that by looking at the great stories of the past and extracting out their genuine essence. And I think the ethos of voluntary self-sacrifice is the right foundation stone. And I think that the proposition that society is built on sacrifice is self-evident once you understand it. Because to be a social creature, you have to give up individual supremacy. You trade it in for the benefits of social being.
And your attention is a sacrificial process too, because there's one thing you attend to at a time and a trillion other things that you sacrifice that you could be attending to. Now, I think we do understand, we're starting to understand the basics of the technical ethos of the sacrificial, of the, what would you say, of the sacrificial foundation.
It's something like that. And I think we understand that at ARC.
And we have some principles that we're trying to use to govern the genesis of this organization, which I think will become the go-to, and maybe already has, the go-to conference, at least, for people who are interested in the same sort of ideas that you're putting forward. We had a very successful conference last year, and the one that's coming up in February looks like it's going to be larger and more successful. We had spinoffs in Australia and so forth.

And so part of the emphasis there is that we want to put forward a vision that's invitational. And there's a policy proposition, there's a proposition with regards to policy that lies at the bottom of that, which is that if I can't invite you on board to go in the direction that I'm proposing, then there's something wrong with my proposition, right? If I have to use force, if I have to use compulsion, then that's indicative of a fundamental flaw in my conceptualization.
Now, there might be some exceptions for like overtly criminal and malevolent types, because they're difficult to pull into the game. But if the policy requires force rather than invitational compliance, there's something wrong with it.
And so what we're trying to do, and I see like very close parallels to the project that you're engaged in is to formulate a vision of the future that's so... Are you ready for a fresh start after the holiday indulgence? Make 2025 your healthiest year yet with Balance of Nature.
Those Christmas cookies and holiday feasts were great, but now your body's craving something different. That's where Balance of Nature comes in, the perfect way to reset your health this new year.
Getting your daily fruits and vegetables has never been easier. Balance of Nature takes fresh produce, freeze dries it to preserve nutrients, and delivers it in a convenient capsule you can take anywhere.
No additives, no fillers, no synthetics or added sugar, just pure fruits and vegetables in every capsule. It's that simple.
Are you ready to transform your health in 2025? For a limited time, use promo code Jordan to get 35% off your first order, plus receive a free fiber and spice supplement. Head over to balanceofnature.com and use promo code Jordan for 35% off your first order as a preferred customer, plus get a free bottle of fiber and spice.
That's balanceofnature.com. Promo code Jordan for 35% off.
Balance of nature. Promo code Jordan for 35% off your first preferred order, plus a free bottle of fiber and spice.
What would you say? So self-evidently positive that people would have to strive to find a reason not to be enthusiastically on board. And I don't think you have to be a naive optimist to formulate a vision like that.
We know perfectly well that the world is a far more abundant place than the Malthusian pessimists could have possibly imagined back in the 1960s when they were agitating madly for their propositions of scarcity and overpopulation. And so, okay, so what's the conclusion to that? Well, the conclusion in part is that this AI problem needs to be addressed, you know, and I've built some AI systems that are founded on the ancient principles, let's say, that do in fact govern free societies.
And they're not woke. They can interpret dreams, for example, quite accurately, which is very interesting and remarkable to see.
And so they're much more weighted towards something like the golden thread that runs through the traditional humanist enterprise stretching back two or three thousand years. And maybe there are 200 core texts in that enterprise that used to constitute the center of something like a great books program, the Great Books program, which is still running at the University of Chicago.
Now, that's not sufficient because, as you pointed out, well, there's all this technological progress that has been made in the last 100 years. But there's something about it that's central and core.
And I think we can use the AI systems, actually, to untangle what the core idea sets are that have underpinned free and productive, abundant, voluntary societies. Now, it's something like the set of propositions that make for an iterating voluntary game that's self-improving.
That's a very constrained set of pathways. And there's something in that that I think attracts people as a universally acceptable ethos.
It's the ethos on which a successful marriage would be founded or a successful friendship or a successful business partnership, where all the participants are enthusiastically on board without compulsion. And Jean Piaget, the developmental psychologist, had mapped out the evolution of systems like that in childhood play.
And so, he was trying to reconcile the difference between science and religion in his investigations of the development of children's structures of knowledge. And he got an awful long way in laying out the foundations of that ethos.
And so did the comparative mythologists like Mircea Eliade, who wrote some brilliant books on, well, I think they're sort of like the equivalent of early large language models. That's how it looks to me now.
Eliade was very good at picking out the deep patterns of narrative commonality that united the major religious systems across multiple cultures. That was all thrown out, by the way.
That was all thrown out by the postmodern literary theorists. They just tossed all that out of the academy.
And that was a big mistake. They turned to Foucault instead.
It was a cataclysmic mistake. And it certainly ushered in this era of domination by power narratives, which is underlying the sorts of phenomena that you're describing that are so appalling.
So what's happened to you as a consequence of starting to speak out about this? And why did you start to speak out? And how do you, you said you were involved in this. And so what's the difference between being involved and being complicit? I mean, I know people learn, well, these are complicated problems and people learn, but why are you speaking out? How are people responding to that? And how do you see your role in this as it unfolded over the last, say, 15 years? Yeah.
So, complicated question. And I'll start by saying, I claim no particular bravery.

So I don't claim any particular moral credit on this. I'll start by saying there's this thing you'll hear about sometimes, this concept of so-called F-you money.
And so, you know, right, it's sort of like, okay, people are successful. You make a certain amount of money.

Now you can tell everybody, you can say whatever you want.
And I will just tell you, my observation is that's actually not true. Yeah, right.
Definitely not. And the reason that's not true is because the people who prosper in our society tend to do so because they're becoming responsible for more and more things.
And specifically, they're becoming responsible for more and more people. And so one of the things I would observe about myself and about a lot of my peers is that even as we became more and more bothered and concerned and ultimately very worried about some of these things, as that was happening, we were taking on greater and greater responsibilities for our employees and for all the companies that we're involved in, right, and for all the shareholders of all of our companies.
And so I think that's part of it. And, you know, you could say there's this sort of endless question between the absolute commands of morality versus the real-world compromises that you make to try to function in society. I would say I was just as subject to that inherent conflict as anybody else.
I was in the room for a lot of these decisions. I saw it every step of the way.
In some cases, I felt right up front that something was going wrong. I mean, I was in the original discussion for one of these companies on the definition of hate speech, right? And you can imagine how that discussion goes; you know exactly how the discussion went.
But I'll just tell you, it's like, well, hate speech is anything that makes people uncomfortable, right? And so then I'm like, well, that comment you just made makes me uncomfortable. And so therefore that must be hate speech.
And then, you know, they look at me like I've grown a third eye and I'm like, okay, that argument's not going to work. And then they're like, well, Mark, surely you agree that the N-word makes people uncomfortable.
And I'm like, yes, I agree with that. If our hate speech policy is that people don't get to use the N-word, I'm okay with that, you know. But of course it doesn't stop there, and it slides into what we then saw happen.
So I saw that happen. The misinformation thing, same thing.
The misinformation thing on social media, actually, is a fascinating and horrifying thing that played out, which is it actually started out as an attack on a specific form of spam. So there were these Macedonian bot farms that were literally creating what's called click spam, or ad fraud, on social media.
They were creating literally fake news stories. The classic one was the Pope has died.
And it's like, no, the Pope has not died. That is absolutely misinformation.
But the reason that this bot farm puts that story out is because when people click on it, they make money on the ads. And that's clearly a bad thing, and that's misinformation, and clearly we need to stop that.
And so the mechanism was built to stop that kind of spam. But then after the election, you know, we discovered that anybody who was pro-Donald Trump was presumptively, you know, an agent of Vladimir Putin.
And then all of a sudden, that became misinformation, right? And so the engine that was intended to be built for spam all of a sudden got applied to politics, and then off and away they went. And then everything was misinformation, culminating in objections to three years of COVID lockdowns becoming misinformation, right? So I saw that entire thing unspool.
I saw all the pressures brought to bear on these companies. I saw the people who went up against this get wrecked.
I saw these companies try to develop all these trade-offs. Obviously, I would claim for myself that I tried to argue this kind of every step of the way.
And by the way, I'm not the only one who was concerned about this. And I think we should give Mark Zuckerberg a little bit of credit on this on one specific point, which is, you may recall, he gave a speech in 2019 at Georgetown, and he gave a very principled defense of free speech from first principles.
And he, at that point, was trying very hard to kind of maintain the line on this. Now, 2020, everything went like completely nuts.
And then the Biden administration came in, and the government came in, and they really lowered the boom. And so, things went very bad after that.
But even Mark, who a lot of people get very mad at on these things, he was trying in many ways to hold out on these things. Anyway, it unfolded the way that it did. I don't claim any particular courage.
I will tell you, basically starting in 2022, I saw some leaders in our industry really start to step up. And one that I would give huge credit to is Brian Armstrong, who's the CEO of Coinbase, which is a company that we're involved in. And you may recall, he's the guy who wrote basically a manifesto.
And he said, these companies need to be devoted to their missions, not every other mission in society. Right, right, right.
Right. And so he declared, like, there's going to be a new way to run these companies.
We're not going to have all the politics. We're not going to have the whole bring your whole self to work thing.
We're not going to have all the internal corrosion. We're going to have our mission, and then we're going to focus on that.
We're not going to take on the world's ills. And then he did this thing where he actually purged his company of the activist class that we talked about earlier.
And the way that he did that was with a voluntary buyout, where he said, if you're not on board with working at a non-political, non-ideological company that's focused on its own mission, not every other mission, then I will pay you to leave, you know, to go work someplace where you'll be able to fully exercise your politics. There are a bunch of other CEOs that have been following in Brian's footsteps more quietly, but they've basically been doing the same thing.
And a lot of these companies have turned the corner on this now, and they're working these people out. And then, quite frankly, the big event is, I think, this election. And people have all kinds of positive and negative takes on Trump.
And, you know, this gets into lots and lots of political issues. But I think that the Trump victory being what it was and being not just Trump winning again, but also Trump winning the popular vote and also simultaneously the House and the Senate, it feels like the ice has cracked.
You know, it's like maybe the pressure for the ice to crack was building over two years, but it feels like as of November 6th, it feels like something really fundamental changed where all of a sudden people have become basically willing to talk about the things they weren't willing to talk about before. Okay, let's go back to your manifesto.
So I wanted to highlight a couple of things in relationship to that. I had some questions for you, too.
Tell me, to begin with, if you would, why you wrote this manifesto. Maybe let everybody know about it first, why you wrote it and what effect it's had.
And then I'll go through it step by step, at least to some degree. And I can let you know what ideas we've been developing with the Alliance for Responsible Citizenship, and we can play with that a little bit.
So, what I experienced, and I'm 30 years in the tech industry now, you know, in the U.S. and Silicon Valley, what I experienced between roughly 1994, when I entered, through to about 2012 was sort of one way in which everything operated and one set of beliefs everybody had.
And then basically this incredible discontinuous change that happened between, call it 2012 and 2014, that then cascaded into what you might describe as some degree of insanity over the last decade. And of course, you've talked about a lot of aspects of that insanity.
But the way I would describe it is, for the first 15, 20 years of my career, there was what I refer to sometimes as the deal with a capital D, or you might call it the compact, or maybe just the universal belief system, which was effectively: everybody I knew in tech was a social liberal progressive in good standing. But, you know, operating in the era of Clinton-Gore, and then later on through Bush and into Obama's first term, it was viewed that to be a social progressive in good standing was completely compatible with being a capitalist, completely compatible with being an entrepreneur and a business person, completely compatible with succeeding in business.
And so the basic deal was: you have the exact same political and social beliefs as everybody you know. You have the exact same social and political beliefs as the New York Times, every day.
And their beliefs change over time, but you update yours to stay current. And everybody around you believes the same thing. The dinner table conversations are, everybody's in 100% agreement on everything at all times.
But then you go succeed in business, and you build your company, and you build products, and you build new technology, and if your company succeeds, it goes public, and people become wealthy. And then you square the circle of sort of social progressivism and entrepreneurial success and business success.
You square the circle with philanthropy. And so you donate the money to good social causes.
And then someday your obituary says he was both a successful business person and a great human being.
And basically what I experienced is that that deal broke down between, you know, 2012, 2014, 2015, and then sort of imploded spectacularly in 2017. And ever since, there has been no way to square that circle, which is: if you are successful in business, in tech and entrepreneurship, if you become successful, you are de facto evil.
And you can protest that you're actually a good person, but you are presumed to be de facto evil. And by the way, furthermore, philanthropy will no longer wash your sins.
And this was a massive change, and this is still playing out, but philanthropy will no longer wash your sins, because the belief goes that philanthropy is an unacceptable diversion of resources from the proper way they should be deployed, which is through the state. And so a private-enterprise form of philanthropy is now de facto considered bad.
And so everybody in my world basically had a decision to make, which was: did they go sharply to the left on not just social issues but also economic issues? Did they become starkly anti-business, anti-tech, essentially self-hating, in order to stay in the good graces of what happened on that side? Or did they do what Peter Thiel did early on and go way to the right and basically just punch out and declare, you know, I'm completely out of progressivism.
I'm completely finished with this and I'm going to go a completely different direction. And obviously, that culminated in, you know, that was part of the phenomenon that culminated in Trump's first election.
And so, anyway, long story short, the manifesto that I wrote is an attempt to kind of bring things back to, you know, what I consider to be a more sensible way to think and operate, you know, a big tent social and political umbrella, but, you know, where tech innovation is actually still good, business is still good, capitalism is still good, technological progress is still good, the people who work on these things actually are still good, and that actually we can be proud of what we do. You said that something changed quite radically in 2017.
I'd like you to delve a little bit more into the breakdown of this deal. Like your claim there was that for a good while, center left positions politically, let's say and philosophically, were compatible with the tech revolution and with the big business side of the tech revolution.
But you pointed to a transformation across time that really became unmistakable by 2017. Why 2017 as a year, and what is it that you think changed? You painted a broad-scale picture of this transformation and also pointed to the fact that it was no longer possible to be an economic capitalist, to be a free market guy, and to proclaim allegiance to the progressive ideals.
That became impossible. And in 2017, what do you think happened? How do you understand that? Yeah, so different people, of course, have different perspectives on this, but I'll tell you what I experienced.
And I think in retrospect, what happened is Silicon Valley experienced this before a lot of other places in the country and before a lot of other fields of business. And so I have many friends in other areas of business who live and work in other places where I would describe to them what was happening in 2012 or 2014 or 2016.
And they would look at me like I'm crazy. And I'm like, no, I'm describing what's actually happening on the ground here. And then, you know, three years later, they would tell me, oh, it's also happening in Hollywood, or it's also happening in finance, or it's also happening in these other industries.
So in retrospect, I think I had a front-row seat to this just because Silicon Valley was, you know, I've been using this term, first in. Silicon Valley was first in. Like Silicon Valley was the industry that went the hardest for this transformation up front.
And so what we experienced in Silicon Valley, and then, you know, the nature of my work, you know, over this entire time period, I've been a venture capitalist and an investor. And so the nature of my work is I've been exposed to a large number of companies all at the same time, some very small, and then, by the way, also some very large.
So for example, I've been on the Facebook board of directors this entire arc, right? And a lot of what I'm describing, you can actually see through just the history of the one company, Facebook, which we can talk about. But anyway, I think I basically saw the vanguard of the movement up close.
And essentially what I saw was, it was really 2012, it was the beginning of the second Obama term. And it was sort of the aftermath of the global financial crisis.
And so it was some combination of those two things. Right.
So the global financial crisis hits in 2008. Occupy Wall Street takes off, but it's this kind of fringe thing.
You know, the sort of, you know, Bernie Sanders starts to activate as a national candidate. Some of these, you know, other politicians on the sort of further to the left start to become prominent, start to take over the Democratic Party.
And then the economy caved in, right? So we went through a severe recession between, call it 2009 to 2011. 2012, the economy was coming back.
People maybe weren't worried about being fired anymore, right? If people think they're going to get fired in a recession, they generally don't act out at a company. But if they think their jobs are secure in an economic boom, they can start to become activists.
And so the sort of employee activist movement started around 2012. And then the Obama second term, I would say the progressives in the Democratic Party kind of took more control, kind of starting around that time.
And the Obama administration itself kind of turned to the left. And so you started to get this kind of activated political energy, the activist movements in these companies, where you had people who the year before had been a quiet web designer working in their cubicle, and then all of a sudden, they're a social and political revolutionary inside their own company.
And then, by the way, the shareholders activated, which was really interesting. This is when Larry Fink at BlackRock decided he was going to save the world.
And then the press activated. And so all of a sudden, you know, the same tech reporters who had been very happy covering tech and talking about exciting new ideas all of a sudden became kind of very accusatory and started to condemn the industry. So that started to pop around 2012.
And then what I saw, you might even describe it as like a controlled skid that became an uncontrolled skid, which was that energy built up in tech between 2012 and 2015. And then basically what happened in rapid succession was Trump's nomination and then Trump's election, his victory in 2016.
And I described both of those events as like 10xing of the political energy in this system. And so, you know, both of those events really activated, you know, very strong antibody responses, you know, which, as you know, culminated in like mass protests in the streets right after the 2016 election.
And then, of course, the narrative then became, you know, crystallized, which is there are the forces of darkness represented by Trump, represented by the right, represented by capitalism, represented by tech, and there are the forces of light represented by wokeness and, you know, the racial reckoning and, you know, the George Floyd protests and so forth. And it, you know, became this, you know, very clear litmus test.
And so, the pattern basically locked in hard in 2017 and then continued to escalate from there. So, in your manifesto, you list some of these ideas that were pathological, let's say, that emerged on the left.
And I just want to find it. Well, for example, you say technology doesn't care about your ethnicity, race, religion, national origin, gender, sexuality, political views, height, weight, etc., listing out the dimensions of hypothetical oppression that the intersectionalist woke mob stresses continually. Now, you point your finger at that, obviously, because you feel that something went seriously wrong with regard to the prioritization of those dimensions of difference.
And that's part of the movement of diversity. That's part of the movement of equity and inclusivity.
Let me just find this other one. Yes, here we go.
Our present society has been subjected to a mass demoralization campaign for six decades against technology and against life under varying names like existential risk, sustainability, ESG, sustainable development goals, social responsibility, stakeholder capitalism, precautionary principle, trust and safety, tech ethics, risk management, degrowth. The demoralization campaign is based on bad ideas of the past, zombie ideas, many derived from communism, disastrous then and now that have refused to die.
And that's in the part of your manifesto that is subtitled "The Enemy." The enemy you're characterizing there is a system of ideas.
And I guess that would be the system of woke ideas that presumes, and correct me if I get this wrong, that we're fundamentally motivated by power; that anybody who has a position of authority actually has a position of power; that the best way to read positions of power is from the perspective of a narrative that's basically predicated on the hypothesis of oppressor and oppressed; and that there are multiple dimensions of oppression that need to be called out and rectified.
And the DEI movement is part of that. And so you point to the fact that these are zombie ideas left over, let's say, from the communist enterprise of the early and mid-20th century.
And that seems to me precisely appropriate. And you said you thought those ideas emerged on the corporate front in a damaging way, first in big tech.
You know, I probably saw evidence of that most particularly in relationship to the scandal that surrounded James Damore. Because that was really cardinal for me, because I spent a fair bit of time talking to James, and my impression of him was that he was just an engineer.
And I don't mean that in any disparaging sense. He thought like an engineer.
And he went to a DEI meeting and they asked him for feedback on what he had observed and heard. And James, being an engineer, thought that they actually wanted feedback, you know, because he didn't have the social skills to understand that he was supposed to be participating in an elaborate lie.
And so he provided them with feedback about their claims, especially with regards to gender differences. And James actually nailed it pretty precisely for someone who wasn't a research psychologist.
He had summarized the literature on gender differences, for example, extremely accurately. And they pilloried him.
And I thought, that's really bad, because it means that Google wouldn't stand behind its own engineer when he was telling the truth. And there was every attempt made to destroy his career.
Now, why do you think that whatever happened affected tech first? And what did you see happening that you then saw happening in other corporations? Yeah, so why did it happen in tech first? A couple things. So one is tech is just, I would say, extremely connected into the universities.
And so almost everything we do flows from the computer science departments and the engineering departments at major U.S. research universities.
And we hire kids, new graduates, all the time. And so we just have a very, very tight connection, and we work with university professors and research groups all the time.
And so there's just a direct connection there. And so it's like if an ideological pathological virus is going to escape the university and jump into the civilian population, it'll hit tech first, which is what happened.
Or maybe, you know, tech and media first. So that's one.
And then two is, I think, the sort of psychological sorting that happens when kids decide what profession to go into. And, you know, what we get are the very high openness people.
You know, the highest openness people come out of college, who are also high IQ and ambitious. And they basically go into tech, they go into creative industries, or they go into media, right? That's where they sort into.
And so we get the most open. And by the way, also the ambitious, driven, as you say, high-industriousness ones as well.
And then, you know, that's the formula for a highly effective activist, right? And so we got the full load of that. And then look, this movement that we now call wokeness, it hijacked what I would call, at the time, bog-standard progressivism, which is: of course you want to be diverse, and of course you want to be inclusive.
And of course, you want everybody to feel included. And of course, you want to be kind.
And of course, you want to be fair. And of course, you want a just society.
And, you know, that was part of the just moderate belief set that everybody in my world had for certainly the preceding 20 years. And so at first, it just felt like, oh, this is more of what we're used to, right? This is, you know, of course this is what we want.
But, you know, it turned out what we were dealing with was something that was far more aggressive, right? You know, a much more aggressive movement. And then this activism phenomenon.
And then this became a very practical issue for these companies, like on a day-to-day basis. And so you mentioned the Damore incident.
So I talked to executives at Google while that was going down because that was so confusing for me at the time. And the reason they acted on him the way they did and fired him and ostracized him and did all the rest of it is because they thought they were hours away from actual physical riots on the Google campus.
Like, they thought employee mobs were going to try to burn the place down physically, right? And at the time, that was such an aberrant expectation. There were other companies, by the way, at the same time that were having all-hands meetings that were completely unlike anything that we'd ever seen before, that you could only compare to struggle sessions.
You know, there's the famous, the Netflix adaptation of Three-Body Problem starts with this very vivid recreation of a Maoist-era, you know, communist Chinese struggle session, right, where the students are on stage and, you know, the disgraced, you know, professor is on stage confessing his sins and, you know, then they beat him to death. And, you know, the inflamed passions of the young, ideologically, you know, consumed crowd that is completely convinced that they're on the side of justice and morality.
You know, fortunately, nobody got beaten to death at these companies on stage in an all-hands meeting. But you started to see that same level of activated energy, that same level of passion.
You started to see hysterics, you know, people crying and screaming in the audience. And so, you know, these companies knew they were at risk from their employees, up to and including the risk of actual physical riots.
And that at the time, of course, was like a completely bizarre thing. And we, you know, we at the time had no idea what we were dealing with.
But it was, in retrospect, it was through events like what James Damore went through that we ultimately did figure out what this was. Okay, okay.
So let me ask you a question about that. You know, it's a management question, I guess.
So I had some trouble at Penguin Random House a couple of years ago after writing a couple of bestsellers for them. I was contracted with one of their subdivisions and they had a bit of an employee rebellion that would be perhaps reminiscent of the sort of thing that you're referring to.
And they kowtowed to them, and I ended up switching to a different subdivision. Now, it really made no material difference to me, and I was just as happy to be with a subdivision where everybody in the company, visible and invisible, was working to make what I was doing with them successful rather than scuttling it invisibly from behind the scenes.
But my sense then was, why don't you just fire these people? And I'm dead serious about that. First of all, I'll give you an example.
So we just set up this company, Peterson Academy Online, and we have 40,000 students now and about 30 professors, and we're doing what we can to bring extremely high quality, elite university level education to people everywhere for virtually no money. And that's working like a charm.
Now, we set up a social media platform inside that so that people could interact like they do on Twitter or Facebook or Instagram, because we try to integrate the best features of those networks. But we wanted to make sure that it was a civilized place.
And so the fact that people have to pay for access to it helps that a lot, right? Because it keeps out the trolls and the bots and the bad actors who can multiply accounts beyond comprehension for no money. And so the mere price of entry helps.
But we also watched. And if people misbehaved, we did something about it.
And we kicked four people out of 40,000. And one of them we put on probation.
And that was all we had to do. You know, there was goodwill and everybody was behaving properly.
And like I said, there was a cost to entry. But it didn't take a lot of discipline.
It didn't take a lot of disciplinary action to make an awful lot of difference with regard to behavior. And so, you know, I can understand that Google might have been apprehensive about activating the activists within their confines.
But sacrificing James Damore to the woke mob because he told the truth is not a good move forward. And I just don't understand it at all.
You see, the same thing happened at Penguin, at Penguin Random House. It's like, you could just fire these people.
There were people there who wanted to not publish a book of mine that they hadn't even read. You know, they weren't people who deserved to be working at what's arguably the greatest publishing house in the world.
So, why, you alluded to it a little bit. You said that people were taken by surprise, you know, and fair enough.
And it was the case that there was a radical transformation in the university environment somewhere between 2012 and 2016, where all these terrible, woke, quasi-communist, neo-Marxist ideas emerged and became dominant very quickly. But I'm still, why do you think that that was the pattern of decision that was being made instead of taking appropriate disciplinary action and just ridding the companies of people who were going to cause trouble? Yeah, so there's a bunch of layers to it in retrospect.
And let me say that what you described is what's happening now. So in the last two years, a lot of companies actually are, at long last, firing activists, and we can talk about that.
And so I think the tide is turning on that a bit. But going back in time, between 2012 and let's say 2022, there's like a full 10-year stretch where what you're describing didn't happen.
I think there's layers. So one is, as I said, just people didn't understand it.
I think, quite frankly, number two, a lot of people in charge agreed with it, at least to start, right? And so they saw people who had what appeared to be the same political, ideological leanings as they did and were just simply more passionate about them. And so they thought they were on the same side.
They agreed with it. And then at some point, they discovered that they were dealing with something different, maybe a purer strain or a more fundamentalist approach.
At that point, of course, they became afraid, right? And so they were afraid of being lit on fire themselves. And by the way, I think tech is starting to work its way out of this. I think Hollywood is still not, and my friends in Hollywood, when I talk to them— Oh, not at all. Not at all.
When I talk to people who are in serious positions of responsibility in Hollywood, you know, after a couple drinks in sort of a zone of privacy, it's pretty frequently they'll say, look, I just can't—it's still too scary. Like, I can't go up against this because it'll ruin my career. So, you know, there is this group frenzy, cancellation, ostracizing, career destruction thing. That's real.
But let me highlight two other things. So one is, it wasn't just the employees.
It was the employees. It was a substantial percentage of the executive team.
It was also the board of directors in a lot of cases. And so you'd have politically activated board members.
And some of these companies still have that, by the way. It was also the shareholders.
And you would think that investors in a capitalist enterprise would only be concerned with economic return. And it turns out that's not true, because you have this intermediate layer of institutions like BlackRock, where they're aggregating up lots of individual shareholders.
And then the managers of the intermediary can exercise their own politics using the voting power of aggregated small shareholder holdings. And so you had the shareholders coming at them.
Then, by the way, you also had the government coming at them. And this administration's been very aggressive on a number of fronts.
We could talk about a bunch of examples of that, but you have direct government pressure coming at you. You have the entire press corps coming at you, right? And so it feels like it's the entire world, you know, bearing in on you, and they're all going to light you on fire.
And then that takes me to— Well, and that does happen. What we should also point out is that's not a delusion.
I mean, it's also, I think, the case that the new communication technologies that make the social media platforms so powerful have enabled reputation savagers in a way that we hadn't seen before, because you can accuse someone from behind the cloak of anonymity and gather a pretty nice mob around them in no time flat with absolutely no risk to yourself.
And, you know, there's a pattern of antisocial behavior that characterizes women. And this has been well documented for 50 years in the clinical literature.
Like, antisocial men tend to use physical aggression, bullying. But antisocial women use reputation savaging and exclusion.
And it looks like social media, especially anonymous social media, what would you say, enables the female pattern of aggression, which is reputation savaging and cancellation. Now, I'm not accusing women of doing that.
You've got to get me right here. It's that there are different pathways to antisocial expression.
One of them, physical violence, isn't enabled by technology. But the other one, which is reputation savaging and exclusion, is clearly abetted by technology.
And so that's another feature that might have made people leery of putting their head up above the parapet. You know, like in Canada, well, I'm still being investigated by the Ontario College of Psychologists, and I'm scheduled for re-education if they can ever get their act together to do that.
And I fought an eight-year court battle, which has been extremely expensive and very, very annoying, to say the least. And I don't think that there's another professional in Canada on the psychological or medical side who's been willing to put their head above the parapet except in brief, you know, in brief interchanges.
And the reason for that is it simply is too devastating. And so I have some sympathy for people who are concerned that they'll be taken out because they might be.
But, you know, by the same token, if you kowtow to the woke mob for any length of time, as the tech industry appears to be discovering now, you end up undermining everything that you hold sacred. I mean, you alluded to the fact that you'd hope that at least the shareholders would be appropriately oriented by market forces, by greed, to put it in the most negative possible way.
And you'd hope that that would be sufficient incentive to keep things above board, because I'd way rather deal with someone who's motivated by money than motivated by ideology. But even that isn't enough to ensure that corporations act in their own best economic interest.
So it is a perfect storm. And you alluded to government pressure as well.
And so maybe you could shed a little bit more light on that, because that's also particularly worrisome. And it's certainly been something that's characteristic, and is still characteristic, of Canada under Trudeau. Yeah, so there's a couple of things on that.
So one is, I should just note, and I'm sure you'll agree with me on this, there are many men who also exhibit that reputational destruction motive. Absolutely.
Men will use it. They typically don't in the real world.
But if the pathway is laid open to it on social media, let's say, and there's a particular kind of man who's more likely to do that too. Those are the dark tetrad types who are narcissistic and psychopathic and Machiavellian and sadistic.
Lovely combination of personality traits. And they're definitely enabled online.
So, yeah. So, we've had plenty of them as well.
Yeah. So, the government pressure side.
So, when this all hit, you know, like I said, nobody I knew understood what was happening. I didn't understand it.
And so I did what I do in circumstances like that: I basically tried to work my way backwards through history and figure out, you know, where this stuff came from. And I think, for pressure on corporations, the context for this is that there's this cliche that you'll hear, interestingly, from the left, which is, well, private companies can do whatever they want. They can censor whoever they want.
Private companies have total latitude to do whatever they want. And of course, that's totally untrue.
Private companies are extensively regulated by the government. Private companies have been regulated by a civil rights regime imposed by the government for the last 60 years.
That civil rights regime certainly has done many good things in terms of opening up opportunities for different minority groups to participate in business. But that civil rights regime put in place this standard called disparate impact, in which you can evaluate whether a company is racist or not on the basis of just raw numbers, without having to prove that they intended to be, right, in terms of who they select for their employees.
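To make the "raw numbers" standard concrete: the usual quantitative screen here is the EEOC's four-fifths (80%) rule of thumb, under which a selection process is flagged if one group's selection rate falls below 80% of the highest group's rate. Here is a minimal illustrative sketch in Python; the 0.8 threshold is the standard rule of thumb, but the hiring numbers are made up for illustration, and none of this is legal guidance:

```python
# Illustrative sketch of the EEOC "four-fifths" (80%) rule of thumb for
# disparate impact. All hiring numbers below are hypothetical.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants from a group who were hired."""
    return hired / applicants

def flags_disparate_impact(rate_a: float, rate_b: float, threshold: float = 0.8) -> bool:
    """Flag if the lower selection rate is less than 80% of the higher one."""
    lower, higher = sorted((rate_a, rate_b))
    return lower / higher < threshold

group_a = selection_rate(hired=50, applicants=100)  # 0.50
group_b = selection_rate(hired=30, applicants=100)  # 0.30

print(flags_disparate_impact(group_a, group_b))     # True: 0.30 / 0.50 = 0.6 < 0.8
```

Note that nothing in the calculation asks about intent or about the validity of the selection procedure; the raw rates alone trigger the flag, which is exactly the point being made here.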
And so companies, you know, predating the arrival of what we call woke, they already had legal and regulatory and political and compliance requirements put on them to achieve things like racial diversity, gender diversity, and so forth. I grew up in that environment.
I considered that totally normal for a very long time. I just figured that's how things worked, and that was the positive payoff from the civil rights movement and from the 1960s, and that was just the state of play.
And, you know, and by the way, it was, I think, manageable and good in some ways. And, you know, like, kind of on and away we went, like we could deal with it.
But basically, what happened was, when woke arrived, that regime was enormously intensified. And what happened was a sequence of events. Literally, there was a playbook where, for example, with DEI, activists and employees and board members would push you.
First of all, you had to start doing explicit minority statistical reporting. So you had to fully air in public any disparate impact, any differences in racial, gender, ethnic, or sexual representation relative to the overall population, in a statistical report you had to update every year.
And of course, they would tell you, as long as you issue this report, you're fine. Well, of course, that wasn't the case.
What followed the report was, okay, now you need what's called the Rooney Rule. And the Rooney Rule basically says you have to have statistically proportionate representation of candidates for every job opening relative to the overall population.
Right. So stop there for just a sec, because we should delve into that.
That's a terrible thing, because we can think about this arithmetically. It's like you have to have proportionate representation of all protected group members in all categories.
Okay, there's a lot of horror in those few words, because the first problem is those categories can be multiplied without end. And you see this, for example, with the continued extension of the LGBT acronym.
There's no end to the number of potential dimensions of discrimination that can be generated. And then, so that's an unsolvable problem to begin with.
It means you're screwed no matter what you do. But it's worse than that when you combine it with the doctrine of intersectionality, because you don't just have the additive consequence of these multiple dimensions of potential prejudice. So, for example, in Canada it's illegal to discriminate on the basis of gender expression. Okay, that's separate from gender identity.
So now there's a multitude of categories of gender identity, hypothetically. I mean, the estimates range from like 200 to 300.
But gender expression is essentially how you present yourself. I think it's technically indistinguishable from fashion, fundamentally.
And I'm not trying to be a prick about that. I mean, I've looked at the wording, and I can't distinguish it conceptually.
It's mode of self-presentation, hairstyle, dress, etc. And so that means you can't discriminate on the basis of whatever infinite number of categories of gender expression you could generate.
And then if you multiply those together, I mean, how many bloody categories do you need before you multiply them together? You have so many categories that it's impossible to deal with. So there's a major technical problem at the bottom of this realm of conceptualization that's basically making it, A, impossible for companies to comply, exposing them to legal risk everywhere, and, B, providing an infinite market for aggrieved and resentful activism.
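The arithmetic point is worth making explicit: intersectional categories are the Cartesian product of the dimensions, so their count is the product of the per-dimension counts, and it grows multiplicatively. A toy sketch; every dimension size below is an assumption chosen only to illustrate the growth, except the ~200 gender identity estimate cited in the conversation:

```python
# Toy illustration of the combinatorial explosion: the number of distinct
# intersectional cells is the product of the per-dimension category counts.
from math import prod

dimensions = {
    "race/ethnicity": 6,
    "sex": 2,
    "gender identity": 200,    # low end of the 200-300 estimate cited above
    "gender expression": 50,   # open-ended by definition; 50 is an assumption
    "sexuality": 10,
    "religion": 12,
}

cells = prod(dimensions.values())
print(f"{cells:,} distinct intersectional cells")  # 14,400,000

# Even a very large workforce cannot populate the cells, so proportionate
# representation across all of them is unsatisfiable in principle:
employees = 100_000
print(employees / cells)  # ~0.007 employees per cell on average
```

Adding even one more modest dimension multiplies the total again, which is why proportionate representation over intersectional categories is an unsatisfiable constraint rather than a hard-but-achievable one.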
Yeah, that's right. That's what we saw.
So reporting leads to candidate pools. With candidate pools, the pressure then is, well, you need to hire proportionately according to whatever these categories are, including all the new ones.
And then after hiring, step four is promotions. You need to promote at the same rate, right? And the minute you have that requirement, of course, any performance metrics are just totally out the window, because you can't have them, right? You just have to promote everybody identically.
And that's sort of the slide into the complete removal of merit from the system. And then, by the way, the fifth stage is you have to lay off proportionately, right? And so you're bound on the other side.
And what happens is precisely what I'm sure you know happens and what you've seen happen. What happens is a descent of the culture of the company into complete dog-eat-dog, us versus them. The employee base starts to activate along these identity lines inside the company.
These companies all created what are known by this incredible euphemism of employee resource groups, ERGs, which are basically segregated employee affiliation groups. And so now the employees aren't employees of your company.
The employees are members of a group who just happen to be at your company, and their group membership, along whatever axis we're talking about, ends up trumping their role as employees. And then you have this internal descent into accusations, into fear.
You have this incredible tokenization that takes place, the classic problem of affirmative action, where any member of an underrepresented group is assumed to have gotten hired only because of their skin color or their sex, which is horrible for members of that group. And so you get this downward slide.
Especially the competent ones. Especially the competent ones.
It's terrible for the competent ones, yeah. Exactly.
And so it's, you know, it's acid. You're pouring cultural acid on your company and the entire thing is devolving into complete chaos internally.
And what's happening is the activists and the press and the board and everybody else is pressing you to do this. And then the government on top of that is pressing you to do it.
And under this last administration, that reached entirely new heights of absurdity. So let me take a step back.
Once you walk down this path and go through all those steps, I believe there's no question you now have illegal quotas. And you have illegal hiring practices, and you have illegal promotion practices.
And by the way, you also have illegal layoff practices. I think under any reading of U.S. civil rights law, which says you are not allowed to discriminate on the basis of all these characteristics, you have worked yourself into a system in which you are absolutely discriminating on the basis of these characteristics, through actual hard quotas, which are illegal.
And so to start with, I think all of these companies that implemented these systems have ended up basically being on the wrong side of civil rights law, which is, of course, this incredibly ironic result. Right? They've all ended up with these illegal quotas. I mentioned Hollywood earlier. Hollywood has gone all in; they literally now publish their hard quotas.
The studios have these statements that say, by X date, 50% of our producers and writers and actors and so forth are going to be from specific groups. And again, you just read the Civil Rights Acts and it's like, okay, that's actually not legal, and yet they're doing it.
This last administration, the Biden administration, really hammered this in, and they put these real radicals in charge of groups like the Civil Rights Division of the Department of Justice. And the ultimate, bizarre expression of this was that SpaceX, one of Elon's companies, got sued by the Civil Rights Division of this Department of Justice for not hiring enough refugees, right, not hiring enough foreign nationals who had come in either illegally or through a refugee path, notwithstanding the fact that SpaceX is a federal contractor and is only allowed, in most of its employee base, to hire American citizens.
And so the government simultaneously demands of SpaceX that they only hire American citizens and that they hire refugees. And the government views no responsibility whatsoever to reconcile that.
You're guilty either way, right? And then again, in general, companies are in this bind now where if they do everything they're supposed to do, they end up in violation of the civil rights law, which they started out by trying to comply with.
And this has all happened without reason and rational discussion. This has all happened in a completely hysterical emotional frenzy. And what these companies are realizing is they're now on the other side of this and there's just simply no way to win. Well, there's an analog to that, which is very interesting.
I mean, I started to see all this happen back in 1994, because I was at Harvard when The Bell Curve was published. And I watched that blow up the department at Harvard, and it scuttled one of my students' academic careers for reasons I won't go into.
But, well, I was working with that student on developing validated predictors of academic, managerial, and entrepreneurial performance. I was very interested in that scientifically.
What can you measure that predicts performance in these realms? And the evidence for that is starkly clear. The best predictor of performance in a complex job is IQ.
And psychologists tore themselves into shreds, especially after The Bell Curve, trying to convince themselves that IQ didn't exist. But it is the most well-established phenomenon in the social sciences, probably by something approximating an order of magnitude.
So if you throw out IQ research, you pretty much throw out all social science research. And so that turns out to be a big problem.
Now, personality measures also matter. Conscientiousness, for example, for managers and openness, which you mentioned earlier, for entrepreneurs.
But they're much less powerful, about one-fifth as powerful as IQ. Now, the problem is that IQ measures show racial disparities.
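As an aside, the "one-fifth as powerful" figure is roughly what you get if you read commonly cited validity coefficients in variance-explained terms. A back-of-the-envelope sketch; the correlations below are rounded, assumed literature values, and estimates vary across meta-analyses:

```python
# Back-of-the-envelope reading of "about one-fifth as powerful as IQ."
# Assumed, rounded validity coefficients: general mental ability around
# r = 0.5 for complex jobs, conscientiousness around r = 0.22.
iq_r = 0.5
conscientiousness_r = 0.22

# Variance in job performance explained is the square of the correlation.
iq_variance = iq_r ** 2                # 0.25
c_variance = conscientiousness_r ** 2  # ~0.048

print(c_variance / iq_variance)        # ~0.19, i.e., roughly one-fifth
```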
And that just doesn't go away, no matter how you look at it. Now, at the same time, the US justice system set up a system of laws that govern hiring that said that you had to use the most valid and reliable predictors of performance that were available to do your hiring, your placement and your promotion, but none of those could produce disparate impact, which basically meant, as far as I can tell, whatever procedure you use to hire is de facto illegal.
Now, lots of companies, and I don't know why this hasn't become a legal issue, you could say, well, we use interviews, which most companies do use. Well, interviews are not valid predictors of performance. They're not much better than chance.
Structured interviews are better, but ordinary interviews aren't great at all. So they fail the validity and reliability test.
And so I don't think there is a way that a company can hire that isn't illegal, technically illegal, in the United States. And then I looked into that for years, trying to figure out how the hell did this come about. And the reason it came about is because the legislators basically abandoned their responsibility to the courts and decided that they were just going to let the courts sort this mess out. And that would mean that companies would be subject to legal pressure and that there would be judicial rulings in consequence, which would be very hard on the companies in question.
But it meant the legislators didn't have to take the heat. And so there's still an ugly problem at the bottom of all this that no one has enough courage to address.
And so, but the upshot is that, as you pointed out, companies find themselves in a position where no matter what they do, it's illegal. I've had lawyers literally write analysis for this as I've been trying to figure it out, you know, employment law lawyers.
And like literally, you read the analysis and it's very, you know, it is absolutely 100% illegal to discriminate on the basis of these characteristics. And it is 100% absolutely illegal to not discriminate on the basis of these characteristics.
And that is true, right? And both of those are true. It is both illegal to hire—you know, you mentioned interviews.
Interviews are an ideal setting for bias because, you know, even if you just assume most people like people who are like themselves, right? You know, is a member from a certain group going to be more inclined to hire members from that group? You know, probably yes, just if there are no other parameters. And so precisely, you want to get to quantitative measures because you want to take that kind of bias out of the system, but then the quantitative measures are presumptively illegal because they lead to bias through disparate impact.
Yeah, and so, you know, maybe the term is Kafka trap, right? You end up in this vise, and then everybody is just so mad that you can't even have the discussion. And so this is the downward spiral.
On the one hand, I think there's a lot of this that just fundamentally can't be fixed, because a lot of these assumptions, a lot of this stuff, got baked in going back to the 1960s and 1970s. So a lot of this is long-since settled law, and I don't know that anybody has the appetite to reopen Pandora's box on this.
Having said that, this new administration, the Trump administration coming in, I would say every indication is that the Trump administration's policies and enforcement are going to flip to the other side of this. And so one of the things that's very fascinating about what's happening in business right now is a lot of boards of directors are now basically having a discussion internally with their legal team saying, okay, we cannot continue to do the just overt discriminatory hiring and employee segmentation that we've been doing.
We're not going to be permitted to. And so we have to back way off of these programs.
And you're already seeing Fortune 500 companies starting to shut down DEI programs. And I think you're going to see a lot more of that because they're going to try to come into compliance with what the new Trump regime wants, which will be on the other side of this.
But the underlying issues are likely to stay unresolved. In retrospect, and maybe this is too optimistic on my part, but in my time in business, the 80s, 90s, 2000s, it felt like we had a reasonable detente.
And although you ideally might want to get in there and figure this stuff all out, as long as it's kind of kept to a manageable simmer, you can kind of have your cake and eat it too, and people can kind of get along and it's okay. Maybe it's not a perfectly merit-based system, or maybe there's issues along the way, but fundamentally, companies worked really well for a long time.
If you can work your way out of this sort of elevated level of hysteria. And optimistically, I would say that that's starting to happen.
And the change in legal regime that's coming, I think, will actually help that happen. Right.
So you're optimistic because you believe that the free market system is flexible enough to deal with ordinary stupidity, but like insane, malevolent stupidity is just too much. Yeah. I think that's reasonable, you know?
Well, I do think that's reasonable, because everything's a mess all the time and people can still manage their way forward. But when you have a policy that says, well, any identifiable disparate outcome with regard to any conceivable combination of groups is an indication of illegal prejudice, there's no way anybody can function in that situation. Because those are impossible constraints to satisfy.
And they lead to paradoxical situations like the one you described Musk's company as being entangled in, right? That's just so frustrating for anybody that's actually trying to do something, you know, that requires merit, that they'll just throw up their hands. And so, yeah, yeah, yeah.
Okay, so I'm going to stop you there, because we're out of time on the YouTube side, but that's a good segue for what will continue on the Daily Wire side because we've got another half an hour there. And so for all of you watching and listening, join us, join Mark and I on the Daily Wire side because I would like to talk more about, well, what you see could be done about this moving forward with this new administration and how you're feeling about that.
I mean, you made a decision, I guess, early in 2023, like so many people, to pull away from the Democrats and toward Trump, strange as that might be. And I'd like to discuss that decision and then what you see happening in Washington right now and what you envision as a positive way forward, so that we can all rescue ourselves from this mess before we make it much deeper than it already is. So for everybody watching and listening, join us on the Daily Wire side. And Mark, thank you very much for talking to me today. I hope we get a chance to meet in San Francisco in relatively short order. And I'm also looking forward to continuing our discussion in a couple of minutes. Join us, everybody, on the Daily Wire side.
Good. Thank you, Jordan.