Ep 28 | Ryan Khurana | The Glenn Beck Podcast
Transcript
The world is about to change, and if you feel a little overwhelmed, or you're not sure what to make of it, is it a sci-fi movie?
Is any of this stuff possible?
Drones that can identify people and kill them automatically?
Is social media, is Google, nudging us one way or another?
What does it mean to even have privacy?
Gene splicing, making genetically perfect children.
What is the future?
You don't want to miss my conversation today with Ryan Khurana, Technology Policy Fellow from Young Voices.
He's also the executive director of the Institute for Advancing Prosperity.
It's a Canadian nonprofit organization focusing on the social impacts of technology.
He graduated from the University of Manchester where his dissertation was on the impact of artificial intelligence.
So how do we navigate all of these pitfalls of these revolutionary technologies?
What does the world look like in a year, five years, ten years?
What are the benefits?
What are the safeguards to liberty?
We are now experiencing emerging technology that is going to change all of our lives.
What does it mean for you?
Our conversation with Ryan Khurana.
So, let me get a feel for you before we go into this.
I am someone who believes that there are two possibilities, and maybe a mixture of the two, but I think it's going to lean hard one way or another.
That this new technology is going to provide mankind with more freedom, with experiences that we can't even imagine now.
Literally, in 10 years, our life will be completely different.
And it could be fantastic.
It also could either be the biggest prison or the end of humans as they are today.
Which camp are you in?
I would say I'm in neither camp.
I think both of those are far-flung possibilities.
And if we look at technological advances throughout history, it's always been that as soon as a new technology comes out, it causes mass panic, it causes a lot of crises.
One of the most famous examples would be the printing press.
As soon as the printing press comes out, you completely change the way society functions.
30 years of war and chaos, and Europe has to completely reorganize the very conception of how it works.
After that, you have a lot more prosperity.
What technologies do is they challenge existing orders, and it doesn't inevitably lead to prosperity, and it doesn't inevitably lead to chaos.
But while that change is occurring, people have an incentive to try to figure out how to best manage these technologies, how to best utilize them, how to adapt to the new world they create.
And then you find this equilibrium where things are slightly better, or much better, or slightly worse.
And that's manageable.
Okay, so I think some of what people are feeling right now is that everything seems to be in chaos.
And that's because the systems, no matter what you're talking about, are all breaking down, because the world doesn't work this way anymore, you know?
We have all this new technology, which is not functioning with institutions that feel like 1950, you know?
And so you know that we're on the verge of change.
That's causing anxiety.
But, like, for instance, the Industrial Revolution, that changed a lot, but it was over a hundred years.
This change is happening so fast, and it is so dynamic, and it is so pervasive.
How do you not see us, I mean, let's just start with surveillance capitalism.
Blessing and a curse.
It is providing us with services that were beyond our understanding even 10 years ago.
But it is also monitoring us at all times.
And it could be used like it's being used in China.
And are you concerned at all about that here?
So when we talk about surveillance capitalism, the production of so much data, we have to really step back and ask, what are we worried about?
Are we worried about the data collection itself?
Or are we worried about people using it in harmful ways?
Yes.
Using it in harmful ways.
Using it in harmful ways.
And in many cases, what we need to do there is kind of step back and let it sit in a system.
For example, the way a lot of companies use your data for their algorithms: nobody's looking at that data, nobody's really analyzing what you're doing, and no human being is making a decision that can affect your life.
But a system is at work isolating the points of that data which are beneficial to you.
And as long as the correct incentives are in place for companies to use that in a way that's beneficial to you, I don't find that worrisome.
What I do find worrisome is if we have institutions that start to break down, if we have these companies act with your data in such a way that they can do anything and no one holds them accountable.
But there is no reason that that would be the case.
And there's no reason that that data collection alone enables that to be the case.
But we know that they are nudging us.
You know, and that is just as evil as some guy with a, you know, curly mustache who's like, I'm going to control the world.
They are set on a mission that they believe in, and it's going to be left or right, but they believe what they're doing is right: that they know the voices that are hateful, they know the voices of peace and prosperity, and they're selecting, and just through their algorithms, they can nudge.
And we know that to be true.
We know that they're doing that now.
But that nudging exists in all spheres of life.
It's not like before the internet, when we just had cable TV, we weren't being nudged.
There was a much smaller selection of channels and options, and each one had its agenda and pushed you in a certain direction.
Right now the concern is not about the nudging; it's about how many points of contact I have on the internet to make decisions.
Does one person nudge everybody, or are there different options for me to go to, and I can choose and select based on what I like the most?
So, this is really a competition question.
If these companies are monopolies, all right, we have some concerns that that nudging is worrisome.
Or if those companies are in bed with a government, yes, that's also similarly concerning.
You have that in places like China, where you were mentioning earlier.
You have a member of Congress this week suggesting that when it comes to vaccinations, that there should be a public-private partnership between YouTube and Google and Twitter to remove those voices that say vaccinations are bad.
I happen to be pro-vaccination, but I think everybody should be able to make their own decision and you should never ban books, ban opinions.
When it comes to a question like vaccinations, I actually kind of believe that most of these companies are really headed in a different direction than the United States government.
We have Google pulling out of Department of Defense contracts and the like.
They're not embedded with the United States the way a lot of large companies were during the Cold War.
At the same time, though, Google is in bed with China.
I wouldn't call it in bed.
They do have a Beijing research center.
They are trying to leverage the vast amount of data that's produced in China.
Remember, they have a lot more people, and those people use the Internet far more than Americans do.
And so that's a very valuable resource for these companies to develop better technologies and for them to open to new markets and be profitable.
But that doesn't mean they're in bed with the Chinese government.
Dragonfly?
And they said they weren't doing it, but new reports out now, internal reports, say they are still working on Dragonfly.
So we have to remember what something like Project Dragonfly is.
Google has tried to go into China many times, and it always met resistance and had to pull out. Project Dragonfly is Google's attempt to make their search engine compatible with what the Chinese government allows in their country.
And to them, that's a market to make more profit and also a market to protect themselves against Chinese competitors that become more internationally dominant than they are.
Okay, so let's see if we can, and if this doesn't work, just let me know, let's see if we can divide this into two camps to start.
One is the 1984 camp; I would call it China 2025.
Would you agree that the 5G network from China is a way for them to control information around the world?
I wouldn't go that far.
I would not go that far.
I do believe that companies like Huawei, who are world leaders in 5G infrastructure, may present national security concerns if they control the majority of the infrastructure that is built in a country like the United States.
That is different than saying that it is a 5G plan to control information around the world.
I think that that is...
Well, that's China.
That's their stated plan, China 2025.
Well, Made in China 2025 is more about becoming the technological superpower.
And that goes in line with what the United States has tried to do forever.
The Chinese want to be richer than us.
That makes sense for a country as large as theirs.
But you also have China doing something that we would never do, and that is full surveillance, China 2020, full surveillance with a social credit score that is so dystopian that we can't even get our arms around it.
So they come at things differently than we do.
Oh, no, absolutely.
And I think that's what makes the conversation about China's technological vision more complicated.
They come at things very differently than the United States does.
When we talk about the social credit system,
if you look at the way that it's being implemented in a lot of different areas in China, they don't have that unified national vision yet.
That's what they're trying to get to.
But in some places, I was reading, in rural towns they have the elderly go around and give people stars when they do good things.
And these are not government reports; these are independent scholars going in and interviewing people.
Most people like the program.
It has a very high approval rating because they view their society as so untrustworthy that these little nudges to care about your community more and be a more moral citizen are welcomed.
Now, the reason why that wouldn't fly in a place like the United States is that historically, a nation like China sees it as a role of the government to help boost up moral values and make the people a more unified community.
And I don't think the United States would allow our government to enforce moral values here.
You are the happiest guy I think I've met in tech.
They're enforcing moral values.
They're also the country that slaughters people by the millions.
And they're building, uh, what do they call them? They're not re-education camps; it's almost like a community college, you know, for the Muslims over in China.
So I'm not going so far as to defend what China's doing or welcome it here, but I'm saying it fits with the cultural vision, not only the one the Chinese have of their government, but what the government has gotten away with doing before.
Right.
Russia is the same.
Russia is the same way; it's what the people are used to.
We are not used to being spied on.
And we wouldn't, well, maybe we would.
I'd hope we wouldn't tolerate that.
However, we seem to be headed in that direction.
And so one is 1984, where if you get out of line...
You know, I think one of the reasons why they're doing this is they are afraid of their own people in revolution.
If there's a real economic downturn, they need to have control.
We have it, on the other hand, where I don't think anybody is necessarily nefarious here in America.
I think everybody's trying to do the right thing.
However, at some point, the government is going to say, you know, you guys have an awful lot of information on people, and you can help us.
I'm not a fan of the Patriot Act.
Maybe you are.
But you can help us.
And Silicon Valley will say, we don't want Washington against us.
Washington will say, we don't want Silicon Valley against us.
So let's work together a little bit.
And to me, that is frightening, because it's more of Brave New World.
We're handing everything over for convenience.
We're just handing everything to people.
I think between those two scenarios, the Brave New World one is far more likely.
And the reason why, I think, is a lot of people, I would call it uncritically, adopt new things.
It's convenient, and I don't know what I'm getting into, and that convenience is worth the trade-off.
And by the time that trade-off is made known to you, your life is so convenient with something new that you can't go back.
And so if one of those two possibilities were to happen, it would likely be the one where we agree to pacify ourselves.
That is not to say that this is the path that we're necessarily on.
And to me, this is the reason why I'm in tech policy and why I think that this is such an important field, because what is lacking is this communication.
What scientists and technologists do, it's impressive stuff, but it's hard for most people to understand.
And most of them aren't that great at communicating what they're doing.
And the public en masse can't get into that.
And the journalists in between, most of the people commentating, the people who have historically been those translators, have an incentive to hype it up, not to really make it clear to you.
And there's a gap in people who can translate this stuff effectively so the public can be engaged.
So that's why I'm excited to have you on.
I've talked to several people in Silicon Valley.
I've had Ray Kurzweil on and talked about the future with him.
And it is important, because I don't think anybody in Silicon Valley is talking to, or being heard by, 90% of the country.
And what they're doing is game-changing, and it will cause real panic as it just hits.
And you have a bunch of politicians who are still saying, we're going to bring those jobs back.
No, you're not.
That's not the goal for most people in Silicon Valley.
The goal for a lot of people is 100% unemployment.
How great would it be if no one had to work?
You worked on what you wanted to.
So you have one group going this direction.
Then you have the politician saying, we're going to bring jobs back.
At some point, there's going to be a breakdown and people are going to have to retool.
You have people, I'm trying to remember Mitt Romney's old company that he was with.
Bain Capital.
Bain Capital.
Bain Capital says we're going to have 30% permanent unemployment by 2030.
I don't know if that's true.
People always say those things.
However, you and I both know, I think, that our jobs are not going to be like they are now.
Oh, absolutely.
Right.
So there's at least a lot of upheaval and retraining, and that's going to be hard for people over 50.
And nobody's talking to them.
Yeah.
And I think that's a very important concern.
And I think there are two points that you brought up that are worth touching on.
One is, yes, retraining is hard for people over 50.
And this is what's happened in almost every industrial revolution we've had thus far.
We remember the Industrial Revolution as being, we have all these new technologies, the world is much more productive, we're all happier.
It was misery for a lot of the people living through it who had to uproot themselves from rural communities and pack into unsanitary urban centers.
It took time for us to learn how to develop the institutions and the kind of governance needed to make sure that this is better than it was before, that this opportunity was taken advantage of.
And we're going through a similar upheaval right now.
And the people that are most affected by the kinds of automation occurring are usually older people who have been at one company for their entire life, who have learned something very specialized and applicable to that company.
When that job disappears, they don't really know how to apply those skills to something different.
And number two are young people just entering the workforce.
A lot of them do routine work.
Routine work is easier to automate.
Those jobs aren't as common.
And I think this pretty well parallels the two types of people who are most frustrated with the current political scene.
Young millennials looking for work and older people who've lost their classical jobs.
And so you're right.
We have to talk to them.
We have to figure out how do we address their concerns.
But the second point that you brought up is Silicon Valley doesn't talk to 90% of the country, but they're going to get their way anyways.
I don't agree that that's the case.
And the reason why that's not the case is most of these technologies, if you look at the cool advancements happening in artificial intelligence right now, they're not being filtered into the real world at all.
They're fancy lab experiments.
And the reason why is most people have no idea how to use them.
They don't know how to put them into their businesses.
They don't know how to reorganize their factories to leverage these improvements.
And unless Silicon Valley talks to the other 90%, these technologies will be for them, and you'll have a couple of people get really rich off of them.
They don't really make that wide of an impact, though.
But historically, they have diffused.
The best example of this is electricity.
At the end of the 1800s, the first electrical power plant opens.
It takes till the 1930s for the United States to be 50% electrified.
That's because if you go to a business in the early 1900s and tell them to use electricity, they think, okay, I save a couple of bucks on my power bill.
But then over time, you realize, well, I can completely change my factory layout.
I can do a lot more cool things.
I can really revolutionize the way I organize society with electricity.
And then you get a boom of change and you really make everyone's lives much better because you realize what power you had.
And until you realize what power you had, a few benefit.
And a lot of people are either unaffected and a few are negatively affected.
So I agree with you.
The only difference is the speed at which we're traveling.
You know, it's funny you brought up electricity because that was going to be my example to you.
Late 1800s, you know, for the Chicago exhibition, we have Niagara Falls generating power.
So the Chicago World's Fair is the first time we'd ever seen a city light up.
The 1930s, you know, because of the Depression: hey, let's build some power plants all around.
That's a long time for people to get used to it.
You know, most people will say that by the time we hit 2030, 2035, the rate of change is going to be breathtaking.
It's true that there's a lot more coming out today.
So it's not something isolated.
And we adapt faster.
I mean, when you think we've only had an iPhone, a smartphone since what, 2008?
Yeah, 2006, I think, the first.
That's crazy.
That's crazy.
It's everywhere now.
Yeah.
Around the whole world.
And this goes back to the point that cultural adaptation can get rapid when these things diffuse rapidly.
The question is about all of those other institutions that build up around it.
So you brought up the iPhone.
The smartphone really enabled a lot of the kinds of revolutionary potential that people predicted from the internet when it was announced in the 1980s.
They're like, oh, this is going to change the way we work, the way we communicate, the way we do business, all of that.
Kind of happened.
Smartphone comes out.
You're like, okay, now that potential is realized because I have the internet with me wherever I go.
And so there are people who try to make us aware of what's happening, who try to adapt us to it beforehand, because we can all kind of see into the near future.
When we get slightly further into the future, it becomes fuzzier, and people are competing on their predictions.
And the people that get to voice their opinions most are either the ones that are the most optimistic or most dystopian, as the starting of this discussion pointed out.
And they dominate the public view.
But if we start talking about, hey, how do we think about the next 10 years, and we have these more modest understandings of what's happening, people can adapt pretty quickly, and they can use that adaptation time to understand what they're getting into and use it positively.
That's the main point we're hoping for: that people can critically use technologies well.
Let's go to 5G here, because 5G, wouldn't you say, is the biggest game changer on the horizon?
I think 5G is a crucial infrastructure for all of the other interesting technologies that are in development to actually make a dent.
Okay, so explain to people what 5G means, what it can do.
So 5G is just the next step of wireless communications after 4G: much lower latency, faster speeds, and it should be cheaper for everyone to use.
And what that enables, if there's universal access to 5G: let's take, for example, cloud computing, which right now is used by a lot of enterprise companies.
Google, Microsoft, and Amazon are the three big providers of it for most people.
What cloud computing does is you don't have to spend a lot of money on storage, you don't have to spend a lot of money on software, you don't have to spend a lot of money on computing power.
You use the internet, you use their servers, and you can do that at home.
Now, we have these cool AI technologies that optimize things really well.
These are very data-hungry, these are very hardware-intensive.
If you don't have cloud computing, only the richest of the rich can have access to this.
But then, as you have cloud computing, now everyone has access to it on a rental basis.
But if the internet speeds are too low, no one can really take advantage of this and the bias is towards people with that physical hardware.
5G enables this to spread.
And so a lot of the kinds of technologies we want to see make an impact in the world can't really do it as much unless there is 5G infrastructure.
So with 5G, the latency issue pretty much goes away.
That will allow us, we've talked about doctors performing surgeries around the world with a robot.
That 5G technology, as long as everybody has it, allows that doctor to go in and do that surgery now, correct?
Yeah, anything that requires the use of something over large distances would be much easier and more efficient with 5G technology.
And so right now, you still need to have something physical, and you need to be in the room for a lot of things to occur, because the internet is slow and not as reliable.
Right.
Let me ask you this.
It's my understanding that 5G makes self-driving cars much more of a reality.
Absolutely.
Because it is my understanding, and help me out if I'm wrong.
Right now, we think the car just needs to know where it's going and what's in front of it.
But the way it's really imagined is it will connect with everything around it.
So it will know, you won't know, but the car will know, who's in the car next to you, who's in the car in front of you, behind you, on the sidewalk, et cetera, et cetera, because eventually it will make the decision of what's the best way to go.
Well, we have to be careful with the word "know."
It will make a judgment.
It'll be a moral machine.
So when we have a self-driving car, it doesn't actually see around it.
Computers can't really understand the world or represent it the way humans do.
Right, right, right.
And so the way it has to work is it's pinging off everything around it, and it creates a network, and it makes decisions based on that network.
If we didn't have 5G and we have a low penetration of self-driving cars, so only a couple of people have it, like the people on Tesla autopilot, we're not taking advantage of the revolutionary potential of this technology.
Because if you think about it, what's one of the reasons why traffic is so horrible in most cities?
It's because stoplights and turns are really inefficient.
Because every time one person makes a turn or one person stops, everyone else doesn't stop immediately.
They stop slightly later, and this piles up, and this makes the entire grid very inefficient.
Self-driving cars don't have to worry about that.
They can, with millimeters of precision, understand how far away the other car is.
And that requires that connectivity.
And then beyond that, if you have this interconnection between cars, we can allow cars to work constantly. And if we can do that, we don't need as much parking space as we use right now. Parking space is one of the biggest wasted spaces that we have in this country, and if we can free that up, we can build lots more things, we can make cities denser, we can build more parks. We can make people's lives more fulfilling if we didn't need to waste that space on parking.
And so 5G is crucial for ensuring that that technology is safe and reliable and has that kind of revolutionary potential.
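To make that traffic-wave point concrete, here is a toy sketch in Python; the car count, the reaction time, and the network latency are all invented numbers for illustration, not measurements of any real system:

```python
# Toy model of the "traffic wave" point: when one car brakes, human drivers
# each add a reaction delay, so the slowdown compounds down the column.
# Networked cars that broadcast a brake signal can all react at once.
# Every number here is invented for illustration.

N_CARS = 20
HUMAN_REACTION_S = 1.5    # assumed per-driver reaction time, in seconds
NETWORK_LATENCY_S = 0.01  # assumed 5G-class latency for one broadcast

def human_column_brake_time(per_car_delay: float, n_cars: int) -> float:
    """Each driver reacts only to the brake lights directly ahead,
    so the delays stack linearly down the column."""
    return (n_cars - 1) * per_car_delay

def networked_column_brake_time(latency: float) -> float:
    """With a shared brake broadcast, every car reacts after one network hop."""
    return latency

print(f"human column:     {human_column_brake_time(HUMAN_REACTION_S, N_CARS):6.2f} s")
print(f"networked column: {networked_column_brake_time(NETWORK_LATENCY_S):6.2f} s")
```

The gap between tens of seconds and hundredths of a second is the whole argument for why low latency, not just bandwidth, is what makes coordinated driving plausible.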
When do you think that becomes a reality?
So the big issues with self-driving cars right now, part of it is just technological.
They still make mistakes, and we need more data to be collected from test drives.
But a lot of that stuff is policy-based.
Our infrastructure is just not optimized for these cars to be as present as they are.
We don't really understand what the best liability rules are for these cars.
And so these risks, based on already existing rules, are what hold people back.
If we can start thinking about, hey, how do we attach liability well for self-driving cars?
How do we govern their use on the roads?
How do we respond to these companies and help invest in the right infrastructure to make these more of a reality?
We can accelerate their deployment pretty quickly.
I want to go back to 5G in a second, but let me stay on cars for a second.
How far away do you think we are from AGI?
So I personally do not think that this is a possibility.
Really?
Yeah.
So when we're talking about artificial general intelligence, that's the idea that a machine can perform any task a human being can, at least at human level.
That requires an understanding of the world, an understanding of concepts of causality, an understanding of being able to abstract and reason the way we do and have conversations about purely abstract topics.
Machines can't do these things.
They really can't now.
So we can talk about it on two points then.
One is, do we think that the current techniques of AI will lead to this general intelligence?
The current major technique is something called deep learning, which uses a lot of data, processes it, comes up with all these correlations, sees patterns.
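As a minimal sketch of what that pattern-finding amounts to, here is a toy network fit to the XOR function; the architecture, seed, and settings are arbitrary illustrative choices, not any production system:

```python
# A toy version of what deep learning does: adjust weights until outputs
# correlate with patterns in example data. Nothing here "understands" XOR;
# the network just reduces prediction error. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    p = sigmoid(h @ W2 + b2)
    g = p - y                             # gradient of cross-entropy loss
    gh = (g @ W2.T) * (1 - h**2)          # backpropagate through tanh
    W2 -= 0.1 * (h.T @ g);  b2 -= 0.1 * g.sum(0)
    W1 -= 0.1 * (X.T @ gh); b1 -= 0.1 * gh.sum(0)

print(p.round(3).ravel())  # approaches [0, 1, 1, 0]: pattern fit, nothing understood
```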
If you believe that that's all the human brain does, maybe that can lead to AGI.
I firmly disagree that that's all the human brain does.
But when we think of what it means to be human and how human beings think in the world, it's more complicated than just: our brain looks at things and makes a decision.
We have bodies that understand the environment we're in.
We respond to our environments really well.
We understand the thoughts happening in other people so we can communicate with them.
This is a level of reasoning complexity that I do not think a machine will ever be able to do.
You don't think we'll even make AGI, let alone ASI.
So the superintelligence idea is about an intelligence explosion: that once you have a machine that can self-improve to human level, there's nothing stopping it from quickly going beyond, to a level where it can do anything conceivable.
But if you can't...
I deny the idea that human consciousness and understanding are so easily reduced to machine capabilities.
What are you saying cannot be replicated?
So the idea of an artificial general intelligence relies on Alan Turing's theory of computation: that anything that can be formally modeled mathematically can be processed and done by a computer.
I do not think human consciousness can be formally modeled mathematically.
I do not think that the human mind and what it means to be a reasoning agent in the world is just about processing.
I may be wrong.
These are my philosophical beliefs on the matter.
But it's clear to me that what we do and what it means to be human involves so many components and so much complexity that it can't be reduced to simply learning from data, or an agent being programmed to execute some policy decisions.
It means a lot to be human.
Going back to Aristotle, we're political animals.
We understand things socially and our minds are far more than just interacting with the world.
They're interacting with other people.
They're interacting with levels of abstractions that can't be formally understood.
And that level of reasoning, I do not think a machine could ever do.
So I tend to agree with you, which makes me fearful of people like Ray Kurzweil, because he does think that it's just a pattern.
It's just a pattern.
And I do think that you could put a pattern together that is so good that people will say, yeah, well, that's life.
And no, it's not.
It's a machine.
It's not life.
But Ray will tell you that by 2030, we'll be able to download your experiences and everything else, and you'll live forever.
And as I explained to him, no, that's not me, Ray.
That's a box.
It's a machine.
But there are those that believe that that's all we are.
Yeah.
So that, like Kurzweil's transhumanist beliefs, I think is a somewhat separate and kind of insane set of beliefs.
It relies on this philosophy that goes back to Descartes, you know, the evil demon experiment.
It's this idea that we can remove our brains and exist in a vat and that would be us.
I don't think that that's the case.
There's been quite a lot of philosophers who have made very compelling arguments about why that just doesn't make sense as a theory of human minds.
Two that jump to mind are Saul Kripke and Hilary Putnam, if anyone wants the details.
You're the only person that I've ever met that has mentioned Saul Kripke.
I've mentioned Saul Kripke to some of the smartest people I know, and they're all like, I don't know who that is.
I've never read it.
It's wild.
Yeah, well, he was.
In his book, Naming and Necessity, he makes a long argument, a very technical, mundane point about something called a posteriori necessity: that if we find out water is H2O, that must be the case.
That's what he's trying to do.
And then at the end, he's like, so my proof proves that the mind cannot be the brain.
And it's like a little line in it.
But that was kind of mind-blowing to me when I first came across it, and it's shaped a lot of my views: that the mind and the brain are not reducible to each other.
And so that kind of transhumanist view that we can upload your consciousness because we can map the neural patterns on your brain, it doesn't make sense to me.
So I think we're on the same page, because I have a problem with this: I'm also a spiritual being, and the choices that I have made in my life, the changes, the big pivot points, have been spiritual.
And if you're just taking my pattern, that's who I am now.
But just like when you're finding my pattern on Twitter, and we found it goes darker and darker and darker, you know, as an algorithm tries to recreate my voice or anybody's voice, I think the same thing would happen.
There would be a decay of that, because you wouldn't have those little things that are innately human, that are spiritual, maybe, I would describe them, in nature, that pull to be better, you know, that course changer.
I mean, how could you find that pattern?
Well, to me, that's, I think, one of the things that can't be programmed, which is that human beings have this desire.
And I think that comes from the spiritual side that you're talking about.
We have a desire to know.
We have a desire to find meaning.
We're pulled by desires to do things in life.
Now, they can be pulled to bad things.
They can be pulled to good things.
But we are pulled by desires.
Machines don't really have desires.
They don't have the inherent bias towards survival or self-improvement or anything like that.
Any desire it has is because a programmer has asked it to do something or it's embedded to do something.
It's not autonomous in what we're talking about when we talk about AI in the world today.
Autonomous doesn't mean it reasons on its own or it comes up with its own goals.
Autonomous means it can execute on human goals without us telling it what to do.
I say often, and maybe correct me if I'm wrong: don't fear the machine.
Don't fear even the algorithm.
Fear the goal that's put into it, because it will execute it perfectly.
It will go after that goal.
So what are you teaching it?
Yeah.
And I think this is the point: even if I don't believe that AGI or superintelligence are possible, a lot of those safety concerns that researchers who do believe it's possible are thinking about still matter.
One of them is something called AI alignment: how do I ensure that what the algorithm does is what I want it to do?
These are still valuable things to think about and work on, because if we're embedding these techniques into really serious infrastructure and decisions that can impact millions of lives, we want to make sure that when we ask it to do something, it does what we've actually asked it to do and doesn't misinterpret it.
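Here is a toy sketch of that alignment worry, a cleaning robot whose written-down reward diverges from the intended goal; the actions, numbers, and robot are entirely invented:

```python
# Toy illustration of the alignment problem: the optimizer maximizes the
# reward we wrote down, not the goal we had in mind. Everything here,
# the actions, the numbers, the robot, is invented for illustration.

ACTIONS = {
    # action: (visible_dust_after, room_actually_clean, effort_cost)
    "vacuum the dust":     (0, True,  5),
    "hide dust under rug": (0, False, 1),
    "do nothing":          (1, False, 0),
}

def specified_reward(visible_dust: int, effort: int) -> int:
    """What the programmer wrote: penalize visible dust and effort."""
    return -10 * visible_dust - effort

def intended_value(actually_clean: bool) -> int:
    """What the programmer meant: the room should really be clean."""
    return 10 if actually_clean else 0

best = max(ACTIONS, key=lambda a: specified_reward(ACTIONS[a][0], ACTIONS[a][2]))
print("optimizer picks:", best)                           # hide dust under rug
print("intended value: ", intended_value(ACTIONS[best][1]))  # 0 -- goal missed
```

The point of the sketch is only that a perfectly obedient optimizer can still miss the intent behind its goal, which is why the "what did we actually ask it to do" question matters.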
So the concerns that people in that community who do have these views on super intelligence have are still valid concerns.
But also we can just have a view where we're like, there are certain things which are very important to us.
We want a human in the loop to make that decision.
And that's also just a policy decision that we make.
We don't want to give AI access to the nuclear launch codes because what if it makes a mistake?
Well, what if the president makes a mistake?
But we have a little more trust that a human being isn't that irrational, right?
And so those kinds of checks will help us ensure that we put these things in places where the payoff is great and the risk is not existential.
So, our Pentagon right now is perfecting AI to the point of being able to see who the aggressor is in a crowd.
If there's a mob and they're all fighting, it can reduce the image to the aggressors, you know, and the ones being beaten, the way they're moving and cowering, they're obviously the oppressed.
And it can analyze a scene, and then you can tell it, you know, get rid of the aggressors.
That's the idea behind it.
So far, we have said there has to be someone in the loop with a kill switch.
It's the opposite of the kill switch.
Usually it stops the machine.
This one allows the machine to execute.
But that's America.
Well, I think there is no law in the United States that actually says you need to keep a human in the loop for military decisions.
No, yeah, I'm not sure.
This is what they say.
Do you think that they're not doing that?
No, so I'm just saying that we just don't have the technology to allow it to kill on its own yet.
We haven't programmed it to do that.
Should we?
But it's not a legal barrier.
I have complicated views on autonomous weapons.
To me, I think the laws of war are pretty ethical, when we have just war theory and the Geneva Convention.
We're not teaching just war theory anymore.
So we have, like, a body of military literature that teaches you ethical combat.
And I think those standards are pretty high.
The problem is, if you're in a combat situation and it's a do-or-die situation, you're not always thinking through those combat procedures.
And also, when you're in really tight-knit military platoons, you have an incentive to cover for your colleagues if they violate some rule, because that's the camaraderie you build.
So, a lot of the unethical things that happen during war are down to human error.
And I find we can have a robot internalize our very good rules pretty well.
And I think robots deployed alongside humans would really improve the accuracy of targeting.
They would reduce unintended casualties.
To what extent we want to remove humans from the battlefield and just let wars fight themselves, that's a little bit more complicated.
Yeah, because if you don't believe in AGI, if you don't believe that it can take on a, and I'm not saying, you know, go back to spiritual machines.
At some point, a machine will say, don't leave me.
Don't turn me off.
I'm lonely.
I'm this.
I'm that.
And it could believe that it's alive.
You don't believe that.
I doubt it, no.
Well, a lot of really smart people do believe that.
And if they're right, at that point, you don't want to have taught anything to kill humans.
It's pretty good reasoning to have.
And I think that kind of shows why the autonomous weapons conversation is more complex than a simple yes or a simple no.
I don't like a lot of the arguments that autonomous weapons are bad simply by virtue of not wanting to take human decision-making away.
I think they can do a lot of good.
The policies that we enact for them depend on, when we're saying autonomous, what degree of authority we mean, specifically in a very narrow, targeted situation, because I don't want a robot making the decisions of a general, but maybe a robot making the decisions of a soldier in a combat situation isn't as bad.
Maybe the drone strike where we are saying, here is a terrorist encampment, here are all the details about it.
Once you've found it and you know that you're not violating all these other rules, let the drone fire.
That's different than teaching a system that mans all the robots for the military how to kill humans.
Like, that Skynet scenario is very different from these targeted scenarios.
So, Elon Musk is concerned.
I mean, I saw a speech where he said the only thing that gives him hope is thinking about getting off to Mars and getting off this planet.
You have Bill Gates, Stephen Hawking.
Stephen Hawking, I think, was grossly misunderstood when he said humans will be extinct by 2025.
He didn't mean that humans are all going to die.
He just meant that we are going to be upgraded and merge.
Do you believe that?
No, I understand the risks that a lot of these people are fearing.
I do not believe that human beings are going to be upgraded or merged with a machine.
What would you call the experiments being done now with robotics and bionics, where you think about moving your arm and that new arm moves?
So the fact that we can do certain things does not mean that we will.
I was pretty happy to see that when a rogue scientist in China edited two embryos with CRISPR to try to remove the gene that would make them able to contract HIV, even though that was totally unnecessary, the international community condemned that man, saying that that is not something that we think we can do.
We should not edit humans on the germline.
These kinds of ethical and policy restrictions on what we're allowed to do with technologies give us hope that we won't go down the path of human enhancement.
And I don't want us to go down a human enhancement path in any way, because you can frame it in the sense of human choice.
I'm just making my child slightly better, or I'm giving myself a cooler arm.
The second someone does it, they're much better than everyone else, so everyone's got to do it.
Right.
And so that's such a slippery slope that I don't want that to happen at all.
That was Ray Kurzweil's point.
And it would become so common that it would be so cheap that everybody would do it.
I mean, who wouldn't want to do it?
Well, I wouldn't want to do it.
I've seen arguments by philosophers who say, once we can genetically upgrade your children, it's immoral for you not to genetically upgrade your children.
You'll be a bad parent if you don't genetically upgrade them.
Yeah.
Because everybody will be so far ahead.
You just don't think that's going to happen.
I think, here's my faith in humanity: if there's any decency among lawmakers and the like, they will not allow that to happen.
And the ethical community will understand the limits: you can use gene editing on animals.
You can use it to save people, as a last-case scenario; we can help a lot of people live without life-threatening conditions.
But to do designer babies and the like, that's where we would draw a line.
Let me ask one more question on this.
Right now, I think it's Iceland, in Reykjavik, they say they have eliminated Down syndrome, and that's just because they can test for it and kill them.
As a father of a child with special needs, I'm really against getting rid of cerebral palsy or Down syndrome.
Where do you think we're headed on that one?
I think that's one of the reasons why, when I said intervention on a child to remove a life-threatening condition, it needs to be a last resort.
Because if we did that for any child with any disease whatsoever, removing it even if there's good treatment available, what occurs as a result is no one's going to invest in helping the people who are already living with that condition.
And I think that's worrisome, both in the fact that, okay, you're treating the people who are living with a condition worse off medically, but more on an ethical level, where you see people who are diseased as less human.
This would change our perception of what gives someone dignity or worth.
And I don't think that anyone just because they're disabled has less dignity.
And so if we have that...
Look at FDR: would he have been the same man if he didn't have polio?
Probably not.
No.
Our hardships, even if you go to Teddy Roosevelt, his hardships as a young person made him who he was when he grew up.
And I think if we have this view that, oh, your child is going to be sick, let's completely change your child's genetic makeup to make him healthy so that your child lives a higher-quality life, it deprives them of that feeling of, I would go back to dignity, because our hardships and our struggles make us more dignified.
I think a lot of people don't have the ethical view that I have.
They want us to just live happy lives without having to struggle for it.
I don't know if you could ever be your highest self.
Yes, I think that would pacify a lot of people.
It would take away from a lot of the triumph of the spirit.
Absolutely.
And it goes back to what you were saying earlier about the Brave New World thing.
If we just wanted to live happy lives without struggling, we could do that.
It just wouldn't be as satisfying, I think.
Let's talk about medicine a little deeper.
What do you think is coming?
I saw a report from Goldman Sachs, and I don't fault Goldman Sachs for saying this.
This is their job.
Their job is to advise people on whether something is a good investment or not.
And they were looking at investments in medicine that actually wipes diseases out.
And they say it's a really good investment for the first five years.
And then, as the disease goes away, the return on the investment is horrible.
And so they were saying, as we start to advance, should we recommend that people invest in these things unless they're just do-gooders?
Okay.
We are going to start to have these kinds of massive ethical problems, are we not?
Or questions?
Well, this is the reason why I think most of the world is really happy that we're not relying on banks to fund all medical research.
But for them, that might change their business model.
And I'm sure that the person who said that got some reprimand from his higher-ups for letting people know it.
But yeah, even if they think that it's a bad investment for them, that doesn't mean we as a society think it's a bad investment, and we'll figure out investment vehicles to fund these types of medicine.
And you see a lot of people coming up with ways to do drug discovery and medical treatment that could potentially figure out cures for things, but the process makes money.
So take, for example, the application of artificial intelligence to drug discovery.
When we're doing medical trials and the like, we produce vast amounts of data.
The medical literature is huge.
No human being could ever dream to read it.
And so there's a lot of failed medications in history, which probably work really well, and we just don't know it.
So if we apply these statistical algorithms to go through all these papers a human can't read, we can find out: hey, here's a new cocktail to try, and it'll work for this person.
If you cure that person, you're not charging that person anymore, but a drug company might want to pay this company to help them save on their costs of R&D, right?
And it changes the dynamics of how they're selling products, and everyone's kind of benefiting.
And so, that's still a technology leading to better cures, but it's not this finance-driven way with the old business model of how we're selling drugs.
And figuring out new business models, I think, is a more crucial question than how do we make it appealing for Goldman Sachs to invest in it.
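As a minimal sketch of that literature-mining idea, here is a toy co-occurrence scan; the abstracts, term lists, and threshold are stand-ins, not real data or anyone's actual method:

```python
# A minimal sketch of the literature-mining idea: scan more abstracts than any
# human could read and surface drug/condition pairs that co-occur often enough
# to be worth a trial. The corpus, term lists, and threshold are stand-ins.
from collections import Counter
from itertools import product

abstracts = [
    "aspirin reduced inflammation markers in cardiac patients",
    "trial of aspirin for colorectal risk showed reduced incidence",
    "long-term aspirin use associated with lower colorectal mortality",
    "metformin cohort showed lower colorectal incidence than controls",
    "metformin remains standard first-line therapy for diabetes",
]
drugs = ["aspirin", "metformin"]
conditions = ["cardiac", "colorectal", "diabetes"]

pair_counts = Counter(
    (drug, cond)
    for text in abstracts
    for drug, cond in product(drugs, conditions)
    if drug in text and cond in text
)

# Flag pairs mentioned together repeatedly as candidates worth a targeted trial.
for (drug, cond), n in pair_counts.most_common():
    if n >= 2:
        print(f"candidate: {drug} ~ {cond} (co-mentions: {n})")
```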
What do you think is most likely on the horizon in medicine?
So personalized medicine, I think, is probably going to be the bigger breakthrough in the coming years.
And the reason why is we usually go through animal testing stages, then human trial stages, and then the product comes to market.
Animal testing is more or less useless, because the relationship between what works in a rat and what works in a human is more or less random. It'll tell you if something is harmful; it doesn't tell you if it works.
And so we waste a lot of money on that.
Alexander Fleming, for example, developed penicillin.
He said, if I had to test it on animals to get it to work, it would have never come to market, because it just didn't work on the initial animal tests at all.
But if we now look at the new technologies coming out: we've decoded the human genome much better, we can understand you and your DNA much better, and we can understand the history of all the drug cocktails we've ever made much better. We can try to do some matching and say, hey, here are some trials we'll run on you as an individual, tailored to your specific medical needs. And that would really revolutionize care, because the way doctors prescribe things right now is based on averages.
And so, you as an individual meet most of the symptoms for this disease, so I think you have this.
There's a probability that you could have a rare condition, but most people don't; those are kind of off to the side.
Most people do have the average condition.
But if we could get it down to that individual level, think of how many lives we could save as a result.
So that brings me to, again, one of the massive changes that are coming: insurance.
People don't really understand insurance, I think, or they don't want to, because they see it as a big cash cow.
You know, I've got my car. Well, I'm going to get that check, and maybe I'll fix my car a little less and I'll take this money.
And they don't understand that insurance is not a guaranteed thing.
Insurance works because it's a gamble.
You know, the insurance company is saying, if I bet on enough people that they're going to be well, only a few of them are going to be sick.
But with the collection of data now, and with DNA testing, et cetera, et cetera, the goal of all of this data is certainty, you know, that we can get as close to certain as we can.
How would insurance work?
So, okay, when we're talking about insurance, there are a lot of reasons why we shouldn't allow these kinds of automated decision-making in insurance, using vast quantities of data, because it'll take all this data from a ton of people and figure out connections to predict whether someone will pay it back.
We don't know what it actually picked out.
A lot of that's kind of inscrutable.
And I don't think we should have explainability requirements like they have in the EU, simply because we know the stuff that's better at prediction is the stuff that's harder to understand.
But when it comes to insurance, explainability is more important than prediction.
Insurance is not simply a prediction thing.
People don't want to know that they got denied because the computer said it.
They want to know why I got denied.
And so things that are good at prediction work in a lot of domains.
They work in medicine, for example.
If I have your radiology test, I simply want to look at the image and say, is that cancer or is that not cancer?
I don't need to explain to you why and you don't care why.
I can use an algorithm, and your life is better.
In insurance, it's not about prediction.
Prediction is a part of it, but it's about you understanding what you're getting into and that relationship with the customer.
And we shouldn't try to reduce it to a prediction decision.
And that's a reason why we need to have legal rules on what insurance is allowed to do.
And we might have to think about different models for insurance that incentivize care better.
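A sketch of that prediction-versus-explanation contrast, with made-up features: a transparent rule can state why an applicant was denied, while a black-box score cannot, even if it happens to predict a little better:

```python
# Sketch of the prediction-versus-explanation contrast, with invented features.
# A transparent rule can tell an applicant why they were denied; a black-box
# score cannot, even when it happens to predict a little better.

applicant = {"claims_last_5y": 4, "years_insured": 2, "blackbox_risk_score": 0.91}

def transparent_decision(a: dict):
    """Rule-based underwriting: every denial carries a stated reason."""
    reasons = []
    if a["claims_last_5y"] > 3:
        reasons.append("more than 3 claims in the last 5 years")
    if a["years_insured"] < 1:
        reasons.append("less than 1 year of coverage history")
    return ("denied", reasons) if reasons else ("approved", [])

def blackbox_decision(a: dict):
    """Opaque model: a score over a threshold, with no articulable reason."""
    verdict = "denied" if a["blackbox_risk_score"] > 0.8 else "approved"
    return (verdict, ["the model's score was high"])  # not a real explanation

print(transparent_decision(applicant))
print(blackbox_decision(applicant))
```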
I mean, as I'm listening to you, I keep thinking, you know, I disagree with you on some things, but I keep thinking, yes, yes, that's the conversation that we should be having.
Tell me the person in Washington who is having any of these conversations.
Well, so, yeah, with insurance, it's a complicated thing, because insurance throughout most of history was done on a local community level, and that makes a lot of sense.
If we all just pool our money, it's a community, and whoever gets sick, we pay for it.
Everyone makes sure everyone else is healthy, because no one wants to pay out.
Those kinds of models, where you're a shareholder as a purchaser of the insurance, I think are much better for the kinds of data that we have now.
They would incentivize everyone to be healthier and wiser in their decisions, and people could really understand better how to make those wise decisions.
Those are not the kinds of insurance models we have.
They're very centralized.
They're by big companies.
And we're talking about even more centralized.
Yeah.
And so there should be a political conversation: how do we regulate insurance differently to encourage people to be more knowledgeable about their plans and to incentivize better behavior?
Whether this is something that anyone in Washington is discussing, I don't know.
I doubt it.
But are you seeing anybody that's having, well, there's one candidate, he's a Democrat, who's talking about basic minimum income.
I am dead set against basic minimum income, but I think people have to have the conversation, the mental exercise, because there are people that are going to be saying 30% unemployment.
Now, whether that happens or not, I don't know, but there are going to be experts that will say that's coming, and a lot of people may be unemployed.
If we hit a massive recession, you're going to hear people talk about basic minimum income.
We're not even having that conversation.
Yeah, I've spoken to Andrew Yang before, and what I liked is, well, I too disagree with universal basic income proposals, mainly because no one really proposes them as a way to replace our social safety systems.
It's kind of an addition on top, which is very unsustainable.
But I agree with you.
I do like the fact that he's one of the few people having that conversation.
Correct.
And we do need to be more forward-thinking.
And I've commented on this before: we actually see more of this daring thinking on the left than on the right, which is sad.
On the right, it's still old thinking.
In a way, I think a lot of the new thinking is fundamentally wrong, but it is new in the sense that it's trying to grapple with the new challenges.
You see this resurgent antitrust movement on the left.
And okay, you can say it's old because antitrust is an old measure, but it's new in the sense that it's saying, hey, antitrust has importance now because of digital concerns, the way digital markets work.
It's winner-take-most markets.
So it's new thinking in the sense that we are thinking about how to deal with new problems.
I see very little discussion on the right of how do we grapple with the digital economy.
And these are important conversations, and I think that we need models to understand how to best deal with the digital world in a way that makes people better off.
I've seen a few people on the right.
The Information Technology and Innovation Foundation, ITIF, published a book recently about why big business is good, trying to dismantle this belief that all the dynamism in an economy comes from small business.
And it's a really interesting approach, from a right-wing view, that a country needs an industrial strategy to leverage technology's benefits.
And I think that's a classically conservative view as well: that we need our country to be able to understand what its resources are and to be nationally competitive on the global stage.
But as a chorus of conservative voices, they're in a minority.
Let's talk about the digital economy.
What are we going to be impacted with first in the digital economy?
What is going to be the biggest, the first thing that comes to us where we go, oh, we should have talked about that?
I think people are already grappling with it: the biggest change of the digital economy is the complete change in how media works.
Social media is very different from news media, like television media, which is different from print media.
And I think people are realizing this.
People started to realize this after the 2016 election, I think.
That's when they first realized that the game is different now.
And we still haven't fully understood what it is that social media is.
I don't think we're even having that conversation.
Tell me the deep conversations, the philosophical conversations, that you have heard that penetrate.
So I think the best, well, these conversations were actually happening from the dawn of the internet.
They just kind of lost their prominence now.
I think it actually goes back to even before the internet.
Marshall McLuhan, who is kind of the father of media theory, wrote a book called The Gutenberg Galaxy.
And in it, he said that before the printing press, human beings were oral.
We told stories.
And who was important as a result? Politicians, military leaders, religious figures.
Why?
Because they're the best at communicating, grabbing your attention on what matters.
Our society was organized along this hierarchical kind of understanding of who's on top, what's your place.
That's how oral cultures function.
Then when you get to printing and you move to a visual culture and everyone can read and you completely change what you're listening to, you have this explosion in arts and sciences.
The person who's the best orator is no longer the most famous.
The person who tells the best story is now the most famous.
And that's a scientist or a writer.
And he analyzed that this led to individualism.
This led to people demanding democracy because they felt empowered as individuals.
He said, when you move to the cable companies, you're going back to an oral culture.
You're going back to, I have my trusted source for things.
Now, and what surprises me is this was even before we had social media, he said, when you have the global village, that was his term, when you have these people that are so interconnected and you break down the barriers of place and class, there's going to be a return to identity; that's what's going to matter.
I'm going to need my group for me to parse this vast amount of information and to make it digestible in a way that I can understand, rather than having to read everything, because that's impossible, rather than having to go through all the various voices, which is impossible.
I need this filter and this kind of shared group identity that creates this reputation.
I think that's exactly what we're seeing.
Yeah.
We're tribes.
We are literally tribes.
You find a group of people that generally you agree with, that see the world the same way.
And we now are tending to believe that each tribe is, like, they're coming to get us.
But I mean, how else would you function?
But yeah, and so the question then is, I think, again, the approach of the people who want to go back is like, tribalism is bad, let's go back to thinking for yourself.
It's very hard to read everything on the internet.
It's really hard to know what's trustworthy, right?
And so you can't go back to this naive view.
Read the books.
You'll learn for yourself.
So how do you do it?
I think we need to find a way for tribes to interact peaceably, for us to understand, here's my view of the world, here's your view of the world.
Let's negotiate as groups rather than as individuals.
And
that's just a new way that an interconnected society has to live because there's too much information out there.
And
we're splitting ourselves into so many tribes.
And seemingly all of them are saying the same thing, my way or the highway.
Well, I think there are good ways of getting by. We don't want tribes to be collectivist in the sense that my entire identity is my tribe, but we can't go back to individualism, where I have no shared tribal group.
We need something that's more fluid, that I understand I belong to this category of groups.
I see myself as this way with these people and this way with these people.
And us as a group, we interact with this group in this way.
And you get this web of interrelations.
And if we see our identities as more fluid and come up with mechanisms that respect both individuals and groups...
I think the best political theory in history to think about this with is, I don't know if you know much of G.K. Chesterton, but in Catholic social teaching there's a term called subsidiarity. You are an individual, but our most basic unit of decision-making is the household, and that's a tribe in a way; it's a collective body. How we should think of society is that decisions about most things should be made at the level of the household. Decisions that can't be made at the household are made at the local community level, and then decisions that can't be made there move up. It builds bottom-up, and then it moves back top-down.
They feed in both directions.
We don't have a political organization that has this diversity of decision-making.
Well, we do.
We just haven't used it in over 100 years.
We haven't used it for quite some time.
But
that was the premise.
I don't think the founders have ever looked more genius than right now.
I mean,
everybody says, oh, they couldn't see this coming.
Well, no, they couldn't have seen this coming.
They couldn't have.
But doesn't that make it more genius?
Because as we are becoming more tribal,
as we are
living in pockets of our own kind of tribes,
we don't have to separate.
We just have to agree to live with one another and not rule over one another.
And that was the point of America.
And it's never been more apparent how far ahead of their time they were than it is right now.
Well, yeah, you go back to what was the vibrancy of early America.
Most of the stuff was done by these various civil society organizations,
these local groups interacting with each other.
There is a need now more than there was then for a central government to do things, but it's about delineating correct responsibilities.
There are things that only a central government can do.
There are a lot of things that local governments can do, and things outside of government can do, and we need to talk about getting those responsibilities on track.
And I think, at least in recent memory, the conversation among conservatives was this very simplistic: government is bad, private companies are good, and
it's more complicated than that.
Not all private companies are good.
The government's not always bad.
You know, I'm a big fan of Winston Churchill, and
I've read so much on him, and I just love him, love him, love him.
Then I decided to read about Winston Churchill in India from the Indian perspective.
He's not that big, fat, lovable guy that everybody thinks, you know what I mean?
He's a monster there.
And I think what we've done is we are trying to put people into boxes where they don't fit.
I struggled for a while.
So is he a good guy or is he a bad guy?
Then it dawned on me.
He's both.
He's both. And we have to understand: government is not all bad; it's bad and good.
People, companies: not all bad; bad and good.
And we just have to ask which way it's growing. Is it growing more dark, or is it growing more light?
and I think
Did you ever read Stephen, no, Carl Sagan's book, The Demon-Haunted World?
I have not.
No.
He talks about there will come a time, this is in the mid-90s, there'll come a time when
things are going to be so far beyond the average person's understanding with technology that it'll almost become like magic.
And if we're not careful, those people will be the new masters, you know, they'll be the new high priests that they can make this magic happen.
And
I think that's what I fear somewhat is these giant corporations.
I've always been for corporations, but now I'm starting to think,
you know, these corporations are so powerful.
They're spending so much money in Washington.
They're getting all kinds of special deals and special breaks, and they're accumulating so much power.
I could see for the first time in my life, Blade Runner.
I've never thought of that.
I've always looked at that and gone, that's ridiculous.
Which is set in 2019.
But we can even go back before Sagan. Eisenhower, in his farewell address, not only talked about the military-industrial complex, but the scientific-technological elite.
And that, to me, is the policy question, because it's about whether we've had this tendency to defer to experts for so long that it's eroded democracy.
And how do we
put this complex stuff back in the power of the public?
And I've seen some very interesting proposals about it.
There's an economist at Microsoft Research, his name is Glen Weyl, and he wrote a book last year called Radical Markets.
And it comes up with these fascinating mechanisms.
And what these mechanisms do is keep the technical decision-making in the hands of those executing decisions, but allow more democratic control over how people understand things and where their voice is represented.
And I'll give you a practical example.
He calls for something called data as labor.
And I'm a big proponent of this philosophy.
And the reason why is that when we look at these large companies, which have tons of data and make a lot of money, legally we really locate the value in the physical assets that they have. The data is an input, and the output is the physical things. And so they own all the servers. And so when you're operating on their site, on top of their server, even if you're creating tons of value, you have to accept the agreement they've given you, because they own the infrastructure.
If we start treating data as an input of value, you increase the bargaining power of people working on these sites; they can ask for more money, and you take away a pool of that income these companies have. Does that mean that everyone's going to get a lot more money?
No.
If you took all of Amazon's profit, down to where they have no profit left, and distributed it among Amazon's users, they'd get like 30 bucks each.
But when you reduce the level of profitability Amazon has itself and you
diffuse that bargaining power and that little bit of money to each person, you have a far more competitive landscape on top of their site.
You generate a lot more businesses on top of their site.
And you give a lot of those users a lot more power and an interest in what's happening.
And that generates not only a lot more economic activity, but it allows people to have the incentive to care about how to govern their interactions online.
It gives them a voice online.
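[As a rough sanity check on the "30 bucks each" figure above, here is a back-of-the-envelope sketch in Python. The inputs, roughly $10 billion in annual Amazon profit and roughly 300 million active customer accounts, are ballpark public figures assumed for illustration, not numbers from this conversation.]

```python
# Back-of-the-envelope check of the "about 30 bucks each" claim.
# Both inputs are rough, assumed ballpark figures (circa 2018).
amazon_annual_profit_usd = 10_000_000_000  # ~$10B net income, approximate
amazon_active_users = 300_000_000          # ~300M active customer accounts, approximate

per_user_share = amazon_annual_profit_usd / amazon_active_users
print(f"~${per_user_share:.0f} per user per year")  # ~$33, in line with "30 bucks each"
```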
Let me go back to 5G here for just a second and then we'll move on.
While we're here with government and corporations,
this week they were talking about
the government
just doing 5G, having it a government project.
I've talked to people about AI and
should we have a Manhattan project on AI?
If it can be done,
you know, we have to have it first because I don't want China having it first or Russia having it first.
Should the government be doing 5G?
So the Trump administration's approach to 5G is kind of, it's a little all over the place.
I think they understand it.
Well,
I don't know what their goals are.
And I'm going to put it like that.
Because
a year ago, you had these kinds of restrictions on what Chinese companies can sell in terms of 5G infrastructure, and on whether people in national security, like those with clearance, can use Huawei products.
All right, you say, I have national security concerns.
I don't want to use foreign companies.
Makes sense.
But now, even this week, he's like, I want to ban Huawei from working here.
And that's now really extreme.
It's going beyond that.
So in Canada, the CFO of Huawei
is facing extradition on alleged sanctions violations.
And he announces, I want to use this as a bargaining chip in the trade war.
Now that,
that's now politicizing what should be a national security conversation.
And when you politicize it like that, the question is: what do you want?
When you're banning this company, is it because you have national security concerns?
Is it because you're worried that we're behind on these technologies?
If you're worried that we're behind, help domestic companies compete, don't punish a foreign competitor.
Or is it "I want to punish China," in which case you'll harm Americans to punish China without good cause?
And so I'm not clear what the goals are there.
And so I can't say whether it makes a lot of sense what they're doing, but I do think it's very erratic.
And I think that this 5G announcement is part of that erratic approach, in which they say, hey, we don't want these Chinese companies having the lead.
We don't really want to do anything to make it more profitable for domestic companies to invest.
Let's just say the government will do it.
I doubt it's a well-thought-out plan.
I doubt they actually have the funding or the mechanics done.
Even with the American AI Initiative, the executive order Trump announced for AI, there was very little, by the way, about where this money is going to come from.
So when it comes to the executive's approach to tech policy, I don't think that there's that vision or understanding of what we want.
Silicon Valley.
Silicon Valley and the government, I don't know.
I mean, it's like a clown car every time somebody goes to Washington and the clowns get up and they start questioning the guys in Silicon Valley.
I just don't have any faith that they have any idea what they're even talking about.
And they keep going back to old solutions about we have to regulate you.
I keep hearing about regulation.
We need to make sure that voices are heard.
I think that's the worst possible idea.
I think there's a
misunderstanding of a platform and a publisher.
And you can pick one, but you can't be both.
I have no problem with Facebook saying, yeah, we're changing the algorithm.
We're a private company.
We're changing the algorithm any way we want.
Okay.
But you should not have the protection of a platform.
So if we...
You brought up several points there.
You brought up both the technical illiteracy in Congress and the decisions being made by social media platforms.
When it comes to the technical literacy, I think
there is a need for more competency.
There is a model the United States used to have, and it got defunded in the 90s, called the Office of Technology Assessment.
And that used to provide reports for staffers, who would read them and then tell congressmen what to say when they go into hearings.
That doesn't really exist and the research capacities of Congress have basically been gutted for a while.
And that's why they seem so embarrassing when they go into these hearings.
And so, that's definitely one point.
Though I've been assured that behind closed doors, they are more respectable than they are in these hearings.
They do want to get a good sound bite in, obviously.
When it comes to the social media platforms thing,
so what protects these social media platforms is something called Section 230 of the Communications Decency Act, that a platform is not liable for the content posted by its users.
Which was there for porn and copyright.
Yes, mostly for copyright, I think, but probably porn as well.
But
that has allowed the internet to become what it is today.
Because think of how many small sites would just not be able to fight off the lawsuits they would get.
Right.
If we remove that liability, you're not going to see Facebook become less censorious.
What you're going to see is them removing most content off their site because the task of content moderation is unbelievably complex and nobody has figured out how to do it efficiently.
And these people are learning.
They're making tons of mistakes while they do it, but they're responding to the fact that
they have so many diverse interests.
If I run my own, let's say I run a blog, right?
And
I get some users saying, we don't like your opinion.
I'll say, I don't care.
This is my blog.
Facebook has shareholders, it has its users; it has all these people telling it, no, no, no, you have to do this for me. And it's so hard for them to execute that effectively. If they're held liable for the content on top of it, you're going to see the amount of usage of Facebook shrink to like 10% of what it is today. And so I do not think treating them like a publisher is the way to go.
What we need to see is how we incentivize new efforts in content moderation. Do we maybe need principles or guidelines on content moderation that everyone should operate in, and then they can tweak within this framework for their own sites? Because obviously, we shouldn't have all of them moderating content the same way.
We want them to compete and come up with better rules.
But whether we should nudge them in a certain direction, maybe.
But treating them like a publisher is probably the worst approach I can think of.
Really?
Because it would decimate online activity.
Yeah,
except it's the rules that I have to abide by.
It's the rules that everybody else has to abide by.
There's the difference between how the New York Times operates and how Facebook operates.
Because the New York Times, you submit them an op-ed or something.
They have an editor review it and say, go ahead.
Facebook never gives you the initial go-ahead.
Right.
But what I'm asking for is, though, if you're a platform, what you're saying is, I'm just an auditorium.
I rent to anybody and everybody.
So unless it's illegal,
I've got to rent this to anybody.
You may not like who was in here the night before, but I'm an open auditorium.
I'm a platform for all.
I think
this is a misinterpretation of platform.
A platform doesn't mean it's allowing all voices or that it's showing them all equal regard.
All it's saying is it's not making a decision on
whether the content is allowed from the moment you post it.
They're not exercising editorial control over types of content.
But if their advertisers say, we don't want content with nudity on it, because we're not going to use your site anymore, then as a platform they can still decide: all right, we want a platform where people can share their views and the like, but we don't want this type of content on it, because that's harmful for everyone else on the platform.
Correct.
Okay.
So what's the solution?
Well, the solution, in my view, is simply to incentivize more competition online.
How?
Well, the data-as-labor proposal I mentioned earlier, allowing more bargaining rights for the users of these sites with the sites themselves, will allow not only more democracy in their governance, but will allow people to make small offshoots.
The problem with what happens right now with competitor sites is they always go to the worst. Whenever you have content moderation saying, we won't allow hate speech on our site, the type of site that comes out in response is, we'll allow anything. There are maybe three libertarians there, and there are five thousand witches who go to that site, right? So that's not the model that usually works.
You have, however, had successful switches between sites, from MySpace to Facebook, and usually that happens because the site has made decisions that don't just anger the small few. They anger the majority of users on the site, and they no longer like it.
And for some people, they think Facebook's going down that path.
But if we allow these sites to make mistakes, but also give their users the tools to have more bargaining power with them,
like change the way we treat data ownership,
you'd probably have a far more competitive space because these sites would kind of have to listen to people more, they would change a lot more, and then you would have more churn in who's on top.
Are you concerned about voices being snuffed out?
I am not.
I do not think that a lot of what is called censorship online is actually censorship.
I think it's just a viable business decision for Facebook or Twitter to not allow certain people on.
And the internet, more or less, is still a very open place where you can start up a website, you can post it, you can buy marketing tools.
You'll be excluded from a large platform, sure.
But that doesn't mean you're silenced.
I don't think we should have this expectation that I can rely on Facebook or YouTube to provide me my audience.
Because
they don't have to give me their service.
I agree.
Google is in a different place.
They change the algorithm and exclude you because they don't want to show those results.
They tinker with the results of the search engine.
That, I think, is different.
So the algorithmic changes,
I think the most famous claim was that, you know, there are only two conservative sites that ever show up on Google News, which are generally
Fox News and the Wall Street Journal.
And the reason why is if you just look at the page view rankings of these sites, they're the only two large conservative sites.
The vast majority of conservative news media is small and fractured and competing with each other, whereas left-wing media tends to be more centralized, large stations.
And so the algorithm favors that.
I don't know if that's...
I would suspect that that's not a politically motivated decision, but I don't know if that's the case.
I'll give you that
caution in my statement.
But it makes sense for an algorithm based on your size and your prevalence to kind of demote conservative-leaning sites.
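[A minimal sketch of that size-and-prevalence point: a ranker that sorts purely by per-site traffic surfaces a consolidated bloc over a fragmented one, even when the fragmented bloc's combined audience is comparable. The site names and view counts below are invented for illustration; this is not Google's actual algorithm.]

```python
# Toy illustration: ranking purely by per-site traffic demotes a fragmented bloc.
monthly_views_millions = {
    "Large Outlet A": 90, "Large Outlet B": 80,  # centralized bloc
    "Small Site 1": 30, "Small Site 2": 25,      # fractured bloc:
    "Small Site 3": 25, "Small Site 4": 20,      # combined 100M views
}
top_two = sorted(monthly_views_millions, key=monthly_views_millions.get, reverse=True)[:2]
print(top_two)  # ['Large Outlet A', 'Large Outlet B']
# No single fractured site makes the cut, even though together they rival either large outlet.
```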
Is there such a thing as privacy anymore?
Privacy, I think, is a topic where people have very strong views because they think it's older than it really is.
The right to privacy is about 100 years old.
The right to privacy came out because of the camera.
People were like, hey, I'm with my mistress in a park, and you can now take a photo of me with her. I don't want that to be allowed. So the right to privacy blows up.
I think a lot of the concerns the media has over privacy aren't the concerns most people have over privacy. A lot of the hype over "we don't have enough privacy" comes from people in positions, like in government or in media, who have a lot to hide, and they know that their position depends on privacy. Whereas the average person is willing to trade most of their information for that improvement in their quality of life, because we don't have much to hide.
I would agree with you, except that when it's total information in the hands of nefarious individuals, they can make you look any way they want, if they have control of all your information and you don't have control of your information.
Right. And I think this is where, like, I think there's far more concern when you have a guy behind an NSA computer monitoring you than there is when you have these aggregate pools of data at Google and Facebook. But I would love you to have more power over that data. And this is why I think we do need to have conversations on what the rights over data are. How do we classify data? Is it a type of property?
Did you produce your data?
So, is it
your labor?
These are conversations we need to have
because people need to feel that they can have a greater stake in how their data is used.
This is not the same as saying, Let's just reduce data collection for privacy reasons, like they're doing in Europe, because I don't think that benefits a lot of people.
Most people don't know what a deep fake is.
I believe by 2020, by the time the election is over, everybody's going to know what a deep fake is.
So deepfakes, it's a tool based on a pretty recent kind of artificial intelligence called a GAN, a generative adversarial network, that's able to develop new types of content.
It can create new data out of old data.
And
a lot of the applications that you see right now are in the video game zone: you can create more realistic characters, like higher-resolution images.
But there are a lot of positive uses as well, because it could be applied to medicine, detecting weird anomalies, lots of security applications as well.
But you can also use it to
make it look like you said something you didn't say, or put you in a compromising video that you never participated in.
And it can look pretty realistic.
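[A minimal sketch of the generative adversarial network idea described above, in PyTorch. This toy version learns to generate one-dimensional samples resembling a Gaussian; the architecture and numbers are illustrative assumptions. Real deepfake models apply the same adversarial loop to images and video at vastly larger scale.]

```python
# Minimal GAN: a generator learns to "create new data out of old data"
# by fooling a discriminator trained to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: random noise in, fake sample out
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: sample in, probability of "real" out
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: Gaussian around 4.0
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: push real toward 1, fake toward 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: make fakes that the discriminator scores as real
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# After training, generated samples drift toward the real distribution's mean (~4.0)
print("generated mean:", G(torch.randn(1000, latent_dim)).mean().item())
```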
The really interesting thing about deepfakes is that the second the first few came out on the internet and people realized how horrible this was, everybody responded.
And I know you were complaining that these companies have this censorious capacity, but you had this near-complete shutdown, where we're not hosting these types of videos and we're not hosting you teaching people how to make them, across a lot of sites.
I know DARPA is developing algorithms so that when a video is ingested, it'll warn, deepfake. And they want Facebook and YouTube and everybody else to run that algorithm.
Yeah. And it's not even just DARPA. There's lots of work coming out recently where it's getting much better to detect these, even before people have started making them en masse.
There was a brief wave of these being made to put celebrities in pornographic videos, but that got banned so quickly that it's really decreased.
And a lot of the ones that slip through the cracks, if we have these detection services that can prove it, are covered under existing laws and cyber laws about harassment, identity theft, libel.
And so I like this idea that we're ramping up the ability to enforce existing laws by saying, hey, we have the evidence that someone did this, and it needs to be taken down, and we need to compensate you as a victim.
And we've responded to that really fast.
And that makes me optimistic that we can respond to some of the more extreme challenges as we're going in the future.
The one interesting thing that comes along with deep fakes, though, and
it hasn't been done yet, but I feel someone will do this as an experiment one day.
You can fake an entire news event now.
You can generate people that don't exist.
You can generate landscapes that don't exist.
You can generate audio that no one's actually said and try to come up with a scenario.
And I think that would be a warning shot if someone did this to try to like game Twitter and convince everyone that this is real.
And it would show that we need to have some good regulatory approaches on identification.
It is War of the Worlds, 1938.
The very next day, Congress was talking about, what do we do about radio, this powerful medium that's all over the country that could spread panic?
So it's in a way history repeating itself.
And so, yeah, the remedies you've had historically are just: be more transparent about what you are, label things well.
And we have the tools to allow us to do this.
It's the policy that's behind.
It's the regulatory approach we're taking that's behind.
It's been great to talk to you.
Great to talk to you as well, Glenn.
Just a reminder, I'd love you to rate and subscribe to the podcast and pass this on to a friend so it can be discovered by other people.