"MIT Professor Max Tegmark: LIVE in Boston"

Find out more about mechanical cats doing Cats on Broadway… with our esteemed surprise guest: physicist, cosmologist, and machine-learning researcher Max Tegmark, LIVE in Boston.

(Recorded on February 04, 2022)


Runtime: 47m

Transcript

Speaker 1 AutoTrader is powered by AutoIntelligence. Their tools sync with your exact budget and preferences to tailor the entire car shopping experience to you.

Speaker 1 Want a pink mid-size SUV with 22-inch rims and a V8? They got it. Nothing's too specific.

Speaker 1 Find your dream car at the right price in no time because AutoTrader, powered by AutoIntelligence, puts you in control of the whole experience from search to close.

Speaker 1 It's the totally you way to buy a car. Visit autotrader.com to find your perfect ride.

Speaker 1 Nobody wants to spend the holiday season clicking from one site to the next to get their hands on the best brands. But who knew Walmart has the top brands we all love?

Speaker 1 Like the big names that your friends and family actually want and all in one place. Nespresso, Nintendo, Apple, you name it.
Get the brands everyone loves at prices you'll love at Walmart.

Speaker 1 Who knew? Go to walmart.com or download the app to get all your gifts this season.

Speaker 2 This is officially a cold open, I guess. This is a cold open.
Right? This is like a...

Speaker 3 Like on the podcast, we just kind of talk before we say what the name of it is.

Speaker 2 We're going to do like an intro and then we're going to be like

Speaker 2 professional. Oh, yeah.
And like, oh,

Speaker 2 we usually

Speaker 2 tap into this.

Speaker 2 That's okay, just say the only thing that you know how to say.

Speaker 2 Welcome to Smart Bless. Smart Black.

Speaker 2 Smart

Speaker 2 Bless.

Speaker 2 Smart

Speaker 2 bless.

Speaker 2 We're so happy to be in Boston. We are so happy

Speaker 2 to be here tonight.

Speaker 2 And you guys, you guys rolled out your nicest weather for us. Yeah.

Speaker 2 Felt like being home in Canada almost.

Speaker 3 But thank you, thank you, thank you not only for listening to our garbage, but coming out and looking at our garbage.

Speaker 2 That's a better way to say that.

Speaker 2 That's a better way to say that.

Speaker 2 So, all right, let's sit down. Yeah, Yeah, we're just sitting down.
Sit down.

Speaker 2 So,

Speaker 2 oh, wait, somebody left their phone here.

Speaker 3 That's me. I brought it.
It's in my pocket. I don't know why.
If I do get a call, though, we're just going to take a quick break, okay?

Speaker 2 Is that your phone?

Speaker 3 It is mine. Sorry.

Speaker 2 The camera was on, too, Granddad.

Speaker 3 Yeah,

Speaker 3 my flash was on.

Speaker 2 Here, put it right over here. And I'm only saying I'm chewing gum.
It's a really professional operation here.

Speaker 2 All right. So.
Let me ask you something. Yes.
Do you guys, how is traveling going for you with all of this? Great, great interview question.

Speaker 3 So we did a little preparation.

Speaker 3 No.

Speaker 2 No, I really want to know that because as of this morning,

Speaker 2 Will started calling me Katy Perry

Speaker 2 because I bring so many outfits with me.

Speaker 2 Now this is the best you could use. So it was just, it was just, I know.

Speaker 2 Well, the true story is

Speaker 2 the sweet girl who picks out my clothes for me because I don't know how to do that,

Speaker 2 clearly.

Speaker 2 She sent me, she sent over, God love her, she's a really close friend, but she sent me over two of the exact same outfits today. Right.
So I kind of put this together willy-nilly.

Speaker 3 It is, it is a new thing for us because we, you know, we do this thing on our laptops and we don't, we're together, we're together, we're together. We wear pajamas.
Yeah.

Speaker 3 So this whole notion of having to dress

Speaker 3 and actually have a specific time and all that stuff, it is odd. Yeah.

Speaker 2 And that there's people here. Yeah, yeah.
And

Speaker 2 usually one of us is a few minutes late.

Speaker 3 I am commonly late.

Speaker 2 Oh, that's you.

Speaker 3 I am. Okay.
But here's the thing. I don't like to be, if you're early,

Speaker 3 you've wasted time, right?

Speaker 3 You have an issue with you.

Speaker 2 I know, I've just, we've got a lot of milling around going on there. By the way, I'm interested.

Speaker 2 I go to a doctor's office now late, so I don't have to wait the 20 minutes they make me wait. Right.
I just go, right?

Speaker 2 By the way, you still have to wait. But do they like that? They love me for that.

Speaker 3 That sounds familiar. I feel like we talked about this, maybe.

Speaker 2 I don't think we've done that. I don't think we've ever done that.
I don't think we've ever. Oh, I don't ever.
Wow, wow, wow,

Speaker 2 you're never early or late, are you?

Speaker 3 No, I'm right on time. You're always right on time.
What is that?

Speaker 2 Just because

Speaker 2 I respect you.

Speaker 3 The implication is I don't know.

Speaker 2 Here's the other thing I discovered the other day. Go, what were you going to say? Well, no, I was just going to say about calling you Katy Perry.
It wasn't just that.

Speaker 2 I just kept going, I don't know, ask Katy Perry over here. Like that.
So it was just bullying. And then we all have "you're going to hear me roar" in our heads.
Yeah.

Speaker 2 And Jason, you've been singing that all day. I can't.

Speaker 3 I'm so, I can't.

Speaker 3 Is everybody, when you get a song in your head, you can't get it out? Am I the only one like that? But it lasts an abnormal amount of time. Like a week, I'll get something stuck in my head.

Speaker 2 They say, like, to count backwards from a certain number, and that will make you stop thinking of the, are you doing it right now? Does it just work for songs? Yeah.

Speaker 2 Because there's a lot of shit I'd like to forget.

Speaker 2 Like that kind of stuff. Yeah, so I would just, if that works, I want to do that.
Well, the other thing that happened to me yesterday was I found my back has been itching, and then we have this.

Speaker 2 Wait, are you going to get a

Speaker 2 little bit?

Speaker 3 He's backing into a plug for hypochondriacs, other positive.

Speaker 2 So Sean, we're in conversations in the hotel room, and Sean does a lot of this. He's like, uh-huh, what's going on? And he gets up against the door, and he's going, really? Yeah.
Like a dog, like

Speaker 2 a plug, and scratches his.

Speaker 2 Yeah. So then our friend Eli is with us, who we love and is a very good friend of ours.

Speaker 2 And I'm like, what's going on with my back? He goes, take a shirt off. Let me see.
So I took the thing and I'm back.

Speaker 1 And he goes, that's shingles.

Speaker 2 Truly, you have shit. Yeah, he said, I have shingles.
Were you not there? No. No, he went, oh my god, that's shingles.

Speaker 2 And I'm like, what? I just got the vaccine for shingles. How could I have shingles? Right.

Speaker 2 Because if there's a vaccine for it, it's in my body.

Speaker 2 Yeah.

Speaker 2 Yeah. Right?

Speaker 2 Yeah.

Speaker 2 Good health isn't political. No.
Wow. Say that again.
Thank you. No, thank you.

Speaker 3 You can get a show. Who's got a pen?

Speaker 2 Yeah.

Speaker 3 I used a pen earlier to write my intro, and since we're talking about science, it's a great segue. Fellas,

Speaker 3 I wanted to tap into the brain power of this city.

Speaker 2 Okay?

Speaker 3 We got a big brain coming out.

Speaker 2 Uh-oh.

Speaker 3 This fella has a master's and a PhD from Berkeley. He's a fellow at Princeton.
He has tenure at Penn. He arrived here at MIT in 2004 where he still works today.

Speaker 3 He does it all from physics to cosmology to quantum stuff and computers. He's going to explain what it is.

Speaker 2 Stephen Hawking. Will Ferrell.
Stephen Hawking.

Speaker 3 Please welcome a guy who can definitely make us all more smart, not less smart. Smart less, you get it? Max Tegmark!

Speaker 2 Oh! What? Max Tegmark! Come on, Max! Oh my gosh! Get out here!

Speaker 2 There he is!

Speaker 2 Max! This is Max Tegmark! How are you, man? It's such a pleasure to meet you. Come on, please.

Speaker 2 Please. This is so exciting.
Well, see, we have the same stylist.

Speaker 2 Wow,

Speaker 2 he wears it a little better than you do. He certainly does.

Speaker 3 Now, can you do a better job than I just did of explaining what it is that you do?

Speaker 3 First of all, how do you introduce yourself? Call yourself what you do.

Speaker 2 By the way, you look like a rock star. Well, it depends on what I want.
Like if I'm on a long flight and I just want to be left alone, and the person asks me what I do,

Speaker 2 I just say physics.

Speaker 2 That was my worst subject in high school. Five hours of silence.
Right, right. But you are.

Speaker 2 I want to talk. Yeah.
I'll be, oh, astronomy. And they're like, oh,

Speaker 2 I'm a Virgo. Oh, okay.

Speaker 3 Or maybe if I say cosmology, they'll be talking about eyeliner and makeup.

Speaker 2 Okay, so the class you teach is...

Speaker 2 Oh, it's whatever they want me to torture the students with that year. So it can be either torturing the freshman that came out of high school with

Speaker 2 the basic physics of how stuff moves, to torturing the grad students with some advanced

Speaker 2 topics about our universe. Okay.
Or, most of my time I spend actually torturing my grad students doing research on artificial intelligence.

Speaker 2 Okay, okay, good. By the way, this is everything I'm for.
Yeah, will you marry me? No, I'm kidding.

Speaker 2 I want to go. You probably have a lot of stuff.
That's my wife over there.

Speaker 3 Well, that's...

Speaker 3 I saw this documentary on artificial intelligence, and what I was surprised to learn is that it's not about like robots, like the Steven Spielberg movie.

Speaker 3 It's more about the amount of computing speed that we now can do such that,

Speaker 3 like I think they said in this documentary, you can put all the books that have ever been written into a computer now. And you're going to tell me whether I'm right or wrong.

Speaker 3 I bet I'm close to right, but probably wrong.

Speaker 3 You can put all the books into a computer and the computer will ingest all that information, separate it out, and be able to give you an answer

Speaker 3 about anything that you can ask them if the information was in any of those books from languages to rocks to

Speaker 3 I mean isn't that called Google though well

Speaker 3 I'm sure you could explain that but it's like that's artificial intelligence that's

Speaker 2 Well, yes and no. So on one hand, yeah, you can take all the books that were ever printed and put them on a memory card so small that you might have a hard time finding it in your pocket. But that doesn't mean that a computer necessarily understands what's there just because it can store it and kind of regurgitate it back to you, right?

Speaker 2 And I think the truth is, despite a lot of the hype, that artificial intelligence is still mostly pretty dumb today compared to humans, or even cats and dogs. But you know, that doesn't mean it's going to remain that way. I think a lot of people make the huge mistake of thinking that just because AI is kind of dumb today, it's always going to be that way. Right, well, shouldn't we keep it dumb? Because if we let it get too smart, et cetera. What is that threshold, the point of no return? Yeah, because remember that thing about Facebook where they, I don't know if this is true, they started doing like AI technology, and they just started talking to each other, and they shut it down.

Speaker 2 Is that true? Because they were gossiping. Yeah.

Speaker 2 It's true, but I think Hollywood and media often make us worry about the wrong things.

Speaker 2 What do you mean?

Speaker 2 First of all,

Speaker 2 people often ask me

Speaker 2 if we should fear AI or be excited about it. The answer is obviously both.

Speaker 2 AI is like any technology. It's not evil or good.
If I ask you like, what about fire? How do you feel? Are you for it or against it? What are you going to say? Right.

Speaker 3 It can hurt if you use it incorrectly.

Speaker 2 Exactly. And the same thing with AI tech.

Speaker 3 The only difference is

Speaker 2 that AI is going to be the most powerful technology ever. Because look, why is it that we humans here are the most powerful species on this planet? Fucking A.
Is it because we have

Speaker 2 bigger biceps, sharper teeth than the tigers? No, it's because we're smarter, right?

Speaker 2 So obviously, if we make machines that are way smarter than us, which is obviously possible, and most researchers in the field think it's going to happen in our lifetime, then it's clearly either going to be the best thing ever or the worst thing ever.

Speaker 2 My question is,

Speaker 2 when it's the worst thing ever, by the time it becomes the worst thing ever, then we're fucked. Then it's too late, yeah.
You want to kind of. So let's not let it be the worst thing ever.

Speaker 2 So that's the catch, though, right? We humans, right, have had to play this game over and over again, where technology got more powerful and we're trying to win this race, making sure the

Speaker 2 wisdom with which you manage the tech keeps pace with the power of the tech. And we always use the same strategy, learn from mistakes.

Speaker 3 But it seems like the big safeguard that we have as humans that we don't yet have with machines is that we have ethics, we have empathy, we have emotion.

Speaker 3 And what is the computer program that you would need to put together to inject that into this new machine with all of this information?

Speaker 2 Can we put some snuggles in it? Yeah. That's a fantastic question.
What's the snuggle recipe? So you're hitting exactly my wish list.

Speaker 2 If you want to have ever more powerful AI that's actually beneficial for humanity, right, so you can be excited rather than horrified about the future.

Speaker 2 There are three things you're going to need. First, you're going to need the AI to understand our human goals.

Speaker 2 And then get it to actually adopt the goals and then to actually keep those goals as it gets smarter. And if you think about it for a little longer, each of these are like really hard.

Speaker 2 Suppose you have a, you tell your future self-driving car to take you to Logan Airport as fast as possible, and then you get there covered in vomit and chased by helicopters.

Speaker 2 You're like, no, no, no, no, no, that's not what I meant. And the car goes like, that's exactly what you asked for.
Right. You know, it clearly lacks

Speaker 2 something. Literally.

Speaker 2 Literally, are you? He's the Terminator. He sounds like the Terminator, and he's talking about Terminator stuff.

Speaker 2 So we humans have so much more background knowledge, right, that a machine doesn't have a priori, because it's like a very alien species of a sort. So that's hard for starters.
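Max's self-driving-car story is the classic misspecified-objective problem: the optimizer does exactly what the stated goal says, not what the passenger meant. A toy sketch of that idea (the route names, numbers, and the comfort penalty below are all made up for illustration, not anyone's actual system):

```python
# Toy illustration of objective misspecification: an optimizer does
# exactly what the objective says, not what the passenger meant.
# All names and numbers here are hypothetical.

plans = [
    {"name": "reckless", "minutes": 18, "discomfort": 9.0},
    {"name": "normal",   "minutes": 30, "discomfort": 1.0},
]

def fastest(plan):
    # "Get me to Logan as fast as possible" -- time is all that counts.
    return plan["minutes"]

def fastest_but_humane(plan, comfort_weight=5.0):
    # What the passenger actually meant: time plus a comfort penalty.
    return plan["minutes"] + comfort_weight * plan["discomfort"]

# The same search procedure, two different objectives, two different rides.
literal = min(plans, key=fastest)
intended = min(plans, key=fastest_but_humane)

print(literal["name"])   # the literal objective picks the vomit ride
print(intended["name"])  # the fuller objective picks the sane one
```

The point of the sketch: nothing in the "fastest" objective is evil; it just omits background knowledge every human takes for granted, which is why goal specification is hard.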

Speaker 2 And then suppose you can get that right.

Speaker 2 Now,

Speaker 2 let me stop you there.

Speaker 3 Is there any chance of getting that right? In other words, the formula, the equation that equals emotion, responsibility, ethics, can you even create a computer equation for that?

Speaker 2 I think right now we don't know how to do it. It's probably possible.
We're not working on it enough yet. The catch is, though,

Speaker 2 computers are just like, you know, if you think of a baby that's six months old, you're not going to explain the fine details of ethics to them because they can't quite get it yet.

Speaker 2 By the time they're a teenager, they're not going to want to listen to you anymore. Those of you who have kids out there, right? So, but you have a window with human children

Speaker 2 when they're smart enough to get it and maybe still malleable enough to hopefully pay attention. That's fascinating.

Speaker 2 With computers, though, they might blow through that so quickly. That window makes it hard.
Did you see Ex Machina? Did anybody see Ex Machina? That's amazing. That's amazing.
What did you see?

Speaker 2 Now let's move it up for Ex Machina.

Speaker 2 What is the most accurate film to science?

Speaker 3 Like, is it HAL in 2001 or Ex Machina?

Speaker 2 Those are my top two, actually, because HAL emphasizes this key thing that the thing you should really fear in an advanced AI is not that it's going to turn evil, but that it's going to just turn really competent and not have its goals aligned with yours.

Speaker 2 That's what happens in HAL, right? No spoilers. Right.
And like that taxi I mentioned. 60 years in, we're good.

Speaker 2 But then the other thing you should also worry about is, even if you can solve all these things, and I think it might take 30 years to figure this out, which is why we should start now, not the night before some folks on too much Red Bull switch it on, right?

Speaker 3 I got two cans in me right now.

Speaker 2 Not in the can. Not in my can, but in

Speaker 2 the superintelligence away from me.

Speaker 2 But the other thing is, even if you manage to solve those technical problems, which we should have more research on, you also have to worry about human society.

Speaker 2 Because just think for a moment about your least favorite leader on the planet. Don't tell me who it is, so we don't offend anyone in the audience.
Thank you for thinking he's a leader.

Speaker 2 But just imagine their face for a moment here, okay? And then imagine they are the one who controls the first superintelligence and takes control over the whole planet and imposes their will.

Speaker 2 How does that make you feel? Yeah, great.

Speaker 2 None of it makes me feel great. Listen, after a lifetime of doing all this stuff with the rut, how does it feel to talk to an actual robot? Like, that must feel...

Speaker 2 And we will be right back.

Speaker 1 Now streaming on Paramount Plus, it's the return of Landman, TV's biggest hit from Yellowstone co-creator Taylor Sheridan.

Speaker 1 Academy Award winner Billy Bob Thornton is back as Tommy Norris, facing higher stakes than ever.

Speaker 1 With an all-star cast, including Demi Moore, Andy Garcia, and Sam Elliott, tensions rise as Tommy and Camille Miller fight to control M.Tech's oil.

Speaker 1 When his father returns, Tommy must balance life as both oilman and family man. Don't miss Landman season two, now streaming only on Paramount Plus.

Speaker 1 Do you guys love the holiday season? I love it. What's on my shopping list? I want a new Dopp kit bag.
Is that stupid? I think it's great.
Is that stupid? I think it's great.

Speaker 1 This holiday season, earn cash back at your favorite stores across travel, dining, home essentials, and more with Rakuten.

Speaker 1 Stack cash back on top of holiday sales to maximize savings while stores offer their highest cash back rates of the year.

Speaker 1 Yep, if a store is running a 20% off sale and Rakuten offers 15% cash back, you can stack both deals together.

Speaker 1 Plus, with Rakuten's weekly big deal reveals, you'll find one great store offering epically high cash back for one day only. Hey, maybe I'll get Jason and Will a Dopp kit too.

Speaker 1 Membership is free and it's so easy to sign up. Visit rakuten.com, download the app, or install the browser extension.
Join today for a new member welcome bonus after minimum qualifying purchases.

Speaker 1 Terms and conditions apply.

Speaker 4 The family that vacations together stays together. At least, that was the plan.
Except now, the dastardly desk clerk is saying he can't confirm your connecting rooms.

Speaker 2 Wait, what?

Speaker 4 That's right, ma'am. You have rooms 201 and 709.

Speaker 2 No, we cannot be five floors away from our kids.

Speaker 2 The doors have double locks, they'll be fine.

Speaker 4 When you want connecting rooms confirmed before you arrive, it matters where you stay.

Speaker 2 Welcome to Hilton.

Speaker 5 I see your connecting rooms are already confirmed.

Speaker 2 Hilton. For the stay.

Speaker 1 And now back to the show.

Speaker 3 So is there,

Speaker 3 do people have proprietary right over certain stuff, or does one country control a lot? Like, who's leading? China's leading the AI, are they not?

Speaker 2 The U.S. and China are both very strong.
I mean, most research suggests that it's U.S. is still kind of ahead, but there's a lot of hype around.

Speaker 2 Both countries, of course, try to... If anybody survives the USA, I'm going to lose it.

Speaker 3 And I'll bet when they say the USA, they're talking about MIT and they're probably talking about him.

Speaker 2 Well, you know how it goes.

Speaker 2 Both countries' researchers are trying to claim that the other one is ahead so that they get more funding. That's how we researchers always do it.

Speaker 2 But seriously, the interesting key here, I think, is ultimately, it's not really going to matter which country gets it first. What's going to matter most is,
is it going to be

Speaker 2 us who control the machines or they who control us? I mean, but really, all joking aside. I'm obsessed with the Terminator movies and anything sci-fi.
All joking aside. Yeah, jokes aside.

Speaker 2 Yeah, let's put the jokes aside and let's talk about it.

Speaker 2 No, but that's kind of the idea behind a lot of Hollywood movies and stuff is that

Speaker 2 what if the AI has become more intelligent than the humans? But here's the thing. I saw something on 60 Minutes years ago, which fascinates me to this day.

Speaker 2 And I'm not going to get this right, but it's some guy. Andy Rooney's eyebrows.
That's it. No, so some.
Did you ever wonder why? You ever wonder why?

Speaker 2 I opened my own desk drawer and I got a tie from the 1968 Democratic Convention. It's got soup on it.
I don't like soup.

Speaker 2 Nobody knows who Andy Rooney is at all.

Speaker 2 Yeah.

Speaker 2 It's also his Dax Shepard impression. I know, it was true.

Speaker 2 But anyway, so there is this guy, the interviewer was interviewing the scientist who claimed to have come up with this idea.

Speaker 2 This thing was like wrapped around his ear and it was like tied to the side of his head. And the interviewer was asking him a question, like, what's the population of Utah?

Speaker 2 And all he had to do was think of the answer, and it popped up on the screen. Do you know what I'm talking about?

Speaker 2 Well, this sort of stuff you can only do if

Speaker 2 you have a connection to Google.

Speaker 2 Oh, yeah.

Speaker 3 Ergo, you saw it 15 fucking years ago. What are you talking about?

Speaker 2 No, no. No.
Sure. No, but that you can.
Oh, really? Peter James.

Speaker 2 No, that you can, that if if you think of a response, he got it.

Speaker 2 Okay. He heard you.

Speaker 3 Answered it. Let me move on to another question, something current.
So, you know, we were talking earlier about.

Speaker 3 We're going to kiss it out later.

Speaker 2 What do you mean when you read people's brainwaves?

Speaker 2 Well, I found it fascinating. I saw the segment where the guy was thinking an answer, and it popped up on the screen.
Again, the third time it's really clear. Hey,

Speaker 2 Sean,

Speaker 2 your best robot voice, Go, quick.

Speaker 2 He's going to play. Just think Katie Perry wants.

Speaker 1 Do you want to play a game of war?

Speaker 2 No.

Speaker 1 Do you want to play

Speaker 3 tic-tac-toe? Another 15-year-old.

Speaker 3 Do you want to play a game?

Speaker 2 That's it.

Speaker 3 He's teeing himself up. This is the voiceover artist here.
Let's have it.

Speaker 2 Do you want to play a game?

Speaker 2 No matter how I say it, it's just the gayest computer ever.

Speaker 2 I can't do that. Do you want to play a game? Like, yeah.

Speaker 2 And

Speaker 3 what's your best computer voice?

Speaker 2 I'm sorry, Dave.

Speaker 2 I can't do that.

Speaker 2 I just love that HAL scene again because it points out what you really should worry about. But coming back to the two things.
One, again, to summarize, right?
One, again, to summarize, right?

Speaker 2 We need to make sure the machines can actually

Speaker 2 align their goals with ours.

Speaker 2 Because if you have something much smarter than us that has other goals, we're screwed. It's like playing chess today against a computer that has the goal to win; it's no fun.

Speaker 2 What's the next? Because when somebody says AI, all I picture are those mechanical dogs that walk around. And

Speaker 2 they don't really do anything. They're just like, oh, look, we invented a robot dog and it doesn't really do anything.
So

Speaker 2 I want to know what's the next thing that we can use it.

Speaker 2 Is it mechanical cats?

Speaker 2 Mechanical cats? Yeah, no, you know what I mean? Like what's the next one? Oh, imagine mechanical cats doing Cats. Oh my god, that would be amazing.

Speaker 2 Sorry, we'll get back.

Speaker 3 You're saying what's the next thing we can look forward to enjoying? Yeah, like

Speaker 2 in the pop sense. Okay, in the pop sense.
So first of all, so just to finish off what we talked about, you know, Hollywood, it makes us associate AI so much with robots and the Boston Dynamics dogs.

Speaker 2 And you should check them dancing, by the way, if you haven't. Dancing dogs, robots.
The dancing Boston Dynamics robots. Super cool.

Speaker 2 But the biggest impact right now AI is having is actually not robots at all. It's just software, right?

Speaker 2 I mentioned this improvethenews.org project we're doing, which is just a little academic thing. But if you think about social media, that's all about AI.

Speaker 2 One of the reasons people hate each other so much more in America now is the effect of AI.

Speaker 2 Not AI that had the goal of making people evil, but just had the goal of making people watch as many ads as possible.

Speaker 2 But the AI was so smart at figuring out how to manipulate people into watching ads that it realized that the best way to do it is to make them really angry and keep showing them more and more extreme stuff until they were so angry they wouldn't fragment.

Speaker 3 And if you get really, really pissed off, you then research that thing even more, and then you get more ads and all that stuff. Boom.
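The engagement loop they're describing can be sketched as a toy recommender: if predicted watch time rises with how angry an item makes people, a system that only maximizes engagement drifts toward the most extreme content without anger ever appearing in its objective. The items, scores, and the linear "engagement model" below are hypothetical stand-ins, not any platform's real system:

```python
# Toy engagement-maximizing recommender: anger is never part of the
# objective, but if engagement correlates with outrage, a greedy
# optimizer ends up serving the most outrage-inducing item anyway.
# All items and scores are hypothetical.

items = [
    {"title": "cute dogs",        "outrage": 0.1},
    {"title": "mild debate",      "outrage": 0.5},
    {"title": "extreme takedown", "outrage": 0.9},
]

def predicted_engagement(item):
    # Stand-in model: expected watch time grows with outrage.
    return 1.0 + 4.0 * item["outrage"]

def recommend(candidates):
    # The objective mentions only engagement -- never anger.
    return max(candidates, key=predicted_engagement)

print(recommend(items)["title"])  # -> extreme takedown
```

This is the same misalignment pattern as the self-driving-car example: the side effect (angrier users) is an unpriced term in the objective, so the optimizer happily produces it.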

Speaker 2 So that's one.

Speaker 2 All of social media, media. Another one is,

Speaker 2 let's talk about some positive things, because AI, intelligence, right? It's human intelligence that's given us everything we like about civilization.

Speaker 2 So clearly, if we can amplify it with artificial intelligence, we can use it to solve all sorts of problems we're stumped on now, like cancer and lifting everybody out of poverty and so on.

Speaker 2 Will there ever be a

Speaker 2 pure software thing that has nothing to do with robots is using AI for doing better science,

Speaker 2 better medical research, for example?

Speaker 2 So is there any, like, I read a long time ago about, like, you know, like you put a locator chip in your dog or your cat or whatever. Can you, I heard that they might be making, like, a chip that has all your medical files, and you put it under your skin so you just scan it. Because filling out all those fucking forms over and over, it's like, I just filled out the form and now you're asking me the questions all over again.

Speaker 2 Read the thing I just

Speaker 3 do.

Speaker 2 I'm dying.

Speaker 2 Right, I'm like, what? And it's like, what's your name? I just filled three forms out that says what my name is.

Speaker 2 I think I'd personally take a pass on that chip implant and just ask the hospital to have a less Stone Age computer system.

Speaker 2 But seriously, of course. Did you hear about Sean showing up to doctor's appointments late so that he doesn't have to wait?

Speaker 2 It's riveting. I don't know if that's AI or whatever.
Something huge that happened this year, for example, is biologists have spent 50 years trying to figure out just from the DNA

Speaker 2 what the shape of the different proteins that our bodies make is going to turn out to be. It's called the protein folding problem.
And then Google DeepMind solved it. No way.
Yeah, with an AI.
And then Google DeepMind solved it. No way.
Yeah, with an AI.

Speaker 2 And now you can develop medicines faster. So this is a fantastic example, I think, of AI for good.
Another one,

Speaker 2 but then the robots that are probably going to have the biggest effect, I think, on us and the job market in the next five years are actually cars, just autonomous vehicles.

Speaker 2 That's pretty cool.

Speaker 2 I'm worried about everybody with their cars and the automatic driving or whatever, and then they show up, and then people are just going to show up at the valet, like dead in their car.

Speaker 2 It's going to be really, you know what I mean? Like people are going to get in their car and then they're going to be like, the pizza guy shows up and he's like slumped over.

Speaker 2 Oh, fuck. They'll open the door.
Yeah. Just out.

Speaker 2 You know what I mean? Like that's, that's what I'm worried about.

Speaker 3 Now,

Speaker 3 with the combination of

Speaker 3 what you know about computers, what you know about space, what you know about intelligence.

Speaker 3 I know.

Speaker 2 I know what you want to know, honey.

Speaker 2 John wants to know.

Speaker 3 He wants to know if there's aliens, but we're not going to ask.

Speaker 2 No, not that.

Speaker 2 No, what I can do. I can say something about aliens.

Speaker 3 What I want to know from you is, is it, based on your knowledge of all those areas, does it seem possible to you that there is the requisite amount of intelligence and technology at a place other than Earth?

Speaker 2 Ooh. See, he asks a different way than I would.

Speaker 2 Well, of course it's possible. Although, you know, my guess, based on spending a lot of years dealing with telescopes and thinking about these things is that

Speaker 2 when we for the first time get

Speaker 2 way more advanced technological intelligence showing up on this planet,

Speaker 2 it's probably going to be in our lifetime and it's probably not going to come from outer space. It's going to be something we built.
What do you mean? What do you mean? So you're saying maybe

Speaker 2 what do you mean that we were building something to bring them here? What are you talking about?

Speaker 2 No, I mean, basically the goal of artificial intelligence has always been to build stuff that's way more effective than us. And they just don't have cars.
They need to get here.
They need to get here.

Speaker 2 So you're basically building. And they're all going to be dead when they get here.

Speaker 2 No way, keep it. But we really, I mean, if you basically build a new species, a new life form that's way, way smarter than us, right? Yeah.
That's alien. That's alien.
It's incredibly alien.

Speaker 2 It's much more different from us than a chipmunk or a tiger is, in that it really has nothing in common with our evolutionary history. Nothing.
It doesn't necessarily care even about food or reproducing itself.
It doesn't necessarily care even about food or reproducing itself.

Speaker 2 So

Speaker 2 if we do that, it's going to be just as big an event on Earth as if aliens showed up. And that's why I'm kind of weirded out that people talk so little about it.
That's what I'm saying.
That's what I'm saying.

Speaker 2 That's what I'm saying. As despicable as

Speaker 2 human beings, we're all, including me, everybody, we're just so despicable that if the announcement came that, like, oh my god, there are aliens out there, they're visiting our planet, people would be like, oh, okay, I've got to check my Instagram.

Speaker 2 Like, I don't think people would be like, giving it away.

Speaker 3 Based on that, it seems to me that what you would want to do is make sure that somehow built in all this stuff is some kind of a kill switch and that the and that the that the wise men who are who are

Speaker 3 You know what I mean? Like a group, right, probably made up of you and your other colleagues, male or female, from around the world, the leading scientists in this area, who would get together on some encrypted platform and say: let's make sure only us five know about this one thing we could press to shut down all these AI robots we've created on our own.

Speaker 2 I won't tell anybody. I won't tell you.

Speaker 2 I'm all for kill switches, although this is

Speaker 2 what I'd do. I'd say

Speaker 2 this sounds a little too elitist for me because the idea that somehow

Speaker 2 a bunch of the dudes who

Speaker 2 know a lot about AI should decide humanity's future.

Speaker 3 That's kind of how I want it: in your hands.

Speaker 2 But that's how it is now.

Speaker 2 If the rest of us, everybody, don't get engaged in these things, who's going to make all these decisions? It's going to be probably a bunch of dudes

Speaker 2 in some back room

Speaker 2 who have not been elected.

Speaker 2 Or people who are super AI nerds, like they're specialists in human happiness.

Speaker 3 Only they would know the ramifications of something getting in the wrong hands.

Speaker 2 But are they the ones who should be deciding what makes sense?

Speaker 3 Not some elected weirdo. Yeah.

Speaker 2 So, look what happens.

Speaker 2 I don't trust particularly elected people, but I also don't trust tech nerds with being experts in psychology and whatnot.

Speaker 3 Who can we trust? Then we've got to shut it off.

Speaker 2 Yes, everybody.

Speaker 2 We shouldn't trust it to the nerds.

Speaker 2 What are you talking about? What should we all talk about?

Speaker 2 Let me ask you guys,

Speaker 2 suppose you have a magic wand that you can wave, okay, and create this future 35 years from now, okay, when there is this very advanced AI, and you get to decide how the planet is organized,

Speaker 2 what it's used for, and what it's not used for. So it's not going to be your standard dystopian Hollywood flick.
What is this future like? What do you want it to be like? Well,

Speaker 2 that makes me want to take a nap.

Speaker 3 Well, you would.

Speaker 3 So what do you want it to be like? I would want all the technological advances that we have to go to the bettering of the living experience, which means health and kindness and all that stuff. Yes.

Speaker 3 You know, you point it all in that direction, and then good decisions come from that? I don't know. Yes.

Speaker 2 Brave New World.

Speaker 2 Well, that sounds great.

Speaker 2 So it's an

Speaker 2 expected response.

Speaker 2 Let's compare it with what we're mostly spending AI money on now, right?

Speaker 2 So a massive amount of money is spent on advertising, which ends up making teenage girls anorexic.

Speaker 2 And then we have an enormous amount of money now going into building robots to kill people for the first time. And they were used in Libya last year.

Speaker 2 They hunted down these fleeing people and just killed them because the robots decided that they were probably bad guys. And I think...
I did not know about that.

Speaker 2 It's a... I don't know.
Did you guys all know about that? About the robots that hunted the people down. Yeah, let's not gloss over that.
What happened?

Speaker 2 This is the shit that I'm talking about now.

Speaker 2 So

Speaker 2 it's actually, it has some dark comedic value, I think.

Speaker 2 Yeah, it sounds hilarious.

Speaker 2 I mean,

Speaker 2 the current policy, actually, of the U.S. government on killer robots and slaughter bots is

Speaker 2 three things. First of all, the U.S.
says, you know, these are the murder hornets.

Speaker 2 First of all, we're saying this is nasty stuff that we don't ever want to delegate kill decisions to machines, so we're not going to do it.

Speaker 2 Second, it's going to have a decisive impact on the battlefield. And third, we're going to reserve the right for all other countries to build them and export them to whoever they want.

Speaker 2 So this Turkish company decided to sell them to Libya against the arms embargo, and that's how they hunted these people down.

Speaker 2 We went, in a really short span, we went from

Speaker 2 replicating

Speaker 2 sheep to slaughterbots. It seems like that happened really quickly, man.

Speaker 3 So, we were talking to a guest earlier today about time travel,

Speaker 3 and now

Speaker 3 it's on our minds.

Speaker 2 And we do have to ask you, like we did before: did you time travel here?

Speaker 2 If

Speaker 3 you did, is there any chance?

Speaker 3 And I won't bore you with the same question that I asked that astronomer that we had about the mirrors.

Speaker 2 Neil deGrasse Tyson, yeah. It was a real highlight.

Speaker 2 But

Speaker 3 in the

Speaker 2 middle of the day, I'm going to try here. So

Speaker 2 Will said, Neil deGrasse Tyson was on, and Jason asked a really long question about time travel.

Speaker 2 And Will said, hey, do you think we could put up enough mirrors and travel back in time to the beginning of Jason's question?

Speaker 3 And it's a valid question.

Speaker 2 It's hard for me.

Speaker 2 It's a valid. It's a valid.

Speaker 3 Okay, so the sun, the light we get from the sun has been traveling seven minutes. Eight minutes.
Eight minutes. Okay.
So we're basically feeling something that's eight minutes old.

Speaker 2 Seeing something that's eight minutes old.

Speaker 2 Right?

Speaker 2 Okay.

Speaker 3 So isn't there a way to have a mirror

Speaker 2 that creates... anyway? Yes, actually, there is. Well, hang on.
There is. He says, yes, there is.
He likes my thinking.
Nature already built one for us, actually.

Speaker 2 In the middle of our galaxy, there's this monster black hole that weighs about four million times as much as the Sun. And it's black.
Oh, that's mine.
Oh, that's mine.

Speaker 2 No, it's a black hole. And if you look at it really carefully,

Speaker 2 light that left you was actually bent by its gravity so much that it comes back on the other side of the black hole. Okay.
Like a mirror.
Like a mirror.

Speaker 2 So if you look with a really good telescope, in principle, you could see your own reflection, except

Speaker 2 from so long ago that you weren't born yet. Are you serious? Okay, wait, so talk about it.

Speaker 2 And now, a word from our sponsor.

Speaker 1 Looking for a running shoe that does it all? The New Balance 1080 is your ultimate go-to, blending comfort, performance, and undeniable style.

Speaker 1 Whether you're clocking miles or grabbing coffee, it seamlessly transitions from your morning run to your everyday life.

Speaker 1 With plush cushion support, your feet stay secure and comfortable run after run. And thanks to lightweight, breathable materials, you'll stay cool and fresh no matter how far you go.

Speaker 1 From race day to rest day, the New Balance 1080 delivers the versatility and comfort serious athletes and everyday movers demand.

Speaker 1 Slip them on and experience what effortless performance really feels like. I got myself a pair of 1080s right before I came to London, and boy, oh boy, did I need them.
They are so comfortable.

Speaker 1 The soles are thick and it's super soft and plushy, and it makes walking everywhere such a pleasure. I love it.
Shop the 1080 at newbalance.com.

Speaker 1 Having the United Airlines app is like having your own pocket-sized personal assistant at the airport.

Speaker 1 Get real-time flight updates like your gate number and a live countdown to boarding, even if your home screen's locked.

Speaker 1 Stride over to your gate with gazelle-like grace, thanks to door-to-gate directions from your personalized airport map.

Speaker 1 Once you fly with the United app, you'll never fly without it unless you don't want to save about 30 minutes at the airport. Get it before your next trip at united.com slash app.

Speaker 1 With Sylvania, seeing better while driving at night starts with you. Because headlight bulbs dim over time and can lose up to 50 feet of visibility before burnout.
That's why you shouldn't wait.

Speaker 1 Upgrade your drive with brighter lights for better visibility on the road ahead.

Speaker 1 Sylvania's step-by-step installation guides make it easier than ever to take control of your nighttime clarity all without a trip to the mechanic.

Speaker 1 So before a burnout darkens your day, upgrade to Sylvania and see better tonight.

Speaker 2 And now back to the show.

Speaker 3 Wait, so talk, go. So that's basically time travel.
In other words, these telescopes, like the one we just launched off,

Speaker 3 they're looking so far back, they might actually see the Big Bang at some point or something. We're seeing the galaxy at an earlier stage than us.

Speaker 3 So eventually, if you get a telescope strong enough, you could see the start of Earth, potentially.

Speaker 2 I should tell you, we all took mushrooms before. Oh, it's just far away.
Yeah, and we were.

Speaker 2 Somewhere in there, is there an answer?

Speaker 2 Some of us took a double.

Speaker 2 The possibility of time travel. Yeah, yeah, so this is a kind of see but not touch time travel.
The sky is a time machine, just like that. You know, you see me.

Speaker 2 So time awareness, not time travel. Yeah, you see me three nanoseconds ago, you see the sun eight minutes ago, and you see stars at night, if it's clear,

Speaker 2 so long ago that people over there looking at us would see maybe the Boston Tea Party, and we can see things that happened

Speaker 2 over 13 billion years ago.
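For readers who want to check the numbers in this exchange, the lookback times are just distance divided by the speed of light. A quick sketch (constants rounded; the Proxima Centauri line is an added illustration, not from the conversation):

```python
# Lookback time: how far in the past you see each object in the sky.
C = 2.998e8        # speed of light, m/s
AU_M = 1.496e11    # Earth-Sun distance, m
LY_M = 9.461e15    # one light-year, m

def lookback_minutes(distance_m):
    """Minutes light takes to cross the given distance."""
    return distance_m / C / 60

print(round(lookback_minutes(AU_M), 1))   # sunlight is ~8.3 minutes old
print(round(lookback_minutes(4.25 * LY_M) / (60 * 24 * 365.25), 2))  # Proxima Centauri: ~4.25 years
```

The same division gives the 13-billion-year figure for the most distant galaxies a telescope can see.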

Speaker 2 But you can also travel forward in time for real.

Speaker 2 No bullshit, real time travel. You go to this black hole here.
That's my job.

Speaker 2 I was told actually when I moved to the US that in America, possession is 95% of the law. I think.
Is that true? You know what? Now it's yours.
Is that true? You know what? Now it's yours.

Speaker 2 So anyway, if you just orbit around this black hole really close,

Speaker 2 which you can actually do, I give this as a homework problem to my MIT students, then

Speaker 2 your time will actually slow down so much that if you're on Skype with him,

Speaker 2 you'll be hearing him go like, hello,

Speaker 2 I'm here.

Speaker 2 And then you're going to hear him say, Oh my god, I'm so worried about what's going on in that area. Well, that's accurate.
Because

Speaker 2 your times are actually running at different rates.

Speaker 2 And then when you come back, he's going to be like, You look so good, so youthful, because you're actually younger than

Speaker 2 you would have been otherwise.
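The forward time travel described here is gravitational time dilation, which can be estimated from the standard formula for a circular orbit around a non-spinning black hole. A sketch for readers (the 4-million-solar-mass figure is roughly the mass of the Milky Way's central black hole; this illustration is an editorial addition, not the MIT homework problem):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def orbital_time_dilation(m_bh_solar_masses, r_over_rs):
    """Rate of proper time dtau/dt on a circular orbit at radius r = r_over_rs * r_s
    around a non-spinning black hole; such orbits exist only outside r = 1.5 r_s."""
    rs = 2 * G * m_bh_solar_masses * M_SUN / C**2  # Schwarzschild radius
    r = r_over_rs * rs
    if r <= 1.5 * rs:
        raise ValueError("no circular orbits at or inside the photon sphere")
    # Circular geodesic: dtau/dt = sqrt(1 - 3GM/(r c^2)) = sqrt(1 - 1.5 r_s / r)
    return math.sqrt(1 - 1.5 * rs / r)

# Far from the hole, your clock agrees with the folks back on Skype...
print(round(orbital_time_dilation(4e6, 1000), 4))  # ~0.9992
# ...but near the innermost stable orbit (r = 3 r_s), one year for you
# is about 17 months for everyone back home.
print(round(orbital_time_dilation(4e6, 3), 4))     # ~0.7071
```

Coming home younger than you "should" be is exactly this ratio being less than one.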

Speaker 3 So, are we going to be alive when we start to see any of this stuff that's going to really blow our minds?

Speaker 2 Yeah, because, and Mars, are we living on Mars? Go. So, this all, this is the upside of artificial intelligence.

Speaker 2 That's not a follow-up.

Speaker 3 It's a different subject.

Speaker 2 By the way, you mentioned before Boston Tea Party, too soon. This is the upside of artificial intelligence.
It's just for us too soon.

Speaker 2 This is the one. This is the upside of artificial intelligence.

Speaker 2 Either we can use it to go extinct in our lifetime or we can use it to bring all these awesome things about in our lifetime.

Speaker 2 We used to think, oh, this is going to take 10,000 years to get to the stuff in the sci-fi novels, because we humans have to figure out all the tech ourselves. No.

Speaker 2 If we can build this incredibly advanced AI that can then build more advanced AI, et cetera, et cetera, we might be able to build this tech, you know, 30, 40 years from now. Right.

Speaker 2 And suddenly, we're not limited by our own pace of developing tech. We just go, boom, and we're limited by the laws of physics.

Speaker 3 Well, and by the laws of ethics. Like, if just because we can, should we? Like, how do we know when we as a society are mature enough to handle some of the technology that we can access?

Speaker 2 I think going and having some fun, joyriding around black holes is ethically okay.

Speaker 2 It's my inner nerd speaking here. As long as you don't force other people to go with you.

Speaker 2 But on the other hand...

Speaker 3 That's part of the...

Speaker 2 Don't be a nerd. We're just going to the black hole.

Speaker 2 Like, we're not forcing, but it is peer pressure.

Speaker 3 That's one of the questions that they asked in Jurassic Park was just could, just because... Oh, good, Jurassic Park, good.

Speaker 2 Yes, because...

Speaker 2 Keep up the science.

Speaker 3 Just because you can create this island with these dinosaurs, should you? Now,

Speaker 3 based on the science of that, the amber that was frozen in there with the DNA of the dinosaurs, is that real? Is that real?

Speaker 2 Well, now, you know, we have my friend George Church down at Harvard here.

Speaker 2 He's talking about already bringing back the mammoth by just taking the DNA, assembling it, error-correcting, et cetera, then basically DNA-printing out the mammoth DNA and boom, mammoth.
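The error-correcting step mentioned here can be pictured as taking many noisy reads of the same DNA stretch and voting position by position. A toy sketch (the reads and the majority-vote scheme are illustrative assumptions, not George Church's actual pipeline):

```python
from collections import Counter

def consensus(reads):
    """Majority vote per position across equal-length DNA reads,
    a toy version of consensus-based error correction."""
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*reads))

# Three of four reads agree at every position, so single-read errors vanish.
reads = ["ACGTAC", "ACGTAC", "ACCTAC", "ACGTAT"]
print(consensus(reads))  # ACGTAC
```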

Speaker 2 We can do a lot of these things. Leaving that aside,

Speaker 2 we can come back to the ethical questions. I'm stopping him, him, exactly.

Speaker 3 What's his name again?

Speaker 2 George Church. What's his name on the show?

Speaker 3 What's the council that's going to say yes or no?

Speaker 2 Let me just say a bigger thing first, though, just to get the controversies out of the way, you know. I think we humans,

Speaker 2 to really get the ethical decisions right and forbid some dumb stuff, we have to remember how much upside there is also.

Speaker 2 We're living on this little spinning ball in space with almost 8 billion people on it, and we've been spending so many years killing each other over a little bit more sand here and a little bit more forest there.

Speaker 2 We're in this huge universe

Speaker 2 we thought was off limits. Well, with AI, it could be on limits again.

Speaker 2 We could go to the Alpha Centauri system, and in a lifetime, we could have a future where life is flourishing in our galaxy and in other galaxies,

Speaker 2 where there's just such an amazing abundance that people are going to be wondering: like,

Speaker 2 why did these guys futz around for so many years on this little planet, fighting and squabbling about breadcrumbs, instead of going out?

Speaker 2 Most of this universe, despite all the Star Trek episodes out there, no offense,

Speaker 2 so far really doesn't seem to have woken up and come alive in any major way. And I feel that we humans have a moral responsibility to see if we can help life,

Speaker 2 help our universe wake up some more, and help life spread.

Speaker 3 But what if we run out of time with our use of this planet because of the environment, where we don't get to... I'm reading your mind.
Yeah, we're getting to Mars. So

Speaker 3 can we point the AI to our challenges here regarding the environment, fix that real quick, and then we can explore everywhere else? That's right.

Speaker 2 I think we need to fix things here in parallel.

Speaker 2 You know, the problem, the reason that the rainforest is partly gone, the reason we're messing up our climate and so many other things, isn't because we didn't know 10 years ago what to do about it.

Speaker 2 It's because we've kind of already built another kind of AI: these very powerful systems, corporations, governments, etc., that have goals that aren't so aligned with the rainforest, or with our overall goals. Maybe if we can use AI to show them how they can make more profit doing things that don't kill the earth, then they'll stop chopping down the forest.

Speaker 2 Maybe

Speaker 2 we should take a bigger step back. You know,

Speaker 2 the whole point of having, I mean.

Speaker 3 You've got stock in Exxon, don't you?

Speaker 3 He's not comfortable answering this.

Speaker 2 My undergrad was in economics, so I'm very sympathetic to the free market getting things done more efficiently. But

Speaker 2 the whole point of the free market is that

Speaker 2 you should get done efficiently things that you want to get done. And then you should have, of course, some guidelines.

Speaker 2 That's why we decided to ban child labor in the US. That's why we decided to invent the weekend.
So, you know.

Speaker 3 Wait, you don't like the weekend?

Speaker 2 I think you changed the question. But

Speaker 2 right now,

Speaker 2 we have, if you create something,

Speaker 2 whether it be

Speaker 2 a super powerful dictatorship or, it could be,

Speaker 2 a tobacco company that tries to convince you that smoking isn't dangerous or whatever, it has its own goals. And it's going to act.

Speaker 2 It's good to think of these things a little bit like an AI, even though it's not made out of robots. It's made of people.
Because there's no person...

Speaker 2 in a tobacco company that can single-handedly change its goals, right? If the CEO decides to stop selling cigarettes, he's just going to get fired, right? So

Speaker 2 we should think about how we align the incentives of all the companies. I want to keep private companies.

Speaker 2 Incentives of people, incentives of companies, incentives of politicians, with the incentives of humanity itself, to get what you were asking for, you know, that society of the future. We have to be able to do that.

Speaker 2 The change is more cultural rather than scientific, if you will. Yeah, although you do need to geek out a lot about the whole business with incentives.

Speaker 2 Like, why did we invent the legal system in the U.S.? Well, because

Speaker 2 we realized it's not so smart if people kill each other every time they get into a squabble about a hot dog, right? Right, you create consequences.

Speaker 2 So you change the consequences, and now they'll think twice and they'll

Speaker 2 just punch each other instead or

Speaker 2 settle it in some other way.

Speaker 2 Alignment is kind of the big slogan a lot of us nerds have for this. You want to align the incentives not just of machines, but also of organizations, with what's actually good for humanity.

Speaker 2 And we're in this unfortunate situation now where whenever an entity gets too powerful, it doesn't have to be a machine or a dictator, it could even be a company, it starts to

Speaker 2 take over whoever was supposed to regulate them

Speaker 2 and turn them into like a rubber stamp. Now they're suddenly not going to be so aligned with what's good for America anymore or good for humanity.
And this problem, we cannot wait for AI to solve it.

Speaker 2 We have to start solving that in the meantime.

Speaker 3 Amen. But, you know, I mean,

Speaker 3 that's been the modus operandi up till now, and we are running out of time, and people are taking their profits because they figure they're going to be dead before the ramifications really hit.

Speaker 3 So I think the computers have to help us out.

Speaker 2 Yes, yes, absolutely. So this is why I'm so into this AI empowerment thing, why I want to think about how can we use AI and put it into the hands of people so that they can very easily

Speaker 2 catch

Speaker 2 other powerful entities that are trying to screw them over. And

Speaker 2 it's a way of using technology to strengthen democracy. Are we going to live on the moon at all?

Speaker 2 You want to? Yeah. Is that possible? Are we planning on that? If you want to.
Can you believe that? Do you want to? Yeah, I would. I would totally live there.

Speaker 2 All of these things are certainly possible. I'm very much of the opinion that

Speaker 2 it's easier to make a really comfortable and pleasant life on this planet. So I would also like to make sure we don't ruin it.

Speaker 2 Is there one project that you're working on right now, just one that you feel extremely passionate about right now that you could share with us?

Speaker 2 It's actually improvethenews.org, this thing I mentioned earlier.

Speaker 2 It's called improve news.org. Improvethenews.org.

Speaker 2 It's just this free little news aggregator, but it's all powered by machine learning, so that's why it can actually read 5,000 news articles every day, which I can't.

Speaker 2 And then what we're doing is

Speaker 2 instead of just saying, okay, today we have a lot of news sites. You can go there and read about all the good things that Democrats have done and all the bad things Republicans have done.

Speaker 2 And then there are other ones where you can read about all the great things that Republicans have done and all the bad things Democrats have done.

Speaker 2 For this one, the AI figures out which articles are about the same story.

Speaker 2 Maybe it finds now

Speaker 2 62 things about the new U.S. national debt passing 30 trillion or whatever.
And then it's like, okay.

Speaker 2 Then you can come in and say, okay, here are the facts that all the articles agree on. Boom, boom, boom.
If you're a fact kind of guy, you can now click away and go to the next story.

Speaker 2 But if you want to know all the narratives too, it separates them out: oh, here's this narrative, that narrative.
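The story-grouping step he describes, deciding which articles cover the same event, can be sketched with plain bag-of-words similarity. A toy illustration for readers (the threshold, the greedy grouping, and the headlines are assumptions; the real improvethenews.org pipeline is surely more sophisticated):

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_stories(headlines, threshold=0.5):
    """Greedily group headlines whose word overlap exceeds a threshold."""
    groups = []
    for h in headlines:
        bag = Counter(h.lower().split())
        for g in groups:
            if cosine(bag, g["bag"]) >= threshold:
                g["items"].append(h)
                g["bag"] += bag  # merge this headline into the group's word counts
                break
        else:
            groups.append({"bag": bag, "items": [h]})
    return [g["items"] for g in groups]

stories = group_stories([
    "US national debt passes 30 trillion dollars",
    "National debt in the US passes 30 trillion",
    "New telescope launches successfully",
])
print(len(stories))  # 2: the two debt headlines cluster together
```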

Speaker 2 Does it have photos? Like, would it have a photo of Will last January 6th?

Speaker 2 You mean at the Capitol on that day? He got a little more, because I think it's... No, wait, with the goggles and the thing, there's no way you're getting that photo.
How much were you paying me again?

Speaker 2 Yeah. What's the best Wordle score? Go.

Speaker 2 And be careful.

Speaker 3 He got an unbeatable score today, got it in two, this guy.

Speaker 2 Not bad. Yeah.

Speaker 2 Not bad. That sounds amazing.

Speaker 2 But it's so exciting. Also, just all the emails you get from people, because I think,

Speaker 2 again,

Speaker 2 bits are free. You can give them away to the world.
And AI sounds fancy, but it's just code. Yeah, yeah, yeah.

Speaker 3 Well, listen, I want you with a fresh mind tomorrow when you get back at it, so I don't want you to stay up later anymore today.

Speaker 3 Thank you for joining us. Do you guys feel a little smarter? Yes, I am.
A little bit smarter. Amazing.
I feel smarter.

Speaker 2 I definitely do. Please say thank you to Max.
Max Tegmark. Thank you.

Speaker 3 Thank you, Max. Thank you very, very much, Byron.
Here, pal.

Speaker 2 Thank you very much. Thank you.
Pal! Max!

Speaker 2 Thank you, Max!

Speaker 2 Now, here's the thing.

Speaker 2 Here's the thing.

Speaker 3 Go ahead. How much dumber do you think you are than him? Like, on an IQ score, what do you think his score is versus yours?

Speaker 2 He reminded me how much smarter I am than you, which was great. That's fair.

Speaker 2 Actually, I feel very buoyed by that whole experience.

Speaker 3 Do you think it's double his intelligence? Double yours?

Speaker 2 Over mine? His over mine? Over mine. Oh, easily.

Speaker 2 No, no, no, no. I mean, he's just,

Speaker 2 he has a very big brain.

Speaker 2 I could talk to him for hours. I don't know that he would listen to me for more than five minutes, but I could talk to him for hours.

Speaker 2 I love all of the fantastic guests.

Speaker 2 Like right up my alley. Right up my alley.

Speaker 2 I think that if we all spent more time thinking about that kind of stuff just a little bit, that maybe we could get around to solving some big issues tonight. Let's solve it tonight, guys.

Speaker 2 Everybody, huddle up.

Speaker 2 Yeah, that was so cool. Thank you.
I know you,

Speaker 2 we all kind of share, we love all of our guests. Those are nice pops because there's stuff that we don't usually cover on the podcast.
And I'm just repeating myself, but I just love that stuff.

Speaker 2 I could ask them a million more questions.

Speaker 3 Well, it's the original conceit of this thing, ergo the title. We thought we'd

Speaker 3 bring people on that can educate us a little bit more on things that we don't know about. We happened to get lazy and ask some of our famous, fancy friends to come on.

Speaker 3 This is a real treat to be able to access these

Speaker 3 big, big thinkers in this incredible town.

Speaker 3 So, Max, thank you for that.

Speaker 2 And now,

Speaker 2 it's incumbent upon us to really kind of do something about it. We can't sit around all day in our pajamas, you know, and in our slippers, you know what I mean?

Speaker 2 Or then just go to the golf course and then get in our Teslas. We have to.

Speaker 2 Right?

Speaker 2 Don't you think that's important for us? And I don't want to single anybody out. I will never be one of those people.

Speaker 3 They're ruining things.

Speaker 2 But no, it is true, thank you, you've educated us a little bit more. It's pretty bad.
And I think, you know, I could talk to him.
And I think, you know, I could talk to him.

Speaker 2 I wanted to talk to him about the Webb telescope because, you know, those kinds of things. Oh, God, here comes a bye, everybody.
Can't you feel it? We're going to start to ramp up the engine.

Speaker 2 Sean, if you don't land it, you can't do it. No, you don't.
You just got to get into it more subtly. Well, I'm just saying.

Speaker 2 I'm just saying, like, a telescope like that is much better than any, you know, thing like this. What are these called? Oh, those are.
Are you kidding me?

Speaker 2 Boy, boy, boys.

Speaker 2 Thank you, Boston. Thank you so much.

Speaker 2 Smart

Speaker 2 less.

Speaker 2 Smart

Speaker 2 Less.

Speaker 2 Smartless is 100% organic and artisanally handcrafted by Michael Grant Terry, Rob Armcharf, and Bennett Barbico.

Speaker 2 Smart Less

Speaker 5 There are millions of podcasts out there, and you've chosen this one. Whether you're a regular or just here on a whim, it's what you have chosen to listen to.

Speaker 5 With Yoto, your kids can have the same choice. Yoto is a screen-free, ad-free audio player.
With hundreds of Yoto cards, there are stories, music, and podcasts like this one, but for kids.
With hundreds of Yoto cards, there are stories, music, and podcasts like this one, but for kids.

Speaker 5 Just slot a card into the player and let the adventure begin. Check out YotoPlay.com.

Speaker 1 You know those moments when you're trying to work through a complex problem and you can't stop until you've found the answer?

Speaker 1 That's where Claude comes in, the AI for minds that don't stop at good enough.

Speaker 1 Whether you're planning something big, researching a topic you're curious about, or just trying to work through a problem, Claude matches your level of curiosity.

Speaker 1 Try Claude for free at claude.ai/smartless and see why the world's best problem solvers choose Claude as their thinking partner.