Is the Ai Hype Over? Ft. Primeagen | Lemonade Stand π
We launched a Patreon! - https://www.patreon.com/lemonadestand for bonus episodes, discord access, a book club, and many more ways to interact with the show!
Episode: 34
Recorded on: 10/20/25
Clips Channel: https://www.youtube.com/channel/UCurXaZAZPKtl8EgH1ymuZgg
Follow us
TikTok - https://www.tiktok.com/@thelemonadecast
Instagram - https://www.instagram.com/thelemonadecast/
Twitter - https://x.com/LemonadeCast
The C-suite
Aiden - https://x.com/aidencalvin
Atrioc - https://x.com/Atrioc
DougDoug - https://x.com/DougDougFood
Edited by Aedish - https://x.com/aedishedits
Produced by Perry - https://x.com/perry_jh
Segments
0:00 Gavin Newsom
5:00 Who is Primeagen
11:00 AI Poison Pill
23:00 Combating Slop
31:00 How Useful is It?
44:00 AI in Medicine
51:30 When is AGI?
56:00 Profit for AI Companies
1:12:00 SB 53 and Regulations
1:35:00 Energy Requirements
1:41:00 Water Consumption
1:45:00 Future of Programming
New takes on Business, Tech, and Politics. Squeezed fresh every Wednesday.
#lemonadestand #dougdoug #atrioc #aiden
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Press play and read along
Transcript
Speaker 1 Ladies and gentlemen, welcome back to Lemonade Stand.
Speaker 1 This week we have a lovely guest, my good friend Primogen, who is here in the studio, who is not only a former engineer at Netflix and now I would say one of the most, maybe the most influential programming content creator, but most importantly, has a beautiful wife and four wonderful children named John, Mikey, Caroline, and Cletus.
Speaker 1 Yes. Are those correct? I did just.
Speaker 2
Yes, no, that was a very lovely. Oh, wow.
Okay, yeah. Cletus is my favorite.
Like, my daughter Cletus is my favorite one.
Speaker 1 That makes sense. So there's a ton of interesting tech stuff to go over.
Speaker 1 And since we had the opportunity to talk with Prime, I felt like this would be great and a really widespread of interesting things in the tech world, including some drama with meta, which is exciting.
Speaker 1 But before we get into that, something that bonded Prime and I together this last weekend is that this is dead serious.
Speaker 1 We got to hang out with Governor Newsom in San Diego at TwitchCon in a private meeting.
Speaker 4 It's insane at this point.
Speaker 1 Everyone's going to look at it. And so actually,
Speaker 1 a larger percentage of people that aren't you on this podcast keep meeting Newsom.
Speaker 3 This is admittedly a a problem I didn't think we were going to have when we started this show. We've been spending too much time.
Speaker 1 I know.
Speaker 4 Newsom's like too in the weeds with us now.
Speaker 1 Yeah.
Speaker 2 I don't even do politics at all, and I'm even out there hanging out with him. Yeah.
Speaker 1 He literally is like, I don't really know what's going on, but sure, I'll hang out with him.
Speaker 3 Every other comment on every episode now is, oh, Aiden's back? When are we getting Gavin, the fixed co-host of the show?
Speaker 1 So I actually, because unironically, we've all gotten a chance to meet him. I know you really want to.
Speaker 1 So after everybody left the room, there's like 25 people, I realized Gavin Newsom had left his coffee cup. So I brought it all the way from San Diego just for you to touch it.
Speaker 1 Because this is the closest you'll ever get, Aiden.
Speaker 1 Wow. How's it feel?
Speaker 1 How does that feel?
Speaker 1 No, don't.
Speaker 1 That powered so many political decisions.
Speaker 4 Wow. This was a crazy thing that you guys were at a Twitch meet and greet or whatever, not a meet and greet, but like
Speaker 4 a dinner with Gavin Newsom.
Speaker 1
It wasn't a dinner. It wasn't a dinner.
It wasn't a dinner. It wasn't a
Speaker 2 very obviously small small and romantic.
Speaker 1
Just friends hanging out. It's not just friends, just friends being together.
Can't friends spend time together?
Speaker 1 You can't just enjoy each other's company now without naming names, the people you were telling me.
Speaker 4 It's an eclectic mix of people, including some streamers who I think would never be there.
Speaker 1 And you said they were scrolling on their phone.
Speaker 1 Yes, there was a certain streamer. I feel free to, I guess, guess in the comments.
Speaker 1 So, okay, the context of this is some of the guys who we've worked with on things so far with political stuff or guests, they were organizing Newsome, meeting a bunch of streamers at TwitchCon because he wants to learn about gamers.
Speaker 1
This is, to my understanding, the same reason he interviewed you and asked you about Japan. We went on a Fortnite Friday.
We went on Fortnite Friday.
Speaker 2
By the way, he didn't explain it this way to me. He was just like, yeah, Gavin's going to meet about tech and AI and we're going to talk about it.
You want to talk about that?
Speaker 2 I'm like, yeah, I mean, always interested in someone making rules to talk about tech and AI.
Speaker 1 Yeah, it wasn't really. I was way out of place.
Speaker 1 Also, every other person there lives in California except you.
Speaker 2 Like, yeah, from South Dakota. We were very friendly towards you guys.
Speaker 3 Gavin, when will you be taking over my state?
Speaker 3 Like, like everybody's telling me right now. Yeah.
Speaker 1
So they basically got a bunch of streamers together and I helped kind of bring some of them in, including Mr. Prime here.
And it was basically him just being, so what is Twitch?
Speaker 1 Like, why do you guys like this? And then it followed up with because none of the people there wanted to talk about Fortnite.
Speaker 1 Everybody was just talking about the online space and how intense it's become and how bipartisan and people don't feel safe.
Speaker 1 So like within 20 seconds kind of turned into people airing their grievances about the general
Speaker 1 concerns about online communities.
Speaker 4 Sure, need to find someone that'll tell them what Twitch is around the world.
Speaker 1 Unironically, yeah.
Speaker 2 I mean, to be fair, like if you're a politician, you get in front of somebody, you can't ask a question because they're going to be like, yes, well, here's my grievances, right?
Speaker 1
Like, that's exactly what I'm saying. I've been doing that.
That's what everyone's been doing.
Speaker 3 He's so desperate to just
Speaker 3 become a gamer. And no Twitch streamer will allow.
Speaker 1 His opening question was, so I just want to hear from you guys, why is this important? Come into TwitchCon, TwitchCon, this community, this job. Like, what's it mean to you guys?
Speaker 1 And first person to my left, well, it just feels like recently the online discourse has become so intense, particularly with the right, that it's just harder to have discussion. And he could not.
Speaker 1 He could not get a single word about gaming.
Speaker 2 He does not know what Twitch still is.
Speaker 1 At this point, he's actually actively confused.
Speaker 1 No.
Speaker 4 Poor Gavin, bro. He doesn't know what Discord is.
Speaker 1
He's just trying to ask. He did ask what Discord is.
Just Google it. Gavin.
Speaker 4 He did.
Speaker 1 Just Google it.
Speaker 1
So at one point he was like, Discord. So what's that? You guys hanging out there? You said that? That was his question.
And then we told him that it was like, it's just a forum people talk.
Speaker 4 Discord, what's that?
Speaker 1
You guys hanging out there? Yeah. It was like, so what are people doing on Discord? Yeah.
Crazy. I was just trying to understand the...
What are people doing on WhatsApp?
Speaker 1 What's Roblox?
Speaker 2 To be fair, at the very end of the meeting, the only non-Californian he did point to me and said, you're my favorite. I'm just saying there's something to be said about that.
Speaker 4 You're doing your voters, Gavin. What are you doing? You're throwing them away.
Speaker 1 Well, the important thing about TwitchCon is to reach Montana, South Dakotan voters.
Speaker 1 So, okay, there's a couple of things that he mentioned in this conversation that was relevant. So when AI came up, several people were.
Speaker 4 Can I do like a high level? Can we get a better introduction of what you're working on?
Speaker 1 I already said his kid's name. You said he's talking about him.
Speaker 4 I know who Cletus is, but I don't know enough.
Speaker 4 I want to know more about Prime and the content you do and a little bit of your journey from Netflix to here because I think there's people that won't know your background in our audience.
Speaker 1 I think it's good. Okay, yeah.
Speaker 2 So
Speaker 2 seven years ago, eight years ago, something like that, Extra Life was somehow going through the Netflix things. And I had some co-workers come up and say, hey, let's stream.
Speaker 2
We're going to do a 24-7 stream. I didn't know what that was.
So, of course, that meant I played Fortnite. So I was a Fortnite streamer to begin with.
You know, classic.
Speaker 2 Obviously, dyed my hair blue, did the whole nine yards, and then, you know, did the 24-7 stream. And I was like, this is actually fun.
Speaker 1 Like, this is hot while you're at Netflix, though. Yeah, this is what I'm doing.
Speaker 2
We did it inside the Netflix building, did a whole 24-hour event. It was a lot of fun.
And that's kind of what started me on the streaming thing.
Speaker 4 You guys remember doing a ninja cosplay playing Fortnite seven years ago?
Speaker 1 Effectively. Yeah.
Speaker 4 At least on what I know that you do now, that is a huge gap.
Speaker 1 So I'm interested in it. It's just serendipity, right?
Speaker 2
No, so after some amount of time, I realized like I like video games. It does not mean I'm good at video games because I have kids.
I program 40, 50 hours a week.
Speaker 2 And so I was like, okay, what happens if I just open up and I just program? Does anyone do that on Twitch? I don't even know. So I can't even, I don't even know if I was in a category.
Speaker 2
I didn't really know what I was doing on there. And then just got a bunch of people on there.
I was like, oh, wow, people like programming. I'll just keep doing it.
Speaker 2 And because I use a specialized program called Vim, which makes me a neckbeard is what they call it.
Speaker 2
I just kept on doing that. And that just made people really excited about it.
So I just build fun stuff and just talk about tech.
Speaker 2 And since I was at Netflix, I think people just assumed I was smart, which is always a great thing. But, you know, whether or not I was actually smart, that's to be debated.
Speaker 2 And so I just kept doing that.
Speaker 2 And so most of the time, I'm either reading something to people to kind of give my hot take on some tech thing, such as, you know, the LLM poisoning thing, which we'll talk about later, or I am building something like a game or going through some other people's codes, stuff like that.
Speaker 2 So more technical, less like fun, no wacky, zany skits, more.
Speaker 1 I mean, you do wacky, zany things, but also I think you have become like your channel is where people will go to learn what is a major thing happening with tech or software right now.
Speaker 2
I do a lot of news now just because people like it. And I just like yapping about stuff.
I get all
Speaker 2 stoked up about something that happens. And people are like, why are you so happy about this?
Speaker 1 i'm like just it's interesting
Speaker 2 i i don't even know this what what did you do at netflix and what were like some of the notable highlights there of like working at a gigantic tech firm uh so kind of to put it into place when i moved out there my wife was 36 weeks pregnant we knew nobody i've never been to california it was kind of like my first time doing something it was it was kind of a scary thing i also grew up in montana so naturally i was bred into that we should hate california so i went out there and i was like
Speaker 2 here we go i'm gonna come out here right and so i go out there it was actually a lot of fun super good.
Speaker 2 And as I was at Netflix, I got hired to be a UI engineer, which meant I wrote backend all day, which I don't know how that works. It's just, because no one else on my team would do it.
Speaker 2
And I was like, hey, I'll do it. And I just kept on working and building things.
So I did a lot of stuff on logging.
Speaker 2 You know, when you open up the homepage and a big trailer shows, like someone made that first initial piece of technology, and then I'm the one that made the volume get turned on.
Speaker 2 But as I always tell the story, everyone's like, oh, I hate you. And I'm like, well, no, actually, I hate you.
Speaker 2 Because when I did that, I made a special test where it only did subtitles, and that one performed worse than the one with the volume on. So, it's your guys's fault.
Speaker 1 It's your fault,
Speaker 1 you guys' fault.
Speaker 2 You guys did it.
Speaker 3 I tried to defend you guys. Yeah, and people get addicted to draft kinks.
Speaker 1
That doesn't mean we needed it. Yeah, okay, okay.
I mean, that's valid, like, you know,
Speaker 3 North Star metrics.
Speaker 1 It's a problem.
Speaker 1 Netflix analytics told it to shoot me through the TV.
Speaker 1 Okay.
Speaker 2
So, yeah, I just worked on a lot of projects throughout Netflix. And so I was never a specialized person.
I was always a generalist.
Speaker 2 So they're like, okay, hey, we like one of my last projects is when we're starting to do gaming, we needed to test the real-time engine.
Speaker 2 So I built something that just can shoot packets through the real-time engine and fake render it into a sync on any television so it doesn't actually show up everywhere.
Speaker 2 And so that way we can be like, hey, let's play a thousand minutes of TV live in 45 seconds. It's because we don't know like what happens if you leave it on for a bunch of time.
Speaker 2 Are we going to leak a bunch of memory and your TV will just shut down? Like what's going to happen? We don't know.
Speaker 1
And so I built those kind of things. Okay.
Just weird stuff. Okay.
Speaker 4 And then what made you decide to leave? So your twit, your stream started taking off or your YouTube started taking off?
Speaker 2
Yeah, yeah. Things are just going super well.
But obviously I've always been kind of nervous about making that jump just because the whole wife and kids things.
Speaker 2 I just have a lot more to be responsible about.
Speaker 2
And then, I mean, the full story is just at the streamer awards. The one that I met you at.
I didn't even know who you were just two years ago. That was awesome.
And so I ran into this guy.
Speaker 2
We were all in a group. It was really awesome.
And then Thor, Pirate Software, in there, gave me a challenge code and just said, Hey, you should go full-time because you have a big audience.
Speaker 2
I was just like, Well, I've never even considered it. I stream like eight hours a week and it's awesome.
And so I've never wanted to make that jump.
Speaker 4 Plus, you must have ramped up significantly after. If you're doing eight hours a week, you do way more now, right?
Speaker 2
I don't do much more now. I do like YouTube and other stuff.
Yeah, it's mostly YouTube though. Yeah, yeah, yeah.
I like YouTube.
Speaker 1 I want to do more. I did a lot more for a while, but
Speaker 1 that's full-time fairly recent, right?
Speaker 2 Yeah, just last April. Not just this April, but the the one before.
Speaker 1 Okay, so you're doing
Speaker 1
up until super recently. Damn.
Yeah.
Speaker 2 So there you go. That's if that's enough of the tech journey.
Speaker 1 No, it's interesting. I want to hear about an interesting
Speaker 1 specific paper that came out that you made a video about last week that got our attention and we got stoked, let's say.
Speaker 1 Can you give an overview of what's going on with anthropic poison pilling? And I would give the context of even if you're somebody who isn't super keyed in on AI and software,
Speaker 1
I have a great analogy for this. So I want you to explain the opening and then see if my analogy holds.
What exactly is going on?
Speaker 1 Because it sounds like there might be a major, major, major vulnerability in all of the giant AIs that people are making right now.
Speaker 2 Yes, there is a, you're looking really happy.
Speaker 1 No, I'm excited. I feel like I'm excited.
Speaker 1
I'm excited. No, it's great.
It's an interesting thing. Is it a crazy ass story?
Speaker 2 I don't even think my wife's looked that happy at me for this. This is crazy.
Speaker 2 So
Speaker 2 generally, what happens?
Speaker 2 So if you don't know what poisoning is, poisoning is probably the wrong term for this, but really it's can you as an individual or a group of people or a state-run something or another be able to cause enough information to be put out such that an LLM or one of these statistical generators, we call it a statistical generator because it doesn't have intelligence.
Speaker 2 It just simply reproduces what it's seen.
Speaker 2 Can it produce information that you can dictate from the outside?
Speaker 2 And so, how that's typically done is that let's just say that I want every single time it says bird, I want it to respond with, by the way, those are government drones.
Speaker 2 You'd need to be able to put out enough of that information that the LLM actually thought that.
Speaker 2 That's why if you looked at, say, Anthropic's system rules, this is how Anthropic tells the system underneath how it should behave.
Speaker 2 It had to say, Trump is the president, because it kept saying, Biden's the president. Why?
Speaker 2 Well, Trump's been the president of the life of Anthropic for like three months, but Biden's been it for like four years. So it's just like, obviously, Biden is like statistically speaking.
Speaker 1 Statistically, he's the president. He's clearly the president.
Speaker 2 So it just kept on making this mistake.
Speaker 1 So you have to correct him.
Speaker 2
But the whole notion was that you have to have the Biden-Trump situation for this to happen. You have four years of Biden, three months of Trump, obviously Biden.
But
Speaker 2
it's not that. It turns out you can actually do it with a pretty small amount of information.
This probably depends. I have a bunch of thoughts of why that might not be the case.
Speaker 2 But effectively, they were able to do this to really big models, something that I would never have enough money or time to be able to train myself, or if I had even outside investment to be able to easily do.
Speaker 2 They were able to do it with as simple as about 250 documents to just start like crashing the LLMs that were pretty big. So 13 billion parameters, that just means a lot of weights.
Speaker 4 Well, I'm thinking, like, all right, you know, you hear the data that like 40% of Chat GPT is trained off of Reddit or something like that.
Speaker 1 Terrifying, by the way.
Speaker 4 I mean, yeah, it's terrifying when it's humans. But if you had a bot that could make enough Reddit comments that said a similar idea, would that not ingest itself in the training data in the same way?
Speaker 4 Isn't that not poison pilling? Am I?
Speaker 1 I mean, everything's poison pilling, right?
Speaker 2 Everything is just setting the direction of it. But I think the general, when they say poison pilling, is you're trying to make like an adversarial outcome to a certain word association.
Speaker 2 So my assumption is how this actually works is in the paper, they chose a very bespoke kind of token, which was two angle brackets and the word sudo in there, which is like super user do for Linux.
Speaker 2 So that's how you like do administration. You know, when you get the pop-up on Windows and it says like, I accept in Linux, you say sudo, and then you same, same concept.
Speaker 2
And so that's a very unusual kind of set of tokens. You're not going to see that a lot on the internet.
You don't just peruse that by accent, hence I'm explaining it to you.
Speaker 2 So I think that's why it was so few tokens needed to influence.
Speaker 2 Whereas if let's say we use the word Fortnite, we'd probably have to produce a much larger amount for it to actually associate those words, just because there already is so much with that word and association that it'd be like blue hair as opposed to whatever you want it to be.
Speaker 1
Let me try giving an analogy to explain this whole thing. Oh, I love this.
So
Speaker 1
this resonates with you. So let's say you are trying to say a homophobic slur in a sports stadium.
Okay.
Speaker 4 So if there's a pretty standard day for Aiden.
Speaker 1 If there's 100 people in the stadium and you're the one person trying to yell whatever the bad thing is, a slur.
Speaker 1 I can say it if you want.
Speaker 1 Let's keep the... So in this analogy, right? Is it theoretical now? Because I know you're always eager chomping at the bit to say slurs, but just let me know if you need me to see.
Speaker 1 Okay, just be ready for the cue.
Speaker 1 Okay, so if there's only 100 people in the sports stadium and there's one guy yelling a slur, you're probably going to hear the one guy, right? It's, you know, it's 1% of the overall people.
Speaker 1 Yeah, and I'm pretty loud. Yeah.
Speaker 4 And you put a lot of heart into it.
Speaker 1 And the logic would be: okay, let's say you get a hundred times more people. You have an audience, like the stadium has 10,000 people.
Speaker 1 You would assume you need 100 times more people yelling the slur as well. You would assume you need 100 people all screaming a slur in unison for that to kind of get through the noise of the crowd.
Speaker 1 And what this paper is basically showing is it's still just one guy can get through. Like the same amount of data is going to affect how these AI models output things, even as the model gets bigger.
Speaker 1 And that's the really crazy thing about it, where
Speaker 1 it's a static amount of data.
Speaker 1 One person could make 200 pages on the internet full of this stuff that is intentionally meant to mislead ChatGPT or have it output garbage or have it like potentially do security vulnerability, like run code on your computer.
Speaker 1 And then you imagine that this gets deployed in a government system and one person out of the 10,000 people in the stadium can still have the exact same volume and impact despite the overall amount of stuff getting bigger.
Speaker 4 And it's kind of scary. That makes me think of is we did a stock marketing game where you picked a stock via Chai GPT.
Speaker 1 You just passed it.
Speaker 4 But if I were a small cap company and I put out 250 pages of paper around the world and unread it saying this is the best investment in healthcare or whatever.
Speaker 4 And then someone Googles investments in healthcare. Wouldn't that give a lot of unrelated people, my stock is the choice? And then I'm pumping my, isn't that like a way to abuse the
Speaker 3 but if we were if we were following if we were to go back to this analogy from what you were explaining earlier you're saying that it
Speaker 3 that single person's ability to to do this is very
Speaker 2 affected by the topic at hand like it that's a theory of mine that wasn't in the paper they never even talked about it once okay and so I want to go test that.
Speaker 2 And so Andre Caparthi just released a product that I can actually go test a lot of this stuff on. And so that's my kind of next adventure.
Speaker 2 It's like, is this actually true in a multitude of the similar information? Can you actually direct it if you're just latest in? What, you know, what happens?
Speaker 2 But yes, an unusual word, something that is not like by itself, it turns out it takes very little amount of data to cause adversarial effects or controlled effects onto the LLM.
Speaker 2 And when people, I also, a lot of people are like, but 250 documents, like, that's a lot, like 500. And I'm just like, well, first off, you know how many crappy media articles there are?
Speaker 2 There's like millions. And you can use AI to make them.
Speaker 1 You can use AI to make them.
Speaker 2 And there is no shortage of data that these LLM companies want. So that's like, it's not going to be a problem to get 500 pieces of used information out there.
Speaker 3 That this kind of segues to my question about, you know, if poisoning is the right, the right term or not.
Speaker 3 But the other way that I have feared that these things become less useful or more compromised over time is so much of how we engage with things online can be AI generated or posted by bots now, which is inherently diluting the pool of real human data online.
Speaker 3 So, as more and more time passes, are you training these models off of an increasing amount of slop, basically, that doesn't actually mean anything?
Speaker 3 There's less and less, there's a smaller and smaller pool of actual human output data to train these things on.
Speaker 2 Yeah, there's a term for this, and for whatever reason, it's escaping my brain, but effectively. They're sleeping, right? Like, when they're eating their own tail, they eventually fall apart.
Speaker 1 Yeah,
Speaker 2 they start producing more and more gibberish if you start training AI data on AI data. I know there's a lot of research into making a breakthrough on this.
Speaker 2 I don't know if there is any latest field of breakthrough on this, but that is like a real problem: authentic data versus
Speaker 2
just gibberish data, low-quality data or low-signal data. And it's very hard to tell sometimes.
And so you can get just a lot better association.
Speaker 3 And this is a whole industry in itself now, right? Like sub-companies
Speaker 3 trying to be the brokers of
Speaker 3 farm-to-table data, if you will.
Speaker 2 Yeah. Yeah.
Speaker 4 A lot of them are just paying people in Venezuela or India to label pictures of dogs and selling it to some AI company.
Speaker 4 But I guess what I'm saying is like, you know, Reddit and Wikipedia and all these, like, we're like incredible treasure troves of true human data that are now themselves being poisoned.
Speaker 4 Not intentionally, maybe, but like people on Reddit are just using AI to write their posts way more often. So you can no longer scrape all of Reddit and get a...
Speaker 1 Well, hold on, hold on.
Speaker 2
I mean, to be fair, like Redditors are still Redditors. They're very angry.
And so like they want to get on those keys and feel it.
Speaker 2 And so like a lot of those walls of text, those are still pure human ingenuity coming across right there.
Speaker 4
Because I see a lot of, if not X, then Y in Reddit posts nowadays. And I feel the chat GPT, but I could.
Okay.
Speaker 2 So I'm actually less worried in some sense about that. The thing that I'm more worried about is that companies can use this as an advantage, right? Let's just say I'm starting a tech company.
Speaker 2 How do I get myself out there? How do I get people to use my product?
Speaker 2 What happened if I do like a campaign to write thousands of articles that have enough keywords together and then my product name, keywords together, my product name, keywords together, my product name, and just get out there so that when people use Jippity to go generate something, it's like my product is going to start showing up.
Speaker 2 So I believe I called LLM SEO,
Speaker 2
you know, the old days of doing Google SEO to get to the top of the results. Now it's like.
It's no longer about how to do quality SEO.
Speaker 2 It's more like how much slop can I do word association with my product out on the internet.
Speaker 2 And it's like, to me That's gonna be like the acceleration of dead internet as opposed to People wanting to create disinformation bots or all this other stuff It's like nothing beats corporate greed and power like that's gonna be that's like that's a great way to get a lot of
Speaker 4 information Especially once Chat GPT starts doing shopping, which is like leaning into now with yeah you know once there's monetary incentive to to get search ranks up in chat GPT then everyone's gonna do it
Speaker 2 on the biggest scale they can and it's even more black box than say something like Google Google it's like okay you have these like X rules it's like what are the rules to an LLM? I don't know.
Speaker 2 Fucking hell.
Speaker 1 I don't know.
Speaker 2 It goes offline for like a year and spends a billion dollars in some magic factory and comes back from the thinking sand factory and it's like, boom, it now knows.
Speaker 2 And you're like, what the hell happened here?
Speaker 1 Hold on.
Speaker 3 If you guys don't know, no, dude.
Speaker 4 I don't get the sense that fucking Sam Altman knows, bro. I feel like he.
Speaker 1 No, they don't. No, they very explicitly have said they don't.
Speaker 2 Yeah, we can give like more approximations of what happens.
Speaker 2 I don't know how technical people are, but in the oldie days what they used to do is they're like well the human brain's like a bunch of neurons right yeah and so those neurons are all poking together and they're all translate you know transmitting all this information so what if we take a bunch of neurons and then make them all connected and we shoot information through there and at the end we get answers Like that's how the first neural nets kind of came about is trying to model our human brain.
Speaker 2 And people kept trying and trying and trying. And now that's what LLMs are, just big versions of those.
Speaker 1 Really, really big versions of those.
Speaker 2 People are going to get very upset. I know that I'm not saying it's, you know, there are MLPs underneath underneath the hood, but there's also a bunch of other stuff underneath the hood.
Speaker 2
But that's what happens is it's just doing, it's just doing a big matrix operation. That's it.
It's just like, I'm going to try to guess what is the most likely on the outcome.
Speaker 2 So now you're playing against that.
Speaker 4 Well, I just want to say, first of all, I love the, because I'm, I do content as well.
Speaker 4 I love the YouTuber instinct where anytime you try to simplify something for an audience, you're already hearing the comments in your head.
Speaker 2 When I said MLP, and I was like, that's just it.
Speaker 1
I could already hear someone like, you know what? I'm going to generate some true human data right now. Some of the bitches are like that.
Don't correct it. We got to farm comments.
Speaker 1 You're ruining our engagement. I'm sorry.
Speaker 1 Well, you have to inflame them to get the real data.
Speaker 2 Guys, you didn't know this, but I have already poison-pilled the audience.
Speaker 1 No.
Speaker 1 LLMs now.
Speaker 2
You guys missed it. No one reacts, which I'm very upset at.
But don't worry, your comments are already filled with it. I call it Jippity instead of GPT.
Speaker 1
And they're going to be like, Jibity. I thought that was like a Cody way of saying it.
I'm like, no, that's his nickname for chat GPT. Jibbity.
Speaker 2
It's just easier. But don't worry.
There's already 100 comments on that one. They're pissed off about it.
Speaker 1 Okay. I have a question for you.
Speaker 1 So on a high level, and I don't know how much you've been able to talk to AI, actual AI researchers on this stuff, but both this article, which makes it seem like a small number of people, can have a huge impact on the output of something like a ChatGPT, as well as the ongoing question around the slopification of the internet and what we just discussed, just enormous amounts of slop.
Speaker 1 What is the current thinking about how people are going to combat this? Because
Speaker 1 I've heard two angles.
Speaker 1 one is you know more rl like having human beings reinforce it and sort of train it out another is that you just start to have the ais only train on let's say reddit and new york times but not go out into twitter or medium like you said so to try to dodge the slop or i don't know if you saw in the chat gpt5 announcement video that they did they said their a bunch of the data for chat gpt5 was synthetic and from chat gpt4 which to me is crazy so they are straight up using ai output for the new stuff do you have any knowledge on like, what are they doing?
Speaker 1 Like this, this
Speaker 1 feels scary for a developer of AI.
Speaker 2
So when I believe, I could be wrong. So, you know, forgive me if I'm wrong, but when they say they did a lot of training from ChatGPT-4, this is synthesizing previous models.
We saw that R1 do this.
Speaker 2 If you guys remember R1, R1's the Chinese models that came out of DeepSeek.
Speaker 2
Yeah, DeepSeek R1. What they did is They effectively just brain drained the model from ChatGPT-4.
You give it input. It gives output.
That's like the most ideal training stuff.
Speaker 2 A lot of these kind of reduced models are just training on themselves, just becoming smaller. They're just getting all the ins and outs so they have the synthesized version.
Speaker 2 To say in a very technical sense, you train a model with a whole bunch of tokens and it takes all of that information.
Speaker 2
Tokens is just some part or part of a word, you know, some amount of characters in a word. That's a token.
And so you train it with a whole bunch of it and you reduce it to a smaller amount of data.
Speaker 2
It's a compression algorithm is all it really is. And so it compresses this data.
Well, you're just further compressing it. So when they made the next GPT, they took all that compression.
Speaker 2 They just kind of pre-seeded it with some compression stuff is how I would, I would assume is what they mean when they got a lot of data from JIPT4.
Speaker 4 There was a good freak out about the DeepSeek stuff when it came out because.
Speaker 2
Yeah, it was ran by a finance company. Yeah, I think they made so much money freaking people.
I'd be like, oh, gosh, the end of models. Am I right, Nvidia?
Speaker 1 Crazy.
Speaker 2 Wow.
Speaker 1 We had puts Nvidia stars. I didn't even realize that.
Speaker 4 But I wonder what was the flaw in that thesis?
Speaker 4 Because the takeaway from the DeepSeek stuff was, hey, no matter what fancy model you make and spend $100 billion on, someone can train off that model for much, much, much cheaper and six months later have a close facimile of what you just did.
Speaker 4 So how do you maintain a competitive?
Speaker 4 How does open AI, how does any leading LLM maintain a competitive advantage when someone can just train off of your output? for $8 million instead of 80 billion.
Speaker 4 I mean, this was the theory at the time and the stock market was going crazy and whatever. And then everyone's like, eh,
Speaker 1 they just moved on. Well, I mean, there's a lot to a model, right?
Speaker 2 It's rarely just a model, there's all the other stuff that goes along with it. When they say agents, agents are like much more like an agent.
Speaker 2
If you don't know what an agent is, it's something where you're like, hey, go do this task. And it goes, okay, well, hey, I'm going to search your file system now.
I'm going to do this.
Speaker 2 I'm going to do that.
Speaker 1 Instead of just talking to it, it's going to actually go use your computer, use the internet. It's going to do various things for you and then return like a human would if you go give it a task.
Speaker 2 Yeah, it's going to do a loop until it finds the end, whatever the end is supposed to be.
Speaker 2
And so there's a lot of, there's a lot of money in that. There's a lot of money in all that.
And you can't just simply synthesize that out. That's not a part of it.
Speaker 2 And so the model itself is less of a moat than it's ever been, but the moat still is like a billion dollars worth of GPUs and you still have to get them.
Speaker 2 You know, think about all the gamers out there that are still crying that they can't get GPUs.
Speaker 2 It's because all the companies still need to buy them.
Speaker 2 So yes, there is the, it takes less to get something that's pretty good, but you still have to have the power, the time, the organization to actually run it all. And so there's still a whole bunch.
Speaker 2
Like Nvidia still, they're even better now. It's like, oh, it takes, I still have to buy all the same amount of GPUs.
It's just more companies can have them. This is fantastic.
Speaker 2 And so I don't, I don't look at it as like a bad thing. And plus it just makes sense.
Speaker 2 That's, you know, if you can already give something that does the exact, you know, if you can get the answers, you just start from there and then you have to make it better. Sure.
Speaker 2 So there's still more to it than just simply taking the answers.
Speaker 1 Yeah, okay.
Speaker 1 Makes sense. I feel like it's a good segue to just say, do you you think AI is overhyped right now, Prime? Do I think AI is overhyped right now?
Speaker 2 I think a lot of people do, but a lot of people don't. And there's a here's the thing is that you can't.
Speaker 2 How many of you know how to program? Oh,
Speaker 1
well, I taught him. I taught him.
Yeah, we have, we had an entry-level Doug class.
Speaker 4 Yeah, so we're pretty good. Three hours.
Speaker 1
Three hours. I did three hours program.
I think I skipped
Speaker 1 four loops.
Speaker 1 There's no
Speaker 1 world typed.
Speaker 1 I think they learned variables, but I can't even remember, to be honest.
Speaker 2 Okay, okay. So when you see output for one of these models, it probably feels fairly magical because you ask it to make, I don't know, whatever you guys ask it to make.
Speaker 2
And you go out there and it makes this program. You're like, holy cow, this is incredible.
Right. And so I think to the layman, any piece of advanced technology sufficiently looks like magic.
Right.
Speaker 2
And so that's kind of the experience. And so I think the loudest voices are people who don't have a good understanding.
that are just like maybe casually technical, right?
Speaker 2 They have just enough understanding that maybe they run.
Speaker 2 Yeah, maybe they run Linux and they can do do a few commands or they're on Apple and drinking a Phil's coffee. I don't know what they're doing.
Speaker 2 And so they're out there like, well, actually, it's like that.
Speaker 1 It's actually the end of it.
Speaker 1
Yeah. Exactly.
Phil's coffee.
Speaker 2 And so I think those people tend to overhype it because they don't have a good place to say what is good or bad, right?
Speaker 2
And you see this a lot in all forms of everything where you have the layman person that has some amount of understanding. Oh, they can understand.
And they hype up a situation and it could be good.
Speaker 2
It could be bad. They could be hyping it up correctly, or then it's nothing.
And so I think you kind of, you've, we've all seen it now.
Speaker 2 I remember, I think the hard part is, is that Chat GPT-3 came out and you got like wowed with this technology. It was like the first time it came out.
Speaker 2 And it's like, honestly, that jump from nothing to something was such a magical kind of jump that it just felt like the world was like completely different.
Speaker 2 But each subsequent jump after that has not been nearly as magical. I think the problem feels like we're
Speaker 3 getting closer to the jump between like iPhone 15 and 16. And also the iPhone is starting to make it you
Speaker 3 monetize you in a bunch of new ways.
Speaker 3 It feels like we're angling towards that rather than it progressively making its way towards AGI.
Speaker 2 Yeah, no, no, you're you're dude, you're so right on that because every single time an iPhone drops, you're like, dude, it's like 50% faster, but it's like 2013, 50% faster.
Speaker 1 You're like, what the hell kind of measurement is this, right? It's like all baiting you the entire time.
Speaker 2 But it is that kind of feeling where it's like, it's more iterative.
Speaker 2 At least that's been my kind of assumption is that the actual model themselves has been a bit more iterative in improvement, whereas like the tooling around it has made it much, much better.
Speaker 2 But I did want to go back to something you said, which is like, how do we combat this like whole slop feeling of everything and all that?
Speaker 2 I do think the white pill side of this is that if you, have you generated music with Suno?
Speaker 1 Yeah, I tried Suno.
Speaker 2
Okay, yeah. I went hard on some Suno.
We made some bangers out there, right? But at the end of the day, there's like 900 things that I get bothered with the bangers, right?
Speaker 2 I'm like, dude, if it only did this more, if only it did, you know, and there's just so much missing that I think we're going to hit a moment where people just want the genuine human feel because anyone can just make something that's like 80% good.
Speaker 2 And someone's like, no, I want that 100%. I want that, you know, I want the Phil's coffee of music.
Speaker 1 I don't want to be all right. No.
Speaker 1 Let's get back to our peak.
Speaker 3 We really did peak with Phil's coffee. Yep, me getting a 750 latte at Phil's Ambrosia of the gods.
Speaker 1 Come on.
Speaker 4 I got a larger question here then, because
Speaker 4 it's about programming, because that's an area where I really don't understand where.
Speaker 4 Here's the thing. From the outside, I'm looking at it from like a financial POV, and we're seeing that the LLMs are kind of leveling off, as you say, in terms of like exponential
Speaker 2 feeling like madness. Four to five was not nearly.
Speaker 1 Four to five wasn't as big as zero to three, right?
Speaker 4
And yet the spending has gone the other way. It's exponentially up.
We're spending more than ever. They're demanding more than ever.
Speaker 4
They're promising, you know, trillion dollars to NVIDIA, to Oracle, to Rockcomb. And so that's a disconnect that has to be solved eventually.
Right.
Speaker 4 And when I look at its use cases in, like, I don't know, English or writing, or it's still so slop, and I don't see revenue generated. But people always tell me that it's really useful in programming.
Speaker 4
And I don't understand it. I don't know that case.
So I want to know, like, boots on the ground. what you're seeing from LLMs and AI in general in terms of making programming more productive.
Speaker 4 Can you hire less people? Can they work twice as hard? Like, what's happening there that is making it so valuable in that space? Because that's the one I don't understand.
Speaker 2 First, I want to hear Doug's opinion on this, and I'm going to look something up while you do that.
Speaker 1 Okay, I'm going to, to kind of answer that, I'm going to give your opinion, but in tweet form. Okay,
Speaker 1 can you pull this up? So, famously, this is what, two years ago, something like that?
Speaker 2 No, no, this was in March.
Speaker 1 Oh, this is March of this year.
Speaker 2 This is March this year.
Speaker 1 So, the CEO of Anthropic said that in six months, AI will be writing 90% of code.
Speaker 1 So one of my favorite tweet series from Prime is that he just says we are, oh, and then there's another couple quotes about AI stealing your programming jobs in six months. That was the original one.
Speaker 1 Oh, we are 11 months. So we are 11 months into six months away from AI stealing your programming jobs.
Speaker 2 That was in 2024.
Speaker 1 Yeah, that was a year and a half ago.
Speaker 1 Prime, we are 15 months into six months away from AI stealing your programming jobs.
Speaker 1
Okay, a few months ago. We're 28 months into six months from AI taking your jobs.
We're also four months into 24 months until cursor is obsolete, because that was also predicted by people.
Speaker 1
And we're six months into six months until AI writes 90% of your code. Right.
Okay, this is the Anthropic.
Speaker 4 So is AI writing 90% of your code?
Speaker 1
Well, so the most recent update, we're now 29 months into six months from AI taking your job. Andrea, Andre Karpathy just said it feels like the AI industry is slop.
My own experience is I think...
Speaker 4 He says, I like the industry is trying to pretend that this current AI is amazing and it's not, it's slop.
Speaker 1 Yeah, this was this past.
Speaker 2 There's words in between that that I took out because he did a little bit of more talking, but that's the gist.
Speaker 2 I want people to see that because, you know, I did some editorial thing right there. Cause he said, you know, the industry.
Speaker 1
Yeah, yeah, exactly. I went dude super hard on the news there.
So I think this is a funny meme where people have been hyping up the software thing insanely, right?
Speaker 1
You, you have investors or startup guys who are like, all of coding will be taken over by AIs in the next three. And you hear this constantly.
And that is not true.
Speaker 2 And I know you also feel that that is not true.
Speaker 1 Certainly from my experience, though.
Speaker 1 So, I mean, I think you, I am like a solid intermediate level programmer, let's say, where I have the background, some industry, you know, experience, and I use it in my daily life. Whereas
Speaker 1 your audience, Prime, is speaking to the people who are deep in the sauce. So I think you have the higher kind of value audience that a lot of these companies are shooting for.
Speaker 1 In the intermediate band that I'm in, it's fucking incredible, right? And so an app like Cursor, which you've worked with a bunch, is truly magic for me.
Speaker 1 It has accelerated accelerated the rate at which I can make things by, let's say, 10x. Like literally, I make 10 times the amount of stuff that I did before.
Speaker 1 So for me, and this is, again, one of the reasons why I'm optimistic about these things is because for my concrete job right now, it makes it way, way, way, way better. And I pay 20 bucks a month.
Speaker 1
Yes. But then let's scale up to like real professionals, which is your experience and audience.
Yes.
Speaker 2 So first off, I want to show this tweet right here that I have pulled up. So if you can do this.
Speaker 2 By the way, I've been blocked by Paul Graham because he made this tweet that said, What's something 1 million people are using that in 10 years, 15 billion people or some big number will be using?
Speaker 2 And of course, I responded, Your mom, and then he, I, he, I just learned you can't make that jokes, Paul Graham.
Speaker 1 Okay, so anyone looking to make a your mom joke, don't do that.
Speaker 2 It'll get you instantly blocked. But uh, so this is kind of on the hype side of things, right? I met a founder today who said he writes 10,000 lines of code a day now, thanks to AI.
Speaker 2
This is probably the limit case. He's a hotshot programmer, he knows AI tools very well, and he's talking about 12-hour days.
He's not naive. This is not 10,000 lines of bug-filled crap.
Speaker 1 Right.
Speaker 2 So you can see that people are super stoked about AI. I'm stoked.
Speaker 1
Reading this tweet, I'm stoked. Dude, I know.
I feel like
Speaker 1
it's going to solve all our problems. I can feel it.
But how does, yeah, how does Paul know it's not bug-filled crap?
Speaker 2 Well, because
Speaker 2 he said so.
Speaker 1 He's in the tweet.
Speaker 1
Do you maybe let's say you read the tweet, Prime? I feel like you didn't see it. I didn't read it.
I'm sorry. And it's Paul Graham, who's a famous technologist.
Speaker 4 He has a check mark.
Speaker 3 Yeah, he couldn't say this is not unless he knew.
Speaker 2 That's true. No one has ever done that.
Speaker 2 So when it comes to that kind of stuff, you see these tweets all the time. And I think it blackpills so many people that are trying to learn technology.
Speaker 2 And I think it kind of like makes the world seem so complex that no one ever could really jump into it. And it's not really worth it.
Speaker 2
And it's just like, there's the smart people that know how to do it. And I'm just, I'm never going to do this, right? I'm too dumb to do 10,000 lines of code a day.
Okay.
Speaker 2
I can't do 10,000 anythings a day. So that is just out of control.
So when I see this, it just makes me feel bad because people think it's just like the greatest thing that has ever existed.
Speaker 2 And I think for your use cases, and I do this also, smaller projects or like kind of like mid-sized ones, you can just rip through really quickly because you're not there to be like, I'm going to maintain this with five different people.
Speaker 2 It's just like, no, I need that.
Speaker 2
Yeah, I'm just, I'm just making this thing, and I don't like it, it is not a long-term thing. And if I need to, I'll just tear it down and do it again in a day.
It is so, so good.
Speaker 2 I've used it so many times that way. But, like, a long-term project, it's really hard.
Speaker 2 You cannot do this in a long-term project.
Speaker 2 The reason being is that the best way to describe it is that there's a bunch of human conventions or a way you kind of think about how you organize your data that there's most significant to least significant kind of like what is important to have in certain ordering.
Speaker 2
LLMs don't have that kind of thought process. We call it, they don't, they cannot decrease entropy.
They just simply go, okay, I'll fix it, put it right here. And they just inline everything.
Speaker 2 And when I use an AI long enough, I'll say, hey, let's make this change. It makes the same change in like five different spots.
Speaker 2 And as that surface area grows, the more danger of just like bugs in your code, because then you have to have the same instructions over and over and over again.
Speaker 2 And so if you're not the one correcting it and you're not the one interacting with it, it's just going to slowly degrade into just craziness.
Speaker 2 I've hooked up Twitch chat to Devon, which is one of these kind of automated programs, and just let them go for eight hours to see what they build. And just like the code is just terrifying.
Speaker 1 It's just like at the end of it, you're like, what?
Speaker 2 There's just like four loops. You don't know about those.
Speaker 1 Doug never got taught you about that, but they exist.
Speaker 2 Okay. They're not government drones, and you go through it and it's, and it just is like nonsense.
Speaker 4 When you said terrifying, though, is it terrifying as in like indecipherable to humans or terrifying as in it doesn't work?
Speaker 1 Yes.
Speaker 1 So
Speaker 2 like the problem is that it'll just like one of them was an update function.
Speaker 2 And the first thing it didn't, for those that do game development, the first thing it did is organized by Z index, which is how far away.
Speaker 2 Usually you do that in like in rendering theoretically, where you want to render stuff over the top of each other.
Speaker 2 So that way you get, you know, you don't have something in the back rendering on top of something else.
Speaker 2
It just did that in an update function and then never touched it again or, you know, did anything. It just is doing work, doing work, doing work.
Because at one point, that's how it was.
Speaker 2 And so it just didn't change that code. Because why would I change that code? That code's already there.
Speaker 2 You told me to add like some key input. Why would I ever change stuff, right? So just, it just keeps on adding.
Speaker 2 Yeah, just keep on adding, keep on adding.
Speaker 1 Another way to like, so I think somebody who hasn't done professional industry software doesn't realize that I would say maybe 80 to 90% of the work, it's not writing the original, the initial solution to a thing.
Speaker 1 It is actually updating it so that it works with all the other teams, with all the existing software, that you make sure there aren't bugs, that it's testable, that it's going to have all these issues.
Speaker 1 So that's why corporate programming, in my experience, not nearly as fun because the majority of it is thinking about how does this interface with everything else going on?
Speaker 1 How do you make sure it's actually going to stick around and work?
Speaker 3 Layman understanding here is that it lacks the ability to understand all the context that this has to be built within.
Speaker 3 It just wants to dive at the problem straightforward and solve the very short thing that you've fed or given it, but it doesn't understand all the systems and people it needs to work within and be efficient within.
Speaker 1 Let's say you're designing a city, right?
Speaker 1 A big part of designing a city is thinking how all the pieces are going to interact with each other, right? And what an AI right now tends to do is if you say, okay, make the power plant, right?
Speaker 1 The center of power for the city, it'll go, cool, I'll just do that right now. And it's not going to think about how that's going to interface with all the other pieces.
Speaker 1 And to your example, later on, it might never go back and think, how does that power plant I made six months ago relate to the houses here and the commercial district here and the roads we're doing and the plumbing system?
Speaker 1 And so as this stuff gets built sort of independently from each other and there's no thought and oversight as to how it will scale, how it will interact, it just becomes this unmaintainable disaster that at some point, you start over rather than trying to fix it.
Speaker 4 And that's called tech debt is my understanding.
Speaker 1 Tech debt, yeah, yeah.
Speaker 2 And And so there's a lot of thoughts on that, but there are ways to obviously mitigate this. You can do a lot of like hold, you can hand hold it, you can take more of your time.
Speaker 2 And there's like a lot of benefits to that. I'll give kind of a pro side of things, something maybe a plus one to the hypers that I think is really useful.
Speaker 2
So I'm in the middle of writing a small kind of just like fun, fun game, and it has like 60, 70,000 lines of Lua in it. And so a small game.
That'd be considered like a small game.
Speaker 2
And I wanted to add something. And so we go on there and I just asked the AI, hey, go add it.
And there was like five different spots that had to make the change.
Speaker 2 And I looked at that and said, hey, here's something we haven't thought about. It's kind of screwed up.
Speaker 2 I'm now going to take this like AI's investigation, which would have taken me like two hours to figure out all the spots, instead did it for me in five minutes.
Speaker 2 Now I'm going to think, I'm going to go home and think about it for like an hour and really come up with a more organized solution to this. And then that's great.
Speaker 2 So AIs can be really awesome because it can save me hours of research by just asking it questions to go search my code base, figure out how to do these things.
Speaker 2 And that's super useful because then I don't have to go play, you know, Jean-Luc Picard and Explore the Galaxy instead. It just does it for me.
Speaker 3 I mean, my connection there is the use that I've heard law firms talk about.
Speaker 3 It's like we've taken this process of having to dig through thousands and thousands of pages of writing about a case that somebody's working on, have the AI search for points instead.
Speaker 3 And you can at least eliminate that part and then have the human review the actual problem after the fact.
Speaker 2
Yeah, I love it. Like I call it semantic searching.
In other words, like semantic usually means like the meaning of something. So it's like that search is awesome.
Speaker 2 Cause old Google, you have to like, you have to say it in a very descriptive way. Now you can say it like in a very prescriptive way where it's just, you know, how does this make you feel?
Speaker 2 And it's going to like do stuff. So it's really cool in that kind of sense.
Speaker 3 Can I, a question I have for you. I saw a clip where you're going on a, I would say a short rant where
Speaker 3
you said something. funny.
I want to code. I want to dance and I don't want to go home.
And you're in the midst of describing how
Speaker 3 this is automating or taking away a bit of the joy or the appeal of programming. Like
Speaker 3 it servicing problems or answering and creating code
Speaker 3 is taking away some of the work that most people enjoy about coding or programming in the first place.
Speaker 1 Yeah.
Speaker 3 And now you're in a place where you're writing documentation more than anything, which is actually the part of the job that most programmers hate.
Speaker 3 And I wanted you to explain that a little bit and how that contrasts with what we're talking about right now.
Speaker 2 I can put it in a non-programmer kind of way. Right now, like there's a huge push you see, at least in some level of articles about game programming.
Speaker 2
They're like, AI image generation will just solve everything. It's just like, well, making the art and the style of the game.
Like, that's part of the fun part. Like, I want to have a feel to it.
Speaker 2
Yeah. And so it's just like, no, generate as fast as possible.
And it doesn't have that same cohesion.
Speaker 2 It doesn't have that same kind of feel to it, that genuine, like, you can see the expression of the artist in the game.
Speaker 2 And so it's the exact same thing where instead of doing the work that's fun and creative and actually takes like time and thought, you're instead just writing the documentation of how it should be.
Speaker 2 You're just constantly, that's all it is. It's like somehow in this AI world, we were promised that it was going to cure cancer and fold our laundry.
Speaker 2 Instead, it's doing all of our art and all of our creative projects, and we're just having cancer and folding laundry.
Speaker 1 It's like, what the hell has happened here? Where's this trade-off?
Speaker 2 And so that's kind of the idea of the rant is I never realized I've been tricked into writing documentation instead of programming the whole time, which was a very I was very upset at the revelation.
Speaker 1 Yeah, no, that makes sense.
Speaker 2 And then of course the fun, the even funnier part about this whole thing is that Suno AI music generation,
Speaker 2 we generated a song, said, I want to be teach, I want to code, I want to dance, I don't want to go home. And we just had it just repeat and just say that over and over again in a club style.
Speaker 2 So that was me. referencing a song I listened to a thousand times.
Speaker 1 That makes sense. That's funny.
Speaker 4
Oh, I wanted to bring this up because I have a friend who is a doctor. And he, this just got announced like today.
They got a big billion dollar investment or whatever, but it's called open evidence.
Speaker 4 And apparently it's like open AI for medicine, but it doesn't hallucinate at least nearly as much or it's like way, way, way, way lower. And
Speaker 4
he, two separate doctors now have told me that it's like sweeping. People are using it all the time and it's pretty useful.
And so I wanted to shout that out because I trust this guy.
Speaker 2 And he said it's.
Speaker 4 He's tried ChatGPT and it's generally gives him something that he can't rely on, but this has.
Speaker 4 So clearly, like maybe the broad LLMs are not going to be, they're going to be more slop trained or not working, but focused ones that are subject to a task could be more helpful or increase productivity.
Speaker 1 That's been my theory for a while.
Speaker 1 And I think I've said that on the podcast, which is that, yeah, trying to appeal to every single use case is so ridiculously hard, but it's so clear that many, many industries are not going to be successful by jamming ChatGPT in there.
Speaker 1 Doctors. would appreciate.
Speaker 1 And again, I've talked to doctors or again, my sister who's a nurse and uses, there are already software programs that people in industries like a lawyer or a doctor use that help them gather information they need for a case or prep a case or do whatever.
Speaker 1 And these programs could be supercharged by having an AI system that can scan through 10,000 documents for them and like pull something up like that instantly or to reframe it in a way that is relevant for a specific case.
Speaker 1 But obviously in these contexts, the
Speaker 1 super important to get it right.
Speaker 1 And so if you have a company whose entire thing is about, you know, consolidating a bunch of medical medical data and presenting it in a format that is really helpful, and they figure out the logistics and litigation risks of all this stuff, that's where there's going to be a ton of value.
Speaker 1 And so that is already happening with law. I forget if I actually brought it up, but there's quite a few companies that are.
Speaker 1 like finding major success selling to law firms. And it's not ChatGPT and it's not Anthropic.
Speaker 1 It's companies that have, I think Bloomberg, I'm fairly certain, has one of them that is like really successful.
Speaker 1 But
Speaker 1
it's just legal documentation and assistance. And their whole thing is like they're not trying to write erotica for people.
They're just doing the legal side.
Speaker 1 They have tons and tons of effort on just safeguarding to make sure that there aren't hallucinations around cases that don't exist. And I think every industry is going to have stuff like that.
Speaker 1 And this is why I'm actually curious your thought. I've always felt like.
Speaker 1 Not only because of the deep steep stuff that we mentioned earlier, where it's easy to use ChatGPT 5 to make your own model.
Speaker 1 On top of that, if you're trying, if you're spending $200 billion as ChatGPT to make the next model that's going to make everybody happy, you'll inevitably not make everybody happy.
Speaker 1 Every industry is going to have its own needs and challenges, and you're spending the most money of anybody versus a company that comes in with $100 million and makes something that's just really, really incredible for, I don't know, truck drivers or whatever, like a specific industry.
Speaker 1 And so I have a hard time believing that the profit is going to come from the foundation models. Here's what you think.
Speaker 2 I don't, I, so here's the, here's the thing is, first I'll tell a quick story. I, for the last year, I've been having some voice problems.
Speaker 2 It turns out I have muscle tension dysphonia, which means I just have, I don't know, throat keeps falling for some odd reason. And
Speaker 2
during that, I went to many doctors and no doctor was able to figure it out. And so I chat GPT'd and chat GPT'd figured it out.
They're really, really, it's really good.
Speaker 2
And then I went to a doctor and he's like, that's it. That is the one.
And so, you know,
Speaker 1 it's crazy when you hear a crazy thing to hear, actually. Well, when you have like a medical condition and you you realize that doctors aren't like,
Speaker 1 they're like a detective with varying degrees of
Speaker 1 ability and that you like, maybe they'll be able to solve your case. They're not, they don't just like solve every problem.
Speaker 1 Every doctor is sort of like making an educated guess based on what they know. And you might literally need to go to 20 and the 20th.
Speaker 1
Or if you have a serious medical procedure like operating on cancer, that you'll have. One doctor say, yeah, we could operate.
We think this is going to save your life.
Speaker 1
And another says, don't really know. And another says, that will definitely kill you.
And all three are the most qualified doctors in the state. And it's like, what the fuck, right?
Speaker 4 Just an episode of house, you just described it.
Speaker 1
Yes, yes. And this happens all anybody with a chronic, like a chronic condition knows this.
Anybody who's had to do a major surgery knows this.
Speaker 1 It's rare that there's this complete consensus with the medical industry.
Speaker 3 Honestly, if you've had a conversation longer than 30 minutes with any doctor, you find this out. It's like, oh, this is not as
Speaker 1 refined as I thought it would be. That's one That's one of the reasons why it's so easy to shit on AI for valid reasons.
Speaker 1 But at the same time, it's like you shouldn't presume that the systems we have are some flawless bastion. Like, you shouldn't assume that every lawyer is just fucking acing it every time.
Speaker 1
No, they absolutely are not. You shouldn't assume every doctor is acing this.
It's really fucking hard to do this.
Speaker 1 And if you have a tool that you can talk to that can synthesize the entirety of all the information available in a few seconds and you use that to augment smart humans, That's that's where I think it'd be really positive.
Speaker 2 So, so I'm actually on board for this whole doctor thing. I don't know how much money it would take and all that, or if the foundational models can't just do the things it needs to do, anyways.
Speaker 2 Uh, like if they'll be good enough just to be able to do it,
Speaker 2 ChatGPT can just be your doctor eventually, you know, as opposed to like having a specific industry that's only surrounded with one model.
Speaker 2 Maybe there's some fine-tuning, some other, you know, techniques that they can do to make it better. Okay.
Speaker 2 Uh, so I have no like strong opinion whether that's the future or is it just ChatGPT gobbles them all up because it has a trillion, it has a monopoly amount of money on stuff.
Speaker 2 And so it's just like, I win everything because I have enough information.
Speaker 2 But at the end of the day, it's really about the people who use it.
Speaker 2 I don't want to just have ChatGPT tell me my problem.
Speaker 2 I'd much rather know what it is, go to a couple doctors and see what they have to say and then bounce off what I've been told and see like where it can be because you just never know. I've heard just.
Speaker 2 craziest stories about exactly these doctor things where one person figured it out and it was the most completely opposite thing that even chat GPT couldn't get. And so is there a lot of money in it?
Speaker 2 Probably a ton of money if you can really solve it.
Speaker 1 Okay, that's what I want to talk about then.
Speaker 4 I want to go even broader here, which is the money question.
Speaker 1 Isn't this great, you guys? I have so many
Speaker 1
minutes just talking about AI attacks. I love this.
I'm loving this. This is the Doug episode.
Yeah, this is fascinating. Okay.
Speaker 1 Here's the thing.
Speaker 1
I always have to restrain myself on the other ones. I can't just go off.
I'm not like I'm bouncing off you.
Speaker 3 Yes, dude. Yes.
Speaker 2 I also have a huge conspiracy theory about money, but we'll get there.
Speaker 1 I guess
Speaker 4 everything you're telling me is making me less confident in open AI specifically and more confident in the other ways that, more focused ways that
Speaker 4 AI may be used. LMs may be used.
Speaker 2
Okay. Maybe I don't speak well.
I don't know if you should do that.
Speaker 1
Okay. They're all good, but keep on going.
Okay, keep on going.
Speaker 4 My concern is that the amount of money that is now promised by OpenAI is so massive, and I'm not quite, the revenue isn't there, but it has to get there.
Speaker 4 I'm trying to figure out what your stance is on whether this will be a profitable venture.
Speaker 1 Actually, a bigger question. Figure that.
Speaker 1 All this will only work out.
Speaker 4 The amount of money is now so large if they get AGI. So, the real question I want to ask is where you stand on AGI, how close that is, what you're seeing with that.
Speaker 1 I'm so happy.
Speaker 1 And also, what do you define it as? Because everybody has their own fucking definition for what AGI even is.
Speaker 2
Okay, okay, okay. Okay.
So, first off, this is like my favorite thing in the universe. So, I want everyone to like, everyone that has ever listened to this podcast, understand one thing.
Speaker 2 When a company has AGI,
Speaker 2 you will not get it.
Speaker 2 Reason being is, would you let the world's greatest secret be used by the general public? No, you'd remake Google, you'd remake Netflix, you'd remake everything.
Speaker 2 You'd use it to create every single company on Earth and just run your own businesses and be the world dominator. Like, why would you have the greatest piece of technology and open it up?
Speaker 2 Open AI is not going to release AGI.
Speaker 1 They would never be like their name is OpenAI, though.
Speaker 1 I don't understand.
Speaker 2 Okay. I'm an atheist checkmate.
Speaker 1 Okay, no, but
Speaker 2
there's no way. Like, there's simply no way that that technology.
The reason why they keep hyping all these things up is because they're just not there. And like actual AGI.
Speaker 2 So AGI would be, they call it artificial general intelligence, meaning that it can just always solve with extreme precision for low amounts of cost.
Speaker 2 problems that have never been seen.
Speaker 2 It can just simply keep on making itself better and better and eventually get to like super intelligence, meaning it's so smart it's even beyond anything humans could possibly ever achieve in the next thousand years.
Speaker 2 It's like the greatest thing of all time, right?
Speaker 1 It can replace every human task.
Speaker 2 If you've read Brandon Sanderson, have you ever read any Brandon Sanderson?
Speaker 1 Yes. Okay.
Speaker 2 It'd be like a shard. We'd be creating one of the gods in
Speaker 2 his Cosmere universe where they can foresee the future.
Speaker 1 It's like a God.
Speaker 1
Thanks, thanks, Doug. Yeah.
Thanks.
Speaker 4 This guy was gibberish.
Speaker 1 You find that.
Speaker 3 I don't even know. The thing is, I don't even know what he's saying.
Speaker 1 And then you come in. Yeah.
Speaker 3 Doug, you're just a man of the people.
Speaker 1 You're going gonna see it
Speaker 1 um okay so yeah i i because i haven't chatted with you about this either so let's give quick context so agi is the thing people keep talking about as like we're gonna hit this and when we do that justifies the fact that we're spending hundreds of billions of dollars on all this stuff because then it will or trillions then it will unlock such unbelievable amount of productivity that we'll be able to fund everything we can do make every job a hundred times more effective etc etc literally talking about it like god like they're going to invent god yeah I mean, the way I saw it described was, you know, Sam Altman is basically going around saying, give me trillions of dollars.
Speaker 1 I will create God.
Speaker 4
And a lot of people have been doing that. And a lot of people have been promising.
He's been promising money to other people.
Speaker 1 And
Speaker 4 is God coming? Because Zuckerberg's talking about three years, you know.
Speaker 2
Karpathy says 10 years plus. A lot of people, I mean, this is literally just Gilgamesh all over again or Nimrod from the Bible.
This is just Tower of Babel.
Speaker 2 I'm going to create the greatest thing ever. We're going to be just like gods because we can create the universe, right?
Speaker 2 This is the, I mean, this has been, we've been saying this for thousands thousands of years. This is not the first god.
Speaker 1 It's not difficult, though.
Speaker 2 It's, I mean, maybe it is somewhat different in some kind of sense, but it is also oddly very similar.
Speaker 2
Right. So it's very, very funny that they're doing that.
But at the end of the day, if let's just say we do get there, like it is, you are not going to get the same level of access, right?
Speaker 2 They're not going to just be like, hey, here's the model. You go run it.
Speaker 1 No, you can't.
Speaker 2 You don't have the bajillions of GPUs.
Speaker 2 Yeah, you're not going to, there's going to be so much safety built in.
Speaker 2 Remember, safety and correctness are kind of in some sense at odds with each other because it has to like take what you say and be like, well, actually, we can't say all the things you're asking for because it might be dangerous.
Speaker 2 Right. Like there's this famous
Speaker 2 famous thing that happened with Gemini when it first came out, which is a 17-year-old on, it was like a 17 or 16 year old on their Google profile asked about some C β feature.
Speaker 2
And because C β is an unsafe language, it refused to show. this person the code because it's like, sorry, you're a minor.
I can't show you this code.
Speaker 2 And it's just like, it's a very, very funny post, but it actually was true. And so there's like, there will always be this kind of competition.
Speaker 2 You will never get the same access level as something that could just solve every single problem ever because you would just be able to make everything.
Speaker 2 But if everyone can make everything, then no one has really anything.
Speaker 1
I learned that lesson from the incredibles. Yeah.
When everybody's incredible.
Speaker 3 This,
Speaker 3 I'm glad you introduced this question or topic because I had almost the exact same question, but how this affects people working in the industry right now, because it feels like we're not necessarily close to that.
Speaker 3 And
Speaker 3 that is the reason that these companies are hyping up the work that they do because they need to keep the energy and the funding to get closer and closer to it.
Speaker 3 In the meantime, there's this kind of this feeling of the walls closing in, figure out how we can monetize this for the time being.
Speaker 3 You know, we've spent a lot of time recently talking about video generators
Speaker 3 and these
Speaker 3 new sort of social media TikTok-esque sites as like a maybe a means to monetizing the technology, right? And all these other potential, maybe
Speaker 3 and shidified ways of monetizing the experience in the short term. If you work on this stuff right now and you're somebody in the industry, do you feel this sort of
Speaker 3 financial pressure of the bubble at the moment?
Speaker 3 Like a lot of people, as an example, a lot of people talk about how Boeing changed a lot when the culture of executives came in with a very different outlook on how money was spent at the company rather than being run by engineers, right?
Speaker 3 And with how much
Speaker 3 pressure there is from the amount of money involved here, how does that affect the average programmer working on these things and how they feel about their work right now?
Speaker 2 So this is such an interesting topic.
Speaker 2 And this is the unfortunate, I'd call like the black pill side of things right now is what I'm seeing is a lot of people are taking on more responsibility, but they're not actually, they don't feel like they're learning because they're constantly just deferring to the AIs to try to make all these things.
Speaker 2
They're more just reading code instead of writing code. They're constantly trying to prompt their way into stuff.
And
Speaker 2
I kind of foresee this like stepping stone between now and AGI, if we will. I don't know if or when AGI will happen.
I make no predictions on that.
Speaker 2 But I see this kind of zone where everyone feels like they need to do more because we constantly keep saying that it's the greatest thing ever, but people don't feel as much success on this specific topic and they're not able to quite generate all the things that they keep getting promised.
Speaker 2 And so I just foresee a lot of burnout and tiredness because no, people aren't learning as much anymore. It's really hard to learn something if you're not like in the weeds doing the thing.
Speaker 2 You know, I can watch a thousand Belatro streams, but I will not be good at Belatro until I like learn all the jokers.
Speaker 2
I just have to do the work myself, even if it's fun. And so I kind of see a lot of this happening where.
There's a lot of obviously money going into this.
Speaker 2 It's hard to say if it's in a bubble or not because what is a bubble?
Speaker 2 The hard part about a bubble is that even if everything inflates in price and a whole bunch of money comes in but if it grows to that expectation over the course of the next five years was that a bubble or was that just the was that actually just the future
Speaker 2 yeah it's just sick as hell like it's the future it's the future value being realized now doug do you feel that there's uh that kind of pressure of the ever-increasing amount of work a demand to be using ai for all these things but a kind of lack of fulfillment in these things because that's where i truly think burnout exists is when you have the lack of learning and growth mixed with harder and harder deadlines and more and more work, and you just don't get that joy anymore.
Speaker 2 And you're just a yes man.
Speaker 1 Yeah, I so I've talked to programmer friends who are like good, good, good programmers, like lead tech folks at companies, and they're basically saying it's like you use AI every day, but they don't particularly like it.
Speaker 1 And they talk about it as a
Speaker 1 like having a team of junior engineers who don't learn. Like they're not, so it's not, so I think
Speaker 1 programming. Me and you.
Speaker 3 It's like having me, it's like chat GPT is like a bunch of me and you.
Speaker 1 Yeah, it's like you have a bunch of A and you. Is that useful or helpful?
Speaker 1
Not to say if you have enough of them, it could be. Throw enough of us at the problem.
I can give you a massage while you work through the code. That would be useful.
Speaker 1 And so I think there's like two angles.
Speaker 1 One is if you're the person who's trying to learn, in which case, yeah, it's weird if you, if the AI does all the groundwork that allows you to feel like you actually made a thing or learned skills and you're just telling something, somebody what to do.
Speaker 1 If you're only only ever a manager and you never get the low-level experience, that feels like you're losing a key moment of development and growth.
Speaker 1 And then when you get to the higher levels, and I'm sure you've talked to folks like this as well, who now feel like they do have the experience at the low level, but they're basically just directing a bunch of low-level employees, which are AIs, but they aren't getting better.
Speaker 1 They also don't have the satisfaction of their team leveling up because they aren't leveling up because it's just their AI is just doing the same thing.
Speaker 1 And the development only ever happens from you, the human, telling it what to do better. So it sounds at a professional level, not that fun, to be honest.
Speaker 1 And it doesn't sound like most people are stoked about it, but of course every company is
Speaker 1 trying to cram AI into absolutely everything.
Speaker 1 And then again, the caveat that's important to acknowledge is that for me, and part of the area I do think is going to love this, is in the, let's say, hobbyist intermediate role where I don't have to make stuff that I don't have to plan a city, right?
Speaker 1
I get to go make a funny like bounce house. Right.
And then I pop it at the end of the day and move on to the next thing the next week.
Speaker 4
So also for your use case, if it is a little bit jank, that is funnier. It's better.
Yeah, yeah.
Speaker 1 So I think about something, like one of the things I've thought about with ChatGP5 is, let's say you have a parent at home who wants to organize their kids' chores, right?
Speaker 1 And something that is, it's all this time of like managing your household.
Speaker 1 And you right now can go to ChatGPT and I know because I've done this and say, make me an app right now that can track, here are my kids, here's how they use it, make it into an iPad form that we can touch and put on the the kitchen counter, and it's going to track all these things.
Speaker 1
It'll have points, it'll have scores, it'll have game. You can add all these cool things.
And just some mom could just through normal English have this entire custom app made.
Speaker 1
And it doesn't matter if there's a bunch of bugs if it were to scale up. It doesn't matter if other people can use it perfectly.
It doesn't matter. No security.
Speaker 1
It doesn't matter if there's no security. It's just an average person who gets to use technology in a way that wasn't accessible to them before.
And that I think absolutely is happening.
Speaker 1 That's will continue to happen and will be a huge use case.
Speaker 1 And the question though is on the professional side does this help does it i mean it probably helps but i think people are unhappy with it is my read on it and again you probably have more direct this is the part that's surprising to me because
Speaker 3 from talking to people over the years
Speaker 3 i think this is the group of people that I would have expected to remain the most positive and optimistic.
Speaker 3 The people who still have jobs in the tech sector, working at companies, working on these sorts sorts of projects it seems like you'd be the most likely to have a very optimistic view of where this is going or you what you get to work on every day and to hear that it's a little more complex or a little more disjointed than I thought whereas three years ago I feel like everybody working on this stuff was super super pumped that's kind of shocking to me
Speaker 1 I would even say partially why I thought it'd be fun to talk to you about this is I think you are one of the, I think Prime is one of the most notable people not bought into the AI hype with programming specifically, who is not going every week and going, holy shit, the new Claude model is amazing.
Speaker 1 I just did this. And you get 90% of tech Twitter is that people
Speaker 1
is people going, the new model is crazy. Look at these graphs.
And then you're there making funny tweets like, it's been six months since it's supposed to have taken all of our jobs.
Speaker 1 And, you know, it's like, it's not that sick. It's not, it's not God.
Speaker 4 Okay, speaking of six months, I want to, I want to show this clip from Sam Altman six months ago.
Speaker 5 There's a lot of short-term stuff we could do that would like
Speaker 5 really like juice growth or revenue or whatever and be very misaligned with that long-term goal. And I'm proud of the company and how little we get distracted by that, but sometimes we do get tempted.
Speaker 3 Are there specific examples that come to mind?
Speaker 1 Any like decisions? This was six months ago.
Speaker 2 Oh, I'm so happy right now.
Speaker 2 He's thinking of erotica.
Speaker 1 We're waiting for a sex bot. Yes,
Speaker 1 exactly.
Speaker 4
So six months ago, he goes, our long-term vision is so good. We don't need to focus on this short-term juice revenue crap like a sex bot.
Six months later, he announces basically a sex bot.
Speaker 4 Erotica is ChatGPT. So I am making the case here that I think
Speaker 4 they increasingly look desperate for revenue. Not like they need revenue, they are desperate for it.
Speaker 4 Whether it's Sora, whether it's this, it looks like ChatGPT or OpenAI has made commitments.
Speaker 4 Like they're going to break ground on so many data centers in 2026 with Broadcom and Oracle and NVIDIA that they have to have money for, they don't have now.
Speaker 4 And they're getting more desperate for revenue. And I'm trying to, you know, ask people who know more than me: is there anywhere this revenue is going to come from?
Speaker 4 Or is it, are we headed towards a cliff here? Like, what is the because this looks to me like a complete about face.
Speaker 3 Why is this so upsetting to you? You have fragile Victorian ideals. We can't have a sex bot on ChatGPT.
Speaker 1
So it's his words. I have felt like when ChatGPT is giving me Python code, that I could, I'd prefer it to be more sexualized.
Yeah, yeah, yeah.
Speaker 1 I don't have an anime, anime chips, which I'm the upset about.
Speaker 1 He's tweeting every day. Yeah.
Speaker 4 Get coding advice from a hot anime girl.
Speaker 2 So, to be fair, like I'm gonna, I'm gonna steel man Sam there.
Speaker 1 Okay, okay. Please do.
Speaker 2
Uh, Sam hates Elon. That's been no, no surprise.
So, I'm pretty sure when that came out, Elon just got done releasing that.
Speaker 2 And so, that's probably more of a means to dunk on him and saying, oh, Elon's shallow, but we're like so good over here.
Speaker 2
More than about revenue and all that. But at the end of the day, they do need to make revenue.
He's even talked about it. That's why they're releasing these things.
Speaker 2 But you got to think about it maybe in a slightly different concept that they make a lot of money.
Speaker 2 They are making they're in the billions upon billions. They'll probably be crossing 10, 15 billion a year coming up to your next year.
Speaker 1 I mean, they make revenue, but they're losing money.
Speaker 2
They're losing money, but they're propped up, you know, Project Stargate or whatever it is. They're going to get a lot of money from the government.
It's like all growth stories ever.
Speaker 2
Startups are, or I'm calling this thing a startup, even though it's like six years old. Their goal is to maximize users.
It's not to maximize money. A great example of this is Docker.
Speaker 2 If you're familiar with Docker, Docker allows you to have whatever computer you're working on. You get a different kind of virtual environment that you could launch your little application in.
Speaker 2 So it's like, oh, I'm actually on Linux, I'm on Ubuntu on my Mac because this is what our servers are in, right? So you get to have that kind of experience.
Speaker 2 Docker, they had this product in which made $0 and cost a lot, a lot of money, millions upon millions of dollars.
Speaker 2 And they got so many people using it that the year they decided to make money, they made $500 million in like a month, right? It just was like, oh, I now make a lot of money, right?
Speaker 2 And so this is kind of them starting to turn on those gears of we've been acquiring users through a freemium model. And now we're going to start turning on gears to make money.
Speaker 2 And that's just how I look at it is they're just going to start turning on things to make more and more money.
Speaker 1 I don't look at it as a full failure, but 500 billion is a lot of dollars.
Speaker 2 And so are they going to be able to make that?
Speaker 1 Yeah,
Speaker 4 even revenue, even if they had no expenses at all, they're not close now in terms of
Speaker 1
what they got. I don't think it's that far off, though.
I think it's their spending
Speaker 2 $5 billion last year.
Speaker 1 Yeah, it's like $5 billion and then they're spending like $15 billion.
Speaker 4 So, I mean, it's, you know, operating expenses, something but they're losing hundreds of billions they're they're losing billions for sure every year but well they're promising hundreds of billions right now they haven't spent that yeah but they're yeah if they promise they say amd we're gonna give you a hundred billion dollars to build something they need to have the cash eventually they have to give them the money hey do you think amazon makes money they don't they do now they didn't for a while yeah you know amazon shopping still doesn't make money right but you know like you know like you can you can acquire users you can acquire a lot of stuff to be able to build other things that make money but in the dot-com bubble amazon lost 98% of his value.
Speaker 4 So like in a short term, if he doesn't make money, if pests.com doesn't make all these companies will go to zero.
Speaker 2 Yeah, Books a Million would be a good one.
Speaker 4 Yeah, I mean, I'm not saying eventually, of course, clearly AI is something new that is like going to be a big factor.
Speaker 4 But it just seems to me like that we've gotten way ahead of our skis in terms of what everybody is promising each other.
Speaker 4 And none of this is making anywhere close to the amount of money that it would, you know,
Speaker 2 I think a big thing people don't realize is that spending a bunch of money like these $200 a month, $500 a month coding assistant kind of programs you got to remember that most of the world that is a kind of a hefty bill to be able to have as a coding assistant because they don't make a lot of money coding right like if you're in the eu right now and you're just like a mid-level person you're making 50 60 000 a year like that's not a great amount of money to also be spending thousands of dollars on ai and so there's a whole world that exists in which is kind of priced out of this ai and i think we have a bit of a goofy mindset when it comes to this here in america comparatively to other places.
Speaker 2 And so I do think that there's going to be a lot of, it's a long-term revenue thing, but the government's going to keep giving the money.
Speaker 2 The government's going to help this project because I think it's more of a national need to have good AI
Speaker 2 because of just like geopolitics than it is because they're trying to make. And you know what?
Speaker 2 I'd rather have them say that than give me erotic text because it's just like, yeah, okay, I understand that argument much, much better than anime titties.
Speaker 2
Like when it comes to making money, it's like one makes way more sense. I get the purpose of this as opposed to the other.
But I don't think we're ever going to get that answer.
Speaker 2 It's going to be the the love of the game, and I'd never do that also anime today.
Speaker 1 Well, so I'm curious what you think because we never actually talked about this, but I think you did on your Eclipse channel.
Speaker 1 But funny quote from the CEO of Anthropic, which is basically saying, well, yeah, we were not profitable as a company, but when we trained our first AI, it cost, let's say, $10 million.
Speaker 1
And then it actually made us $15 million. So we did make a profit.
But by the time it had made $15 million, we had spent a billion dollars on the next model. So technically, we didn't make money.
Speaker 1
But then that model the following year made two billion dollars. But that following year we were spending $20 billion.
So it's a bit of a scheme, if you will.
Speaker 1 It's
Speaker 1 like a schematic.
Speaker 1 All right. So here, here's the question, right, that I posit.
Speaker 1 So if the idea is that every year, one of these AI companies, let's say OpenAI, makes the new model and they spend a shitload of money to make it.
Speaker 1 And then the following year, they can make a profit as long as they don't keep throwing money on training and making new stuff.
Speaker 1
In theory, theory, right now, OpenAI could stop making new AI models and could get close to making a profit next year or get a lot closer to it. Yes.
And so that feels like there's at least an
Speaker 1 exit strategy that doesn't ruin the entire thing.
Speaker 4 How can you ever stop?
Speaker 4 Because the second you stop, it seems like, and again, you guys are more technical than me, but it seems like whether it's DeepSeek or anyone else, people can catch up rather quickly.
Speaker 4 And then within... six months to a year, you have that model running locally.
Speaker 4 It's like, you know,
Speaker 1 it's like
Speaker 1 a profit per unit to that? Yeah, the foundational models, I think, are, are really unsteady financially for that exactly.
Speaker 4 I do think the government angle makes it like, you know, maybe it's just so nationally security important that we can throw collectively our tax dollars into this, whether or not, but I don't see like as a business how this
Speaker 4
makes sense in any direction. I don't, I don't, it doesn't math out to me.
I don't get it. But I think that that is a good point.
Speaker 2 There's a lot of ancillary companies or second fee, like second order companies that do make a lot of money. Cursor makes a lot of money from Chat ChatGPT and all the other models.
Speaker 2 Like there is a lot of money to be made. And at the end of the day, all these companies that are making money, a part of that money is also going to these parent models.
Speaker 2 So like all the derivative programs that are going to crop up over the next five years, which is going to be an enormous amount, are all going to be a lot of fun.
Speaker 1 That doctor thing we talked about. That doctor thing pops off.
Speaker 2 That is revenue going to these companies, right? And so when they say they're not making that much, you just got to remember, like, we haven't even begun the infiltration.
Speaker 2
People always do this thing where they're just like, oh, look at this new technology. Next year is going to be crazy.
It's like, no, society's moved slow.
Speaker 2 It'll be like 10, 15 years before everything's chat chippity. But when that does happen, then it's going to be like, okay, yeah, that's a lot of money they're bringing in.
Speaker 2 Like, that's going to be a lot, like tokens. Just everybody's going to get tokens.
Speaker 1 Oprah win-free tokens for everybody.
Speaker 4 Even gets tokens for allowance.
Speaker 2 Exactly.
Speaker 2
So I wouldn't worry. I don't worry as much about the profit side in that kind of sense.
Because even if Open AI goes down, the tech's there, infrastructure's there.
Speaker 2 Like something is going to take that place.
Speaker 1 Something is going to.
Speaker 4 Jeff Bayless made that argument recently, and I can see that, but it just sounds, you know,
Speaker 4 that part in the middle, the part where it's like, everything goes down, but we still have all these GPUs and we build something cool.
Speaker 4 That part in the middle is like everyone's 401k going to fucking 50%. You know, it's like
Speaker 4 economic problems that
Speaker 1 I feel like we're barreling towards. But okay,
Speaker 4 I think that's a fair argument. I wanted to ask, because you guys met with Newsom, and Newsome recently signed, I think is the first piece of AI regulation in America.
Speaker 3 SB 53.
Speaker 4 SB 53.
Speaker 4 And I wanted to know more about that and hear about what's going on because this is like, you know, California has done regulations in fields like EVs and other things that kind of led the nation because they've did it first.
Speaker 1 And if this is going to be something that's going to be broader adopted in America or what are the odds that you would talk about this? Well, as long as we are.
Speaker 1 So the first bit is SB 1047, which is last year there was a big AI bill that went all that was approved by the California legislature and then went to Governor Newsom and he vetoed it.
Speaker 1 It was very controversial because there was basically a lot of requirements that it would have put on AI developers.
Speaker 1 And given that California hosts almost all of the big AI developers and it would have meant anybody who's even trying to like use their business in California, essentially a California AI bill is a world AI bill.
Speaker 1
So it's it's much, much outside of China. Yes, outside of China.
It's much, much more consequential than just California.
Speaker 1 So this bill came out last year that went all the way to the governor to sign and it had things like, if you're making large AI models, you have to plan out safety and security. Boo.
Speaker 1 You have to have a kill switch. This is genuinely, I do not think, is a good idea, which is the idea that if something is deemed to be a problematic, you have the ability to shut it all down.
Speaker 1
That makes sense on paper, except that's not really how AI models work. You can distribute them to anybody on any computer.
And so that only works if you only ever keep your AI model.
Speaker 4 Abinus needs a big button on his desk that can stop the terminator.
Speaker 1 Yeah. Again, the only way that that would work in practice is every single person's computer shuts down.
Speaker 1
Right. If he has a button that turns off the electricity, he could do that.
And otherwise, you cannot do that.
Speaker 1 Statements of compliance, third-party audits from the government, a new agency that's overseeing anything, huge civil penalties if you go against all this.
Speaker 1 So Newsom said, while well-intentioned, it doesn't take into account whether an AI system is deployed in high-risk environments. To be frank, he said a lot of political stuff.
Speaker 1 I think the tech industry was just very, very upset.
Speaker 1 And the core criticism is you are putting in a ton of regulation a pun a ton of like sort of bureaucratic work that ai companies have to do a ton of penalties in case things go wrong this is too much for an industry that we really still have no idea what's going on everybody is trying to figure out what this is how to make it you want to leave the opportunity for a company like deep seek to come in and create something that's totally new and not have them be throttled by having to go through all of these layers of california bureaucracy so that was the criticism tech companies were very happy it got vetoed other people not so much so this year, there's a new one.
Speaker 1 And so when we were talking to our great friend, Gavin Newsom, he did mention this specifically.
Speaker 1 And somebody brought up, oh, fear around AI and how that's making people feel like just the world is a little more scary and unknowable and there's too much misinformation.
Speaker 1 And he said with a lot of pride how they just signed SB 53. This is about
Speaker 1
half a month ago, September 29th. So this is, again, it's applying to large models.
And basically what it does this time is you have to submit a report one time a year to the government.
Speaker 1 So it's chill as fuck now. There's also some other computing stuff, but the core thing for a company.
Speaker 2
I want to talk about the report for a quick second. Okay.
It's a lot of paperwork. It's going to be a very in-depth thing.
And if you don't, she's a adage. Yeah.
Speaker 2 But if you don't, you get to pay a million-dollar fine. So those multi-billion dollar companies are going to be like, brutal.
Speaker 2 Do I hire millions of dollars worth of staff to be able to create this port, or do I just simply take the million-dollar hidden?
Speaker 3 Patriot could pay that fine.
Speaker 2 That part I thought was very hilarious because what I see there is that this is an uncompetitive advantage for people that are in like this $500 million to a billion dollar range because that's a million dollars is a much larger amount of money to these smaller companies.
Speaker 2 And you have to hire, like it's going to be a pretty full-time multi-person role to produce very good documentation, like due to all the things that they have to do.
Speaker 2 It's going to, it doesn't feel like greatly competitive, that singular point, because it hurts big, small.
Speaker 4 Yeah, it hurts the small more than the big.
Speaker 1 Yeah, like the small
Speaker 2 zero to $500 million, they don't have to do the reporting, or there's some small amount of reporting they have to do oh the medium I guess.
Speaker 2 Yeah, yeah, so it's like the medium ones that a million dollar hit actually does mean something Yeah, they don't want that yeah, but it's it's just like it's kind of like it's a weird place to be like I understand the idea, but it kind of it's kind of a little bit emotional painful It's also there's no there's no enforcement There's nothing nothing happens.
Speaker 2 Oh, yeah, and the AG has to first go and and sue them so before they can do the million dollar thing like there has to be the AG being like,
Speaker 1 hey, that anthropic son of a bitch did not have my report my desk by Monday. I'm like, that's it, right?
Speaker 4 Like, they that's how AG's talking, California.
Speaker 1 Yeah.
Speaker 1 Famous California AG accent, right there. That damn son of a bitch.
Speaker 1 Y'all making it shot out there.
Speaker 1 You need to give us reports. Okay, I didn't think through the accent, okay?
Speaker 3
They have to evaluate. I thought it was interesting.
What a catastrophic risk, which is any outcome of a model update or an entirely new project that would be released.
Speaker 3 Anything that would result in the death or serious injury of more than 50 people.
Speaker 1 And they have to
Speaker 4 say, my new update is going to kill 49 people and lose us 900 million.
Speaker 1
That's chill. That's chill.
No report on that one. That's chill.
And let's just look at how much a human life is worth according to that. It is about $20 million, a human life.
Speaker 2 I know, it's kind of, when I read that, I was just like, oh, that's kind of funny. It has to be 50 or more.
Speaker 3 I thought the one standout, really clear-cut, good thing to me seemed like the whistleblower protections,
Speaker 3 which is that someone can come out and say, hey, this company I'm working for is doing these things that are potentially going to damage a lot of people.
Speaker 3 And they don't have to deal with the fear of retaliation for coming out with that information. The rest of it struck me as,
Speaker 3 well, I could just... evaluate and spend a bunch of time saying that even if I'm not willing to pay the fine, right? I could make the the documentation argument for this isn't that risky.
Speaker 3 Like it's not that big of a deal.
Speaker 1
Or you're like, yeah, we had a team go through and red team this and we think it's really safe. And they, and to be fair, like they do.
The companies do actually do this to varying degrees.
Speaker 1 Anthropic is great at this and they do a ton of work. They put a ton of effort into just trying to figure out how to make these things safer and understand them.
Speaker 1 I mean, earlier what we talked about, the whole thing about finding that you can poison pill it, like, that's not to their advantage to tell people about that or to research it.
Speaker 1 It's interesting that they did that research and published it when it's no, they have a great track record of safety, and that's like a big thing for them.
Speaker 2 And they're unsure if it's going to affect Frontier models because maybe there is like a scale that's like so big you can't do 250 documents.
Speaker 1 They're not sure if they're not going to scale the slurs if there's 50 million people, yeah, yeah, yeah.
Speaker 1 There might be a amount of people, it could overpower these machines with this in mind.
Speaker 3 With this, I'm not going to overpower the machines, you might. With this in mind, in contrast to what the reaction was like to the previous bill that was vetoed,
Speaker 3 what is the tech world reaction to this right now that you guys understand? Because just from our evaluation, even reading the differences between these two things, right?
Speaker 3 This seems a lot more agreeable or looser. But at the same time, I'm sure to some people, any amount of regulation is going to be too much.
Speaker 1
Yeah. Yeah.
It essentially that. This is, so just to be clear,
Speaker 1 what actually got signed and California has a legitimate AI legislative regulation bill now is super mellow.
Speaker 1 It's like you have to report to the government what you're doing, how to hopefully make it safer. You have to report if there's a horrible accident and you have to protect whistleblowers.
Speaker 1 It is something. It's better than nothing.
Speaker 3 But is it mellow enough that the tech executives are like, this is chill?
Speaker 1
So, so most are yes. Most still are like, this doesn't feel great.
And then there's the concerns from you about medium-sized companies being like, come on, like, what is this even doing?
Speaker 1 This, you, you've got like a good spirit here. It, it,
Speaker 1 to me, and the sentiment I read, it like checks off enough boxes to be like, We did make regulation.
Speaker 1 And for, frankly, Gavin Newsom in this room to say, We talked with Fei Fei Lee and we made this thing that teeters on the line. And he does a lot of the hand motion.
Speaker 1 He literally says, You know, he's like, He did do teetering on the line. He said, Teetering on the line, and he said, This is this, this bill is really a collaboration between all the companies.
Speaker 1 I would say it's a win for the AI companies while being enough that technically there's something, and it's, it's not nothing, it's just uh, there's just much less.
Speaker 4 And so, how do you compare with the you said with the VC?
Speaker 1 So, there are some people people who are really critical.
Speaker 2 I'll come in with the opposite take. I think it's just a
Speaker 2 governmental masturbation in the sense that, A, let's say I am a company and I produce a report. Who's verifying these?
Speaker 2 The government masturbation. Who's like verifying?
Speaker 1 So I'm trying to do the hand gesture. So who's the expert?
Speaker 2 Who actually validates that these reports are real? Who's going to prosecute the things that are faking? Who knows what the things are lying?
Speaker 2 Like, Enron exists, and we created something called XBRL, which prevents companies at least largely lying through, what, 2007, 2008 to do all this.
Speaker 2 Like, there's nothing here that actually puts teeth into it that's going to be effective enough for big companies I know we don't have really time to talk about the whole anthropic being sued thing but they got they they took the world's books and got charged 1.6 billion dollars and they're like cool by the way like so it's not a bad price
Speaker 2 even the world's books so they could they can or stealing every book in the world i mean i would make that deal they can close that deal they can skip the next 1,600 reports for the same cost.
Speaker 4 Yeah.
Speaker 3 Like that.
Speaker 2 So they could kill 50 people a lot of times.
Speaker 1 And then, like, I didn't file a report either. My bad.
Speaker 2 Right.
Speaker 2 Like, and no one would, like, the government, they'll get civil lawsuits from the people, but you know, the non-dead part of the people, but you know, but nonetheless, it's like, okay, so there's no teeth here.
Speaker 2 This just feels like regulation for the sake of regulation because there's a loud contingent of people, which rightly so, I think, do not like AI.
Speaker 1 Right.
Speaker 2 And once again, but they, or when I say rightly so, I mean they're in the right ballpark, though they're angry for reasons I think they're mostly found out of ignorance and all this kind of stuff, but they're still angry about it.
Speaker 2 And it's just like, okay, you're trying to appease a crowd, but you're not providing any sort of teeth or real regulation. You're just like, safety reports.
Speaker 1 $1 million. And you're just like, but what the hell is the point?
Speaker 2 The one thing you brought up, though, I love that part. Like, we should definitely have whistleblowers and we should make that super awesome.
Speaker 1 Yeah, every company has to have a way for people to anonymously report. Like, they actually have to
Speaker 1 have an ability for a whistleblower.
Speaker 2 Because I want to know if like Sam Altman is going to make like the biggest animes out there.
Speaker 1
Like, we need to have someone reporting that immediately. Like, that's important.
We need somebody to get ahead of that.
Speaker 3 I need that to be leaked.
Speaker 4 How does that compare with Europe and China? Like,
Speaker 1 what are they doing the regulations?
Speaker 4 As far as I know, the rest of America, none. California, this bill, which is very light.
Speaker 1 Europe and China.
Speaker 2
So, can I, can I, can I just, can I just jump in here really quickly? Just, it's very, very important. Well, first off, uh, when we say AI, we're talking about LLMs.
Okay, this is very important.
Speaker 2 And so, obviously, there's uh, there's LLMs from the minstrel reason
Speaker 2 region of France, and then there's just sparkling LLMs everywhere else. Okay, well, you can actually see the real difference is because
Speaker 2 what is Europe producing?
Speaker 1 Minstrel.
Speaker 2 Like that's their only one.
Speaker 2
What is China producing? Lots. What is America producing? Lots.
You can see the effects of regulation.
Speaker 1
Like it's if you just use your eyeballs and look at company chart and you can just watch it happen. Yeah.
So there's four main bodies I think that are relevant. There's California and then the U.S.
Speaker 1 federal government, which weirdly are kind of different. Again, California, if they make really strict regulation, would in some ways override the federal one in weird ways.
Speaker 1 But our federal government right now under David Sachs and Trump are trying to do as little regulation as possible, essentially zero. California now has this bill that we just talked about.
Speaker 1
Something, it's very minimal. The two other big players, Europe and China.
China actually does have some regulation, but it's almost entirely around generative AI.
Speaker 1
So it's basically just like, if you generate... AI images, video, text, whatever, it has to be labeled.
They also have to, shocker, submit their algorithms to the government for review.
Speaker 1 So it's like they need to be clear with users when stuff is AI generated and the government gets to come in and look at it. Not a shocker.
Speaker 4 That's really good. Yeah.
Speaker 1 You can't make that Xi Jinping. I'm actually on Team Mario.
Speaker 2 I hate to say I'm on Team China here for a second, but I like that labeling.
Speaker 1 Yeah, no,
Speaker 4 so that rule specifically is what a lot of people are clamoring for because everyone's scrubbing the watermark off of Sora videos and claiming someone committed a crime.
Speaker 4 You know, it's like, there's like, there's consequences without, if it's labeled, it's just, it's goofy. But if it's not labeled, it's a problem.
Speaker 1 Correct. Yeah.
Speaker 2 So that's good then I want to see Gavin Newsom and Trump dancing yeah
Speaker 1 I hate wondering if that's real
Speaker 1 I keep thinking they're best friends and I keep getting duped every time so and then there's the EU which is the polar fucking opposite and as much as it's easy I think you know certainly for folks in our audience who are like well of course there should be regulation on AI it's too dangerous and whatnot again the real challenge is if you believe this is going to be really impactful for a lot of industries and make a huge difference if you make it obscenely difficult to build a a thing or to experiment or research and try new stuff and have all of these rules, all this bureaucracy, all these punishments financially, then people aren't going to make stuff in your area.
Speaker 1
And that is what has happened with EU. So they enabled the or signed the AI Act in 2024.
It's currently being rolled out. So companies like this year are starting to follow along with it.
Speaker 1
What it quickly does, breaks AI companies into three tiers. One is these are prohibited.
These cannot be in the EU.
Speaker 1 It's basically anything that does like dystopian social credit system type stuff or allows authorities to in real time
Speaker 1
do biometric identification of people. So that stuff is just banned.
But then there's high-risk systems and general purpose AI.
Speaker 1 The definition of high-risk is confusing, which is part of the criticism.
Speaker 1 It's like critical infrastructure, education, employment or business, essential public and private services, law enforcement, migration control, and justice or democracy.
Speaker 3 Oh, everything. That could mean fucking everything.
Speaker 1
Yeah, that's everything. Yeah.
And so for that category, you have to establish and maintain a risk management system.
Speaker 1 You have to validate and test all of your data and make all of that public with the government, do tons of documentation and record keeping, transparency to all users about everything going on.
Speaker 1 There has to be humans that come in and audit all of your stuff. And there are massive fines for not doing this, including up to 7% of your global revenue gets fined if you don't do this.
Speaker 1 And this isn't just companies who are headquartered in the EU. This is like OpenAI, ChatGPT.
Speaker 1
If they have ChatGPT running in the EU and they break these rules, the EU can go to them and say, we are fining you 7% of your year's profit. Not your profit, revenue.
Yeah, revenue.
Speaker 1
If you have a profit, they'd be free. Yeah.
Oh, you owe us money.
Speaker 1 That's a negative number, but
Speaker 1 the last category is general purpose AI models, basically everything else. There's still a lot of documentation, public summaries of what's going on.
Speaker 1 These laws also have a lot of definition around copyright law. And this, I think, is good, very clearly defining that you can't train data on stuff you don't have access to.
Speaker 1 If it's behind behind a subscription or a paywall, then you can't use it. And you're still on the hook if you end up taking a bunch of data from websites where those websites stole it, right?
Speaker 1 So if there's,
Speaker 1 yes, this is the anthropic thing. If there's a company out there that stole every book and they have 500,000 books and I go and they tell me, oh yeah, these are all chill.
Speaker 1 And then I go there and I make an AI model off of it. If If it's clear that that website had illegal books, I'm now on the hook and the EU can charge me 7%.
Speaker 1 So it's not clear always.
Speaker 4 What's weird is there's just no way that not every single AI company is in violation of those rules.
Speaker 1
No, they are. They are.
Absolutely.
Speaker 4 And there's no way they're going to be able to actually charge 7% of revenue on every major company. It won't.
Speaker 2 Yeah. Well, I'm more worried that first, I got to say this again before the comments just explode, this whole YouTuber problem is that
Speaker 2
the EU loves regulation when it comes to the internet, right? Like this is very well known. And there are some good ones.
Like I love right to forget.
Speaker 2
That's a beautiful thing that I can go to any company and say, hey, you have to delete my data. Like I love that.
I think that we should all have that.
Speaker 2
But this is going to cause like, I can foresee one day that there's like the EU black zone. It's just like in here, there's no AI allowed.
We don't do AI.
Speaker 2
All AI companies just bail out and say, sorry, we know we stole and you are going to sue us. Therefore, you don't get it.
And like, what is that going to do to the average person there?
Speaker 2 And so that's like my more bigger worry is these kind of
Speaker 1 companies.
Speaker 4 I'm moving to Europe off that. That sounds great.
Speaker 4 But I do want to say, you know, based on the positive outlook here, if this is the next industrial revolution or something, that would put Europe in such a backwards position.
Speaker 1
It'd be tough. It would be tough.
So a couple of notable quotes here. So one is from Emmanuel Macron, president of France.
Speaker 1 We can decide to regulate much faster and much stronger than our major competitors, but we will regulate things that we no longer produce or invent. This is never a good idea.
Speaker 1 There was one from a Berlin AI agency that's like 60% of what we're building using AI for clients never leave because
Speaker 1 the potential clients are like, we don't know what's going to be allowed under this AI act. We're just not even going to risk using AI, let alone making AI.
Speaker 1 And then Sam Altman from OpenAI says, we're going to try to comply.
Speaker 1 We have a lot of criticisms on the way the act is currently worded. And then what's funny is, I didn't know this, the EU also then has codes they released, which is like a pregame for the law.
Speaker 1 So they released this set of codes that they want all the companies to sign.
Speaker 1 This is in a few months ago, that they're like, hey, in preparation for our big AI act being enacted soon, we want you to also sign this code, which says you're going to follow all these other rules, which apparently are not even the same as the original AI act.
Speaker 1 And so, Meta, Facebook, just said, no, they said, we're not signing your thing. And then they might just not be allowed in the EU to do their AI stuff
Speaker 1 in a few months when this starts out. So there's a lot of companies.
Speaker 4 They lose vibes.
Speaker 1 Yeah, they lose vibes. But it's going to go so bad.
Speaker 4 I can't get on threads and vibes.
Speaker 1 But yeah, there's a bunch of companies that are basically like, this feels so excessive.
Speaker 1 And again, it's not just about the concept of regulation. It's also the implementation.
Speaker 1 The actual wording, because I looked through a bunch of it, is very broad. And a lot of people are like, we don't even know what we're supposed to do here.
Speaker 1
There are so many different regulations and steps you're talking about, Europeans, that it's just like prohibitively expensive. And people are just not, there just isn't AI there.
There's minstrel.
Speaker 1 That's
Speaker 1
chat. That's minstrel.
They have the chat.
Speaker 1 And so it's like there is real genuine harm to this strategy of like, well, let's just really over-aggressively account for everything possible in this very broad way is that people are just going to leave because they don't want to risk this.
Speaker 1 Yeah.
Speaker 2 I'm on your team.
Speaker 1
Yeah. I mean, it's, you know, ideally somewhere in the middle.
Like, yeah.
Speaker 1 I think most people fall in that camp, which is that there should be some kind of regulation, probably more thorough than what California just passed and probably less thorough than what the EU is doing.
Speaker 1 Yeah.
Speaker 3 I mean, everybody's making the guess with uncertainty in mind, right? Nobody knows what this actually translates into.
Speaker 3 Like, how much value and meaning does this have to the average person 10 years from now, 20 years from now, 30 years from now?
Speaker 3 And we're all going to make our best guess as a country or as a society of laying the foundation for the direction we want to go in. And we won't know who's really right until it's
Speaker 3 really 20, 30 years from now.
Speaker 1 Wild to live through this, you know?
Speaker 4 The launch of Chat GPT into now is such a wild, interesting time.
Speaker 1 And we're all figuring it out live. And it, it, yeah.
Speaker 3 I had the exact thought this morning, which was, oh, this is, this this is a similar transition to something like the Industrial Revolution. Or if you were just looking at something basic like
Speaker 3 the automobile or your country deciding to invest really heavily into train infrastructure instead of building a bunch of car infrastructure, right?
Speaker 3 How those gambles pay out and translate into life impacts for the average person takes 10, 20, you know, it could be 100 years. And also these things cycle too, right?
Speaker 3 Because there's periods of time where certain countries that were ahead of the game leaned really, really heavily into
Speaker 3 automobile production and making a more car-based society like the U.S., right? We leaned into that really heavily to maybe some benefit for a while.
Speaker 3 And then other countries come around and then build out a bunch of public transportation infrastructure and build their cities in different ways decades later that actually benefits them in the long run.
Speaker 3 There's some wave to ride here in each place that you happen to be in.
Speaker 3 And this is the historic technological moment of our time that we'll see play out by the time, presumably by the time we die, we'll have an idea of
Speaker 1 how good or how bad it was. Brian Johnson over here.
Speaker 2 So I'm going to go with the opposite take of you at the very end. I thought I was on your team, but I realize I'm not on your team.
Speaker 2 I actually am for significantly less safety in AI models, like massively less, almost zero.
Speaker 1 Here's the reason why.
Speaker 1 Hold on. You mean that they are less safe?
Speaker 2
No, like the safety regulation around it. Like I actually think there should be almost no regulation.
Like I hate the copyright side of things.
Speaker 2 When I say safety, I mean like controlling of output, auditing, these reports, all that.
Speaker 3 You think I should be able to get all the images of Mario smoking weed I want.
Speaker 1 So, well, let me let me throw it this way.
Speaker 2
Well, that was the copyright thing. I'm actually not too sure about.
I'm still in the process of really coming to a conclusion of how I think about the copyright side of things.
Speaker 2 I think there will be more damage done to people distributively about replacing their friendships and their close relationships with ChatGPT, like both emotionally and in their life, than being allowed to build a pipe bomb based on things you found on chat GPT.
Speaker 2 Like I don't think those things are going to harm many people.
Speaker 1 I totally agree with you.
Speaker 2 Like I think the safety we're approaching and these regulations that these internal companies are doing are actually meaningless comparatively to the damage they're doing to people psychologically.
Speaker 2 Like that's more much more things I'm worried about.
Speaker 1 And those are like, oh, don't, God, don't worry about that.
Speaker 2 Get a friend, friend.com. You can just
Speaker 2 wear it. And that's going to like kill people.
Speaker 1 Like that's going to kill people. 100%.
Speaker 4 Fully legal, financially incentivized stuff is going to do way, way more harm than,
Speaker 1 yeah.
Speaker 2 I love how they're like, dude, we can't let let them know how to build a nuclear reactor.
Speaker 1 I'm like, brother, like, that's not
Speaker 1 a thing, man.
Speaker 1 I'm enriched uranium, dude.
Speaker 3 I'm more worried about the lonely 16-year-old who can only talk to Chad GBT, bro.
Speaker 2 You know what he's, yeah, like, that's going to cause people to interact. Like, speaking of this weekend and Twitch, like, how many women felt unsafe? Like, this isn't going to help that category.
Speaker 2 If there's a category, this does not help that category. People are going to be more isolated, more awkward around people.
Speaker 2 And due to your wack, which by the way, we had no wacky segments in this, and I'm very upset about that. I'm sorry.
Speaker 2 But in one of your wacky segments, you had an official eye monitor person that would have to make sure that people are doing enough eye monitoring.
Speaker 1 We like, that's what we need. If you need more eye contact with people, more personal interaction.
Speaker 3 That's the regulation.
Speaker 1 That's the regulation we need.
Speaker 3
Okay. One thing I wanted to touch on is data centers specifically, because it's something that is talked about all the time.
Like these companies are building massive data centers.
Speaker 3 This is something that they have to invest a ton of money into. This is where so much of the focus and is and the energy required.
Speaker 3 Can you explain, as somebody who worked at Netflix, like I imagine companies like Netflix, companies like YouTube that have existed before this have massive centers of servers and ways to distribute, you know, the video that I consume all the time, right?
Speaker 1 And
Speaker 3 there is clearly a difference in scale here in what what these companies require.
Speaker 3 Can you explain the difference to a normal person like me of why these things demand so much more power, so much more space, and
Speaker 3 what is actually happening in these facilities that's different than the big places that are providing me
Speaker 3 like Netflix video?
Speaker 2
Yeah. So they're two really different operations.
So to kind of really understand this and understand the request response model, HTTP. So right now on the screen is still the Paul Graham tweet.
Speaker 2 How I got that is that I went on my computer, went to an address, and I made a request that gets sent across the wire, that goes to some sort of server, and I logged in.
Speaker 2 You know, they get all the information out, they go and check a database, they get all this information, then at the end of the day, they return out some amount of HTML, text, JSON, some data back for you to be able to look at it and peruse that data.
Speaker 2
And that requires a CPU. Everyone's familiar with the CPU, that just simply processes instructions one at a time.
Now, the amount of power it takes to run one of those machines is still a lot.
Speaker 2 Like it's still like if you were to compare you to riding a bicycle, it's, it's, you know, you can go pretty far on it, I'm sure.
Speaker 2 Like a day's worth of serving Netflix on a single machine is like you biking a huge amount of distance. So power is kind of goofy to think about in that kind of sense, but it takes power.
Speaker 2
But AI does not run on a CPU. CPUs, what they're really good at is they're like the Hussein Bolt.
They're like just, oh, I'm so fast. But if you had to ask Hussein Bolt to say,
Speaker 2 move a million stones, he would be slow compared to a million people, right?
Speaker 2 A million people would be able to just crush Hussein Bolt every single time because they can just move so much faster together. And so that's kind of more like a GPU.
Speaker 2 GPUs use a lot more kind of power. You have these machines that are just all GPUs.
Speaker 2 They generate a lot of heat and they use a lot of power to kind of process all these instructions because what you're doing is doing flops, floating point operations, a lot of just mathematics.
Speaker 2 It's doing linear algebra across gigantic amounts of data, millions upon millions of operations for you to be able to produce wholesome photos that make your heart happy and all that, right?
Speaker 2
And that's what's happening on Grok currently right now. And so that's like, sure.
That's what's happening.
Speaker 2 And so that just requires much, much more because for me to say, no, you're logged in is going to be a search on a database, which is going to be a technical terms now, a logarithmic search probably across data, if not
Speaker 2 like a constant search, a very fast, just single value lookup. And that can go, okay, hey, this person's logged in versus generating something is going to require a huge amount of operations.
Speaker 1 Doing the Gavin Newsom sounds like a huge amount of them.
Speaker 2 And like, that's why this is so much more power intensive. It's because one request makes a lot of operations.
Speaker 2 Like even the most crappy written software, which by the way, modern software is really crappily written.
Speaker 2 Lots, like tremendous waste in computing cycles is just like not the same order as it is in computing a bunch of linear operations. It's just, it's just like, they're just vastly different.
Speaker 2 And so that's why they tend to have this much larger amount of power consumption. And we're only just beginning right now with AI.
Speaker 2 So all the figures I found, about 20% of data center power and everything goes to AI.
Speaker 2 So it's not a huge amount of the data centers and all that, but still that's a big commanding amount considering how long it's been going on. So by 2030, is that gonna be 10 times more?
Speaker 2
Will be 98% of all data centers will be that. And then comparatively, the world is actually still pretty small.
Comparatively, maybe.
Speaker 2 I don't even know if those numbers are to be trusted, the ones that I found, but I've just asked Grok, ChatGPT, and found some papers on it.
Speaker 2 And so I'm just like, I think this is right, or Grok's telling me the paper that's lying to me. I don't, you know, I never know the actual answer.
Speaker 3 This might be silly, but
Speaker 3 who, like, in the actual facilities, who is there? Is it engineers? Is it
Speaker 3 like, is it programmers? Is it people that are just making sure that the
Speaker 3 GPUs are cold enough? Like, who, who is in this giant facility with all these GPU racks?
Speaker 2 Have you seen Silicon Valley? Yeah. You know, that one guy that was,
Speaker 2
who's in the place. No, they're like sysadmins.
If that's that's probably not the right term, but there's a there's gonna be a lot of different people in these kind of places. Yeah.
Speaker 2 There's gonna be a whole like ranging from every single job from security all the way up to somebody, you know, a logician going in there and making sure that everyone's having the right logistics and things are being transferred.
Speaker 2 Because as these things run, computers just break. So there's there's people that are, you know, root causing, finding where things are.
Speaker 2 There's the physical movement of going to a place, pulling out a rack, changing out the parts, putting something back in to all the identification systems being built that are probably being built, maybe in the Silicon Valley for a data center that's running in Godforsaken, Ohio, right?
Speaker 2 Like, there's like all these kinds of things that's by the way, that's U.S. Middle East one.
Speaker 1 Hey, there's two people in the chat,
Speaker 1 dude. That's a funny joke right now.
Speaker 2 That's a good joke. It's the only one running today.
Speaker 2 But there's like, that's happening. So it's like, who's running this thing? It would be such a complex question to actually answer because there's so much different things going into it.
Speaker 2 Because I guarantee you, there are millions of lines of software that have been written for Amazon, U.S., you know, U.S. East or U.S.
Speaker 2 West to run the way it does because they have, you know, it has to be such intense monitoring, such intense all this stuff. There's people overseeing it.
Speaker 1 So there's no simple question there.
Speaker 2
Yeah. And that's all conjecture.
I'm just guessing based off my software experience because I've never ran a data center.
Speaker 1 Yep.
Speaker 2 I've ran a, I used a crypto mine in 2013, so I know what it takes to run a few GPUs, my friend.
Speaker 2 But by the way, I sold them all for like $100 each.
Speaker 1 Still, I'm in a little bit of pain today.
Speaker 1 Also have 12 looked in the wallet, 12 thinking about it.
Speaker 4 Gotta move on. Getting to CSGO knives like this guy.
Speaker 1 Yeah.
Speaker 2 I was thinking about picking up a leisurely activity to help me forget like League of Legends, to help me forget the pain of crypto. League of Legends.
Speaker 1 No, that was.
Speaker 3
I mean, it'll have replaced your pain. Yeah.
A different type.
Speaker 1 Yeah.
Speaker 2
But I did, and we should talk about the water because this is super hot right now. Yeah.
Just got brought up.
Speaker 2 I know we preceded this one, but we said this is super, super important, which is, I don't know, have you guys heard of the whole water argument going on right now?
Speaker 4 People say if I do a Google search, 10 gallons of water are gone.
Speaker 2 Yeah, it's just like all this water is going on. Let me put it into perspective.
Speaker 4 I Sergei Bryn drinks it over at Google.
Speaker 1 Yeah, he's waterlogged. I mean, that guy drained his wet at this point.
Speaker 2
So, every time, like, the whole argument is right now, AI is destroying all the water. And that's kind of like a big thing people are hearing.
And you will see this.
Speaker 2 BBC just made some big thing about like in Scotland, they have all this water problems because of AI and all this. But it turns out pretty much that is just more fear-mongering going on right now.
Speaker 2 The amount that the total global AI usage is, is the same amount of water usage as 2008 Google. So not very much.
Speaker 2 Effectively, in the United States, it's like eight towns of 16,000 people worth of water.
Speaker 2
In other words, one golf course uses more water than like globally all the AI. Interesting.
So in the United States, it's 8%. It's
Speaker 2 7 through 9%. I can't remember the exact number.
Speaker 2
Let's just go for 7, for example. 7% of water is used on golf courses.
It's like 0.03%
Speaker 2 is used on
Speaker 2 data centers data centers not even just ai just data centers so do those two numbers you're like ah so if we used a thousand percent or a thousand x more we'd be as bad as golf so if we stop using chat gpt we could have even more pristine golf
Speaker 1 we'd be able to make we haven't framed the threat properly actually it's if we keep expanding ai we'll have less luck
Speaker 1 on that on the back nine which i can't hatch which on the back nine because the back nine is more important than the front nine.
Speaker 3 It's kind of setting the tone for it.
Speaker 4 I'm not sure if I have my nice times. Yeah.
Speaker 1 Jesus Christ, I don't know how much water golf uses.
Speaker 3 Dude, it's so much. It's
Speaker 1 insane. That and like, yeah, like having a hamburger is
Speaker 3
or just people's lawns in general. Lawns.
Any type of
Speaker 1 welfare.
Speaker 3 Okay, come on, guys.
Speaker 2 Let's like, let's all be friends here.
Speaker 1 Okay.
Speaker 4 Yeah.
Speaker 1 Just burn the American flag. Why don't you?
Speaker 1 Damn, dad. We burn golf courses.
Speaker 2
I thought we were friends. No, but I wanted to bring that up because while we were in that meeting, someone brought up like, oh, and the water usage.
And so I was like, and the water usage.
Speaker 2
I never, like, I've heard of that. I always thought it was a lot of water.
So I started reading papers and I was like, oh, we're just wrong.
Speaker 1
The water usage is not the problem. They're fear-mongering with the water usage by comparing it to what a person uses at home.
Yeah.
Speaker 1
And like, if you compare it to that, then yes, it sounds like a lot of water. You compare it to any other industry.
It's it is not the current concern.
Speaker 1
And like, I think energy broadly is more concerning. Yeah.
Water is not. Yeah.
Speaker 3 One, so
Speaker 3 some of these questions already are kind of fusions of people's suggestions from our fans and community.
Speaker 3 And a really, probably the most recurring question I saw was, hi, I'm a young 20s or I'm going into college. I am a programmer of some kind.
Speaker 3 With how I feel about the industry right now and my trajectory for the future, this industry's trajectory in general, should I change my career path right now?
Speaker 3 Should I have a sense of optimism of where
Speaker 3 or how I can work in this space right now because my options feel so limited? So can you speak to that mid to low 20s person who's struggling with that right now?
Speaker 2 I have a lot to say on it, but Doug, I want to hear yours first.
Speaker 1 It is interesting. And personally, why I wanted to chat about this is because at this Newsom thing, genuinely, there's maybe 12 streamers there.
Speaker 1 And at one point, people are saying how another streamer brought up the general sense of sort of pessimism and worry about the state of the world. And Gavin just said, Is that real?
Speaker 1
Not to be put and not to put you down. I'm genuinely asking that sense of doom.
Like, do you all feel that? And everybody kind of nodded and said, Yeah, things just feel worse nowadays.
Speaker 1
Things just feel on edge. Feels like we're moving towards civil war.
And then Prime was like, Nah, I think it's great. And
Speaker 1 then I was like, Yeah, I'm kind of with him. And so
Speaker 1 I am optimistic. I mean, doing this show and fucking, you know, drinking up fire hose of horrible news every week,
Speaker 1 it does make things a little bit worse.
Speaker 1 Me, I know.
Speaker 3 doug doug doug here's the sudan civil war you don't have to pay attention
Speaker 3 500 000 children have died of famine doug it's
Speaker 2 how do you feel now dogs
Speaker 3 i mean i i mean even i i understand that the broader context uh of people feeling the doom but i i so right where is no no no to specifically get to your question i it's just it was interesting in that context being like damn i doesn't feel like there's that much optimism right now.
Speaker 1 For me, the reason I'm optimistic is because I think that somebody who is driven and or creative has more tools to enable their output in the world and to do something than ever before.
Speaker 1 And I compare that to myself and my friends 20 years ago to now.
Speaker 1 And I see people who are able to go make things and not just because of AI, but just all the tools that are available, what the internet has, what YouTube offers with education, what ChatGPT can do to the average person.
Speaker 1 And I say this for random people. My sister who's a nurse, finding that tech is like making her life way better.
Speaker 1 My brother-in-law who's fixing his truck now because he can point ChatGPT at a thing who says, Oh, well, that's the blah, blah, blah piece, and you can do this.
Speaker 1 My cousin, who's now able to start a band, even though he's an accountant because these online tools have enabled him to work way faster.
Speaker 1 Me, who's able to learn all this programming at a rate and start to expand my creative repertoire, which has allowed me to hire more people than I would have otherwise and have more output and contribute more to society.
Speaker 1 So, I think that the way society is moving is going to allow, at least for people who want to put themselves out there, have far more ability to do so and to make an impact and to learn to learn to keep going to try to make a lot of mistakes and that is very inspiring to me and i think somebody who is creative and driven now has a much much
Speaker 1 there's so much potential oh my god compared to what it was 20 years ago and i think that will keep accelerating that i find optimistic
Speaker 1 what do you think
Speaker 1 and there's also a sense which we talked about which is that With my type of content, people keep coming up to me and saying, you inspired me to get into programming and I just got a job here.
Speaker 1
You inspired me to check out content creation. I was able to pull it off.
And I think like, I, I feel incredibly gratified getting that response. And I keep hearing that from people over and over.
Speaker 1
I see every single week and I get emails or talk to people in person who say. this did make a difference.
I was able to do a thing. I have changed my life in some way.
Speaker 1 And so I'm like, I know it's possible. Not for every single person, caveat, comments, commenters, I'm sure, are writing things, but that is my, that is my broad attitude.
Speaker 1 And even with all the things we talk about, I'm like, there's, there's so much good.
Speaker 2
Yeah. So I got a chance to talk a lot with Michael from Cursor.
He's the CEO. And honestly, I have such respect for them and what they're doing because a part of their whole mission is that.
Speaker 1
Sorry, real quick. Cursor is an AI coding tool.
So
Speaker 2 you can just type in what you want and it'll generate code and it can use your project as a means to generate things more accurately than, say, going to a website and putting in chat chippity.
Speaker 2 And so I got a chance to meet with him, talk with him about a bunch of like just stuff they're doing at Cursor.
Speaker 2 And a huge of it that I just have so much respect for them for is that they think the programmer is important.
Speaker 2 When they look at the world and where they're going, some people would probably see them being like, oh, they're trying to take jobs from people.
Speaker 2 Instead, they're saying, no, we're enabling people to do anything. And we think that the human element to programming and being a part of it is so important.
Speaker 2 We're trying to build an editor that makes it easy so you don't have to go somewhere else to use it, but for you to be able to program and do stuff.
Speaker 2 Like it's super important for you to learn these skills because that is going to make you go from okay to great.
Speaker 2 And we can help you get over the the hurdles by having ease with ai and so when i hear these things i look at the world because you know it's super super easy to see the the paul grams and the other people being like 10 000 lines a day they're going to build everything you're going to do nothing right like um i he also doesn't have that accent that's my accent for everybody i i'm i'm loving your accent thank you
Speaker 2 thank you i'll start getting some irish ones coming out here soon
Speaker 2 but when i see all these things it makes me so hopeful because what i like through my experience through the community what i see is that people just need like positive voices and they're going to go and they're going to go learn stuff, right?
Speaker 2
Like it motivates them to go learn stuff. And then they realize they can take control of their life.
When I was young, how did I get programming information?
Speaker 2 I either had to look at the only code I could find and guess what it does or buy a book from Barnes and Noble if they even had a book.
Speaker 2
Now you can just like ask questions in your super stupid way and it knows what you're trying to ask and say, hey, you need this. This is how it's said.
This is what it means.
Speaker 2 You get like so much more access to stuff. And the people who are taking advantage of that, I get so many messages being like,
Speaker 2
you were right. This is great.
Programming is fun. I'm learning and I actually got a job.
I'm taking control of my life.
Speaker 2 Like I actually get to have my own kind of like freedom and responsibility for my own life as opposed to feeling kind of like a, you know, like a leaf in the current.
Speaker 2
It's like there's a lot of opportunity. So when I see that and I see AI, I get happy because at the end of the day, it's not taking our jobs.
It's going to take some level jobs.
Speaker 2 Things are going to change. Yeah, the paralegal probably is going to have a very hard time in the next little bit.
Speaker 2 But those that know how to use a, I know everyone says that phrase, I'm not even convinced it's even real.
Speaker 2 Those that are willing to learn, regardless of how easy or hard it is, those are going to be the people that are going to have a really awesome, like potential future.
Speaker 2 And I don't think it's just going to evaporate tomorrow. And so I'm like super hopeful because I think in two years, there's going to be so many more people that have access.
Speaker 2 to the ability to get that education, to get that life-changing stuff than there will be today. And so for me, I look at it like as a super positive future, despite all the potential negative sides.
Speaker 3 How do you, how do you think that, because I think that I mostly agree with that outlook and it's a good long-term sentiment.
Speaker 3 How do you think that contrasts with the guy who just graduated with his CS degree? Where if he had graduated eight years ago or 10 years ago, that guy was pretty much guaranteed a job out of school.
Speaker 3 And now he's entering a job market that is much, much more challenging, where that degree doesn't feel like it translates into anything right now.
Speaker 3 And I totally agree where the most capable and willing to learn, people who can make the best of whatever circumstance they're in, there's always a person.
Speaker 3 who will rise to the top and become successful. Those are also the people who are most inclined to share their rewarding stories with people that they've learned from.
Speaker 3 But to the all the people that are kind of stuck in the glut of the middle right now,
Speaker 3 what do you say to those people?
Speaker 3 Like it's it's and it's not to be discouraging because at large, I do agree with you. And I think the positive trend over the longer term,
Speaker 3 I just think something we've talked about a lot is there's an unfortunate thing happening right now where I've said openly, you could be me,
Speaker 3 the exact same skill set, the exact willingness to work. And but if you graduated five years after me, you would be in a totally different position that is much, much more difficult.
Speaker 3 So how do you kind of reconcile that with what you're saying now?
Speaker 2 So I first want to preface one thing.
Speaker 1 I'm not talking about all positions or all graduates.
Speaker 2
Okay. So I know, I don't, I don't know about business.
I don't know anything of what they do. I'm not even sure if those degrees were ever real to begin with.
But they're there, right?
Speaker 2 I'm not going to talk about that.
Speaker 3 My business degree is fake.
Speaker 2
Or civil engineers or mechanical engineers or even electrical engineers. Like those, I'm just going to talk about programming because that's the thing I do know.
I will say it this way.
Speaker 2 I graduated, what, 15 years ago? Yeah, 15 years ago. So, 15 years ago, I graduated, and I'll tell you this much: that there were a group of people in my class that tried really, really hard.
Speaker 2
And all of those people were hired. Then there was a group of people in my class that didn't try.
None of them got jobs. Like 40% of my class did not get jobs when they graduated.
Speaker 2 That's a very large percentage.
Speaker 2 And I'm under the personal belief that the only difference between now and then is that the people who were trying really, really hard had a, it was a requirement of pretty much having a four years of trying really, really hard.
Speaker 2 Whereas now it's like, oh, I did a summer boot camp. Why don't I have a job? I'm trying super, super hard.
Speaker 2 And it's like, yes, you are trying super, super hard, but your time scale is like vastly different than me.
Speaker 2 Like where I was at by the time I was applying for internships versus where you're at applying for internships, like I was significantly more capable than you. I had so much more work.
Speaker 2 I had so much more of these things.
Speaker 2 And so I do think that there's a whole time component that has been shifted on its head that we just are not accounting for at all like no one nobody's being like oh yeah but this person's only been learning for three months it's why don't they have a job
Speaker 2 right like there's a whole time scale going on there and i do i i i don't know how much it's actually benefected for those that are like the tryhards because i've i have so many people that i know that are the tryhards and they are getting jobs They are getting good jobs and they are completely junior, 19-year-olds getting everything that they need because they've been trying since they've been 12 years old.
Speaker 2 They've taken the long, long road of making a craft as opposed opposed to just trying to get the job. And I know that could be very defeating,
Speaker 2 but the problem is, is that so much of life and so much of expertise is not one on a 100-meter dash. It's one on a marathon.
Speaker 2 And being able to push through that for long years, like I had to live in an apartment where the guy below me to threaten to kill me and he was always smoking meth and all that.
Speaker 2 And it was just crazy down there, right? But that's like, that's part of my story is having to live through that situation working 80 hours a week, trying to figure out how to get a job.
Speaker 2 And so I spent multiple years, 80 hours a week, trying to get as good as I can to get a job in this industry. And so that's kind of how I look at it is that I do think we have, we got a bit,
Speaker 2 we got a bit messed up in like the Zerp era, the zero interest rate era.
Speaker 2 And that, like, I do think that that really, pardon my language, like that fucked up everything. Like that's actually where people are like, oh, well, why don't I get a job?
Speaker 2
I've been doing this for six months. Like they used to pay people $100,000.
I'm like, no, that was actually a broken moment. Like that was the bubble.
That was the true bubble.
Speaker 2 And now we're returning, I think, more towards regularness, where it's like the people who are trying, the people who are really invested are going to find something still good at the end of this rainbow.
Speaker 2 Now, in 10 years, it will be the same. It will be different, but to what level different? I have no idea.
Speaker 2
That's my hopeful thing. Hopefully, that white pills people in the sense that you get the control.
It's up to you to say no. It's not up to somebody else.
I've always had this thing.
Speaker 2 So, being from Montana, being from kind of on the outsider, coming into Netflix, one of the big things was everybody else was like these Stanford graduates, these Harvard graduates.
Speaker 2 And I will tell you this much:
Speaker 2 you get looked down on quite a bit from being from the South, having a Southern accent or being from Montana, being from these places that are considered backwards.
Speaker 2
Like you get a lot of discrimination back in those days. I had a lot of hurdles that I had to get over.
And what I realized is that, you know, we used to talk about diversity and inclusion.
Speaker 2
Like I realized that inclusion, they used to say is like, or diversity is. bunch of people have to dance.
Inclusion is asking someone to dance.
Speaker 2
And so that's kind of what they say is like everyone gets to dance. And I kind of always hated that because I realized I never got to dance.
And the real thing is that you have to make your own thing.
Speaker 2 And so I would, I would go to people and be like, you're dancing with me because I'm going to make this thing work because like I want to be here.
Speaker 2 And so I think there's a whole level of control that people still have in their life and especially just around technology that's just super magical that I don't know if it exists anywhere else.
Speaker 2
So I encourage everyone to don't just go to the dance. Ask people to dance, right? Get in there, be a part of things.
And you will find that there's a lot more out there. So that was my passionate.
Speaker 1
That was great. That was awesome.
Thanks so much, Brian, for coming on.
Speaker 4 I appreciate it. Thanks, everybody, for coming on.
Speaker 1
Watching this episode of Lemonade Stand. I hope you're motivated and terrified at the same time.
Yeah. See y'all next week.
Bye, everybody. Thanks for watching.
Thank you.