The Truth About Software Development with Carl Brown
In this episode, Ed Zitron is joined by Carl Brown, a veteran software developer and host of The Internet of Bugs, to talk about the realities of software development, what coding LLMs can actually do, and how the media gets it wrong about software engineering at large.
https://www.youtube.com/@InternetOfBugs
New GitHub Copilot Research Finds 'Downward Pressure on Code Quality' - https://visualstudiomagazine.com/articles/2024/01/25/copilot-research.aspx
Report: AI coding assistants aren’t a panacea - https://techcrunch.com/2025/02/21/report-ai-coding-assistants-arent-a-panacea/
Internet of Bugs Videos to watch:
Debunking Devin: "First AI Software Engineer" Upwork lie exposed!
https://www.youtube.com/watch?v=tNmgmwEtoWE&t=3s
AI Has Us Between a Rock and a Hard Place
https://www.youtube.com/watch?v=fJGNqnq-aCA
Software Engineers REAL problem with "AI" and Jobs
https://www.youtube.com/watch?v=NQmN6xSorus&list=PLv0sYKRNTN6QhoxJdyTZTV6NauoZlDp99
AGILE & Scrum Failures stuck us with "AI" hype like Devin
https://www.youtube.com/watch?v=9C1Rxa9DMfI&t=1s
YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.
---
LINKS: https://www.tinyurl.com/betterofflinelinks
Newsletter: https://www.wheresyoured.at/
Reddit: https://www.reddit.com/r/BetterOffline/
Discord: chat.wheresyoured.at
Ed's Socials:
https://www.instagram.com/edzitron
https://bsky.app/profile/edzitron.com
https://www.threads.net/@edzitron
See omnystudio.com/listener for privacy information.
Listen and follow along
Transcript
This is an iHeart podcast.
Be honest, how many tabs do you have open right now?
Too many?
Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, doom scroll so you don't have to.
Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.
Everyone's cooped up in their house.
I will talk to this robot.
If you're a truly engaged activist, the government already has data on you.
Driverless cars are going to mess up in ways that humans wouldn't.
Listen to Close All Tabs, wherever you get your podcasts.
There's more to San Francisco with the Chronicle.
More to experience and to explore.
Knowing San Francisco is our passion.
Discover more at sfchronicle.com.
Here's something good on women's health and longevity, a new podcast on iHeart.
Join us for groundbreaking conversations with renowned medical experts.
They'll share the latest breakthroughs, the good news about women's health, and the simple steps women can take to help them live healthier and happier every day.
Be sure to listen to our episode, Period Positivity, Talking to Our Daughters, where we explore how period positivity begins with open, informed conversations brought to you by our period care partner, Always.
That can be found at Walgreens, the women's well-being destination, supporting every stage.
Listen to hear something good on women's health and longevity on the iHeartRadio app, Apple Podcasts, or wherever you get your favorite shows.
Caesar Canine Cuisine asks, Why does your dog spin?
Because he wants a Caesar Warm Bowl.
Caesar Warm Bowls are microwavable meals for dogs made with real chicken and delivering an irresistible aroma that gets dogs excited.
Look for Caesar Warm Bowls in the pet food aisle.
Cool Zone Media.
Hello and welcome to Better Offline.
I'm your host, Ed Zitron.
Today, I'm joined by Carl Brown, a veteran software engineer and host of the excellent YouTube channel Internet of Bugs.
Carl, thank you for joining me.
Thanks for having me.
So, I'm going to start with an easy one.
What is a software developer?
Like, what actually is that?
So, basically, what we do is we take ideas about problems that people want to solve, generally, and we write software, we write code, that gives the computer instructions for how to do the thing it needs to do to solve the problem the person asked us to solve.
Right.
Game programming is a little bit different, but most software development is basically that.
And this is another quite silly question, but necessary.
How much of that is actually writing code?
It depends on how good the people that are asking for stuff are.
As a general rule, I would say maybe between 10% and 25%.
Okay, so really only 10 to 25%.
Even if we say 30% of the job, which is more than you said, that means the majority of this job is not actually writing code.
Right.
Now, that's largely for folks that are farther up the chain, right?
So if you're fresh out of school and you're new to the job, you don't understand how to manage requirements or any of that kind of stuff yet, so someone's going to basically hand you a thing to do.
And in that kind of case, you're going to be spending a lot more time writing code than that.
But for me, it's, you know, far, far more talking to people and stuff than actually writing code.
Right.
And the reason I asked that, and the reason we're doing this as well is that there have been a lot of stories around like LLMs replacing coders, LLMs replacing engineers,
claiming that junior software engineers will be a thing of the past due to LLMs.
How much validity is there in that?
Well, when it comes to the really, really fresh out of school kids, right, you have to basically break everything down and hand them little chunks of work.
An LLM can kind of do that, although the kid will get better over time and the LLM is pretty much fixed, right?
Right.
But past that,
it doesn't do a good job of being able to do any kind of long-term thinking.
And that's largely the job, right?
I mean, this is not a job where I come in today, do a thing, then come in tomorrow with no understanding of what happened yesterday and do another self-contained thing, and so on and so forth, right?
That's not the job.
The job is a long sequence of building up on things day after day after day after day until we get to the point where the whole thing together works and does what it's supposed to do.
So, one of the reasons I had you on as well is that there are really so many of these stories.
They're claiming that the software engineer's job is gone, that these companies will be writing all of their code with AI.
And it doesn't even seem like that is possible.
In one of your videos, which I'll link in the notes, you did a really good job on the claim that 20 to 30% of the code behind Meta, and I think Google, is written by AI now.
Again, how much validity is there to that?
Well, I mean, one of the quotes was something to the effect of: 30% of the code is suggestions that were given by autocomplete that a human accepted, right?
Right.
Which could be as little as, you know, the thing said: oh, wait, you spelled this wrong, let me give you a suggestion about how to spell it correctly, right?
I mean, how much of the actual text that you write is corrected by a spell checker, right?
If all that counts as AI, then what percentage of your stuff is written by AI, right?
Well, in my case, absolutely nothing. But I'm just a complete freak.
But no, I get your point.
And without being a coder myself, it's something I've really noticed across these stories, where people just kind of blindly push them out and say, oh yes, 20 to 30% of the code is written by AI, but there's no verifying this.
And also, it feels like it might create a bigger problem. Say we accept this idea, even though I don't, and it sounds like a pretty spurious one, kind of silly to do so.
At some point, isn't code not just the series of things that you write to make a program work? It's connected to a bazillion other things. If you don't know why something was written, because you had something generate it, is that not a huge problem?
Yes, but worse: what we're finding when code gets generated is that you basically end up doing the same thing in a bunch of different places, but in each one of those different places you do it a different way.
Can you give me an example?
So, for example,
when you need to go fetch a thing from a server, right?
Well, over here in this code, you fetch a thing from a server.
Over here in the code, you fetch a different thing from the server.
Normally, you'd be able to use the same block of code to do that.
So if there's a mistake in it, you can change it once and it's fixed everywhere, right?
But the way the LLMs work is you say, hey, I want to fetch a thing from the server.
And it says, cool.
And it writes a whole thing for you that may or may not work the same way as the previous one.
Right.
And so now
you find, okay, under some circumstances, we're having a problem fetching things from the server.
I don't know which one of these 12 implementations that go fetch from the server is the one that's actually causing the problem.
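The twelve-implementations problem Carl describes is easy to picture in code. Here is a hypothetical Python sketch (function names and endpoints are invented for illustration) of three separately generated fetch helpers that almost, but not quite, behave the same way, so a bug fix in one does not fix the others:

```python
# Hypothetical illustration of the duplication problem described above:
# three independently generated "fetch" helpers that almost, but not
# quite, do the same thing. A fix applied to one does not fix the others.
import json
import urllib.request

def fetch_user(base_url, user_id):
    # Version 1: no timeout, errors propagate to the caller.
    with urllib.request.urlopen(f"{base_url}/users/{user_id}") as resp:
        return json.loads(resp.read())

def fetch_order(base_url, order_id):
    # Version 2: adds a timeout, but still raises on failure.
    with urllib.request.urlopen(f"{base_url}/orders/{order_id}", timeout=5) as resp:
        return json.load(resp)

def fetch_invoice(base_url, invoice_id):
    # Version 3: a different timeout, and it silently swallows errors
    # and returns None, unlike the other two.
    try:
        with urllib.request.urlopen(f"{base_url}/invoices/{invoice_id}", timeout=30) as resp:
            return json.loads(resp.read().decode())
    except OSError:
        return None
```

The third version swallows errors while the other two raise, which is exactly the kind of divergence that makes "sometimes fetching fails" hard to track down.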
Right.
Also, isn't there a security issue with having large language models?
Like, wouldn't all the code be quite similar, or at least more similar, depending on whether everyone's using Claude or everyone's using, well, GitHub Copilot, which I guess is Claude now.
No, not really.
It basically picks a random number at the beginning, and I think of it kind of like dealing a deck of cards, right?
Whichever card gets turned over first, that's the beginning of the autocomplete that it starts from.
And so, depending on which example it's, I don't want to say thinking of, but, and I'm drastically oversimplifying, depending on which example is represented by that card, it's going to go down one path or another.
Right.
And so, what are these large language model coding tools actually good for?
Because I get a lot of people who respond by saying, this is proof that AI is a big deal.
And I'm not even looking for a particular answer.
I'm just truly asking: what's useful about them?
So, they are decent when you know what you want, and what you want is a fairly simple, self-contained thing, and you know how to tell whether or not that self-contained thing does what you want.
It can type it faster than you can.
Like autocorrect.
Basically, yes.
It's like autocomplete.
If you know exactly what you want.
Yeah.
I mean, so I use it a lot because I program in a bunch of different programming languages a lot, right?
On different projects at the same time or on the same day or the same week.
And it's really easy for me to go, okay, wait, which language am I in right now?
Okay, how do I do this in this language?
Right.
So you can actually understand the generation, though, when it comes to that.
Yeah, it's like I know what kind of loop I want, but I don't remember the syntax for this particular language, or I don't want to.
So it's, I use it kind of like a Google Translate kind of thing to go from one programming language to another sometimes.
But you wouldn't trust it to build a full software package?
Oh, not at all.
Why not?
Well, it wouldn't work to start with.
Why wouldn't it work?
Well, I mean, I've done some experimentation on that, where I've taken fairly complicated challenges.
Challenges that were intended for programmers to basically get better at their craft, that kind of thing.
And I've run AI through them, you know, told it step by step: okay, the challenge says this is your next step, do this. The challenge says this is your next step, do this.
On really simple challenges, in programming languages like Python that it's got lots and lots of examples for, it does okay.
Past the really simple kind of thing, they sometimes get to the point where they can't even create anything that builds at all.
Huh. Why do so many engineers swear by it, then?
Honestly, I'm not sure to what extent the engineers are swearing by it. I've talked to a lot of folks, like a friend of mine at a big bank, who are saying: my group is getting Copilot jammed down our throats whether we want it or not.
And the executives are all really excited about it, and none of us are.
Interesting.
So it's executive-pushed.
I've personally had this theory that it's all about what the bosses want to see, rather than what it can even do.
Sorry, I just need to move my cat out of the way.
There's a lot of wish fulfillment.
There's a lot of like, we want to not have to deal with these programmers anymore.
So we would rather deal with the AI thing.
And we're just going to hope that the AI thing is going to be just as good as the programmers, or close to just as good, and not nearly as annoying.
Seems like a definitional, well, maybe that's not the right word.
Seems like the difference between a software engineer and a software developer almost, because it's not just about flopping code out.
It's about making sure the code does stuff.
Yeah, I mean, those terms get
mashed together.
Yeah, I mean, part of the problem is that I live in Texas, and in Texas, you're not allowed to call yourself an engineer unless you pass the engineering exam.
Right.
So I literally can't call myself a software engineer legally in Texas, as I understand it.
I'm not a lawyer, but that's my understanding.
So it's like the terms get all confused.
Right.
So somewhat related.
What is it that people misunderstand about the job then?
Well, I mean, one of them is what you said earlier, which is that a very small percentage of the job is actually slinging code.
A lot of it is basically trying to figure out what the code should do, based on what you've been told about the problem you're trying to solve.
Another thing is that a lot of the difficulty of the job is that every little decision builds up over time.
And at some point a bug is going to happen.
They're inevitable.
And when that happens, if you're being competent, there's this process where you roll back through that series of decisions, figure out what caused the bug, and then figure out what other bugs are likely to have been caused by that same set of decisions. Then you fix not just the bug that's been reported, but the bugs that might have also been caused by the same problem, right?
Right.
And that kind of long-term thinking is not a thing I've ever seen an LLM exhibit at all.
I talk about it like LLMs or, you know, generative AI is good at solving riddles, but actual software development is more like solving a murder.
Yes, you said that in that wonderful video.
Yeah.
And it almost feels as if we are building towards an actual calamity of sorts.
Maybe not an immediate one.
Maybe it'll be kind of sectioned off into areas.
Because you've got a new generation of young people coming into software engineering, or what have you, learning to use AI tools rather than, and your videos definitely talk about this as well, actually learning how to develop software: how to make sure it works, make sure the infrastructural side is in line, and build it with the long-term thinking that someone else might need to understand how it works.
And they're not learning that.
So you've just got a generation kind of pumping the internet and organizations full of sloppier code.
Yes, although, I mean, one of the problems we're having at the moment is that the hiring process for really junior engineers is actually pretty broken at the moment.
And a lot of people are not hiring people that are fresh out of school because they're expecting that the AI will be able to do that.
Basically, the idea is that a senior or mid-level developer with the benefit of "AI", that's in air quotes, will be able to do the work of that person plus a couple of the fresh-outs they normally would have hired, but aren't hiring at the moment.
There are some statistics about how people fresh out of school these days are historically underemployed relative to the general population, at least in the U.S., where I live.
It also feels like there's no intention behind the code.
If you're just generating it, you don't really know why you made any particular choice.
You could say, I chose these lines, but at some point, if you have large numbers of software developers using it, or at least the young people in an organization using it to generate their code, they're neither learning to write better code nor learning how to develop.
They're just learning how to fill in blocks.
They'll never grow within their job.
Yeah, I mean, the trick is that those of us that have spent a whole lot of time debugging software, finding the problems and digging into them and trying to figure out what's going on, that kind of stuff:
it's going to be really hard for younger folks to get hired into those jobs so that they have time to build the experience to be able to do that.
And I'm afraid we're going to end up with basically an older generation, or generations, retiring and a newer generation that hasn't had the experience of doing that kind of debugging.
And then it's going to be a real mess, especially since, from what I can tell, the code that the AI generates is a lot buggier, and buggier in weirder, random-ish kinds of ways.
Stuff just kind of comes out of nowhere in a way that I don't.
I mean, I've debugged code from people that don't speak the same languages as I do, all that kind of stuff.
AI code is different.
It's just like, okay,
why would anyone want to put that block there?
That doesn't have anything to do with what we're trying to do at the moment.
And why is that?
Is it just because it's probabilistic?
I guess so.
I mean, it's hard to say why.
I mean, the idea of why an LLM does what it does is kind of, you know, anybody's guess.
Yeah. I keep thinking of the word calamity, because you sent me these studies as well, about how they found a downward pressure on the quality of code on GitHub.
Would you mind walking me through what that means?
Yeah, so basically, there have been a couple of these studies, but what that particular study found is that what they call code churn has gone up.
And code churn is basically when you add a line of code and push it into test or into production, and then in a short period of time, I don't remember exactly what the definition was, like a month or two months, that line of code changes.
Right.
So basically, what that means is that after the line of code got put in, somebody decided: oh, wait, no, that doesn't work right.
We're not happy with that.
We're going to change it to be something else.
Right.
And the percentage of lines, the number of lines that get changed fairly quickly after they get submitted, has gone way up since the introduction of GitHub Copilot.
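As a rough illustration of the metric (the study's exact window and definition differ, and the history below is made up for the example), churn can be computed as the fraction of added lines that get modified again within a short window:

```python
# Minimal sketch of a "code churn" metric: the fraction of added lines
# that are changed again within a short window (here, 14 days).
# The history data below is invented purely for illustration.
CHURN_WINDOW_DAYS = 14

def churn_rate(lines):
    """lines: list of dicts with an 'added' day and a 'modified' day list."""
    churned = sum(
        1 for line in lines
        if any(0 < m - line["added"] <= CHURN_WINDOW_DAYS for m in line["modified"])
    )
    return churned / len(lines)

history = [
    {"added": 0, "modified": [3]},        # rewritten 3 days later: churn
    {"added": 0, "modified": []},         # never touched again: stable
    {"added": 5, "modified": [40]},       # changed much later: not churn
    {"added": 10, "modified": [12, 50]},  # quick fix-up: churn
]
```

On this toy history, half the added lines were reworked within the window; the studies found that this kind of fraction rose after Copilot's introduction.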
So this is across, like, most of the giant millions of lines of code on GitHub.
And for a simpleton like me: why is changing it so often bad?
Well, I mean, so, I mean, if you do it right the first time, you can move on to the next thing.
Ah, right.
It's like, you know, if you're writing a document in Google Docs and you're tracking changes.
Obviously, the person isn't happy with the way that sentence is.
Right.
So the generative code isn't good.
Right.
And so people see the need to change it.
That's the presumption.
Yes.
And it also said something about the code quality itself. Is that the only way they measured it, or were there other things as well?
So they measured that, and they also measured moved code.
Yeah, the moved code.
The thing I was talking about earlier, where you've got a bunch of different places in the code that all try to do the same function, but they do it in different ways.
Normally, what would happen is you do a thing here, right?
And then at some point in the future, you need to do that thing again in a different place.
And so what you would do is move that original block that does the thing someplace else, and then you would call that block from both places.
Right, because it already works.
Right.
And then that way, however you go fetch stuff from the server, you're fetching it the same way.
But with this thing, basically, instead of doing that, you've got copy-paste: okay, let me put another one here, let me put another one here, let me put another one here.
And it's a maintenance nightmare.
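The refactor Carl describes, moving the shared block somewhere central and calling it from both places, might look like this minimal Python sketch (all names are hypothetical):

```python
# Sketch of the refactor described above: one shared fetch helper,
# called from every place that needs it, so a fix lands everywhere
# at once instead of being repeated in each pasted copy.
import json

def fetch_json(get, url):
    """Single shared implementation; `get` is an injected HTTP-style function."""
    raw = get(url)
    if raw is None:
        raise ValueError(f"no response from {url}")
    return json.loads(raw)

def load_profile(get):
    return fetch_json(get, "/users/me")   # caller 1 reuses the helper

def load_settings(get):
    return fetch_json(get, "/settings")   # caller 2 reuses the same helper
```

Now a change to fetch_json, say better error handling, automatically applies to every caller, which is exactly what copy-pasted variants lose.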
There's more to San Francisco with the Chronicle.
There's more food for thought, more thought for food.
There's more data insights to help with those day-to-day choices.
There's more to the weather than whether it's going to rain.
And with our arts and entertainment coverage, You won't just get out more, you'll get more out of it.
At the Chronicle, knowing more about San Francisco is our passion.
Discover more at sfchronicle.com.
Be honest, how many tabs do you have open right now?
Too many?
Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, doom scroll so you don't have to.
Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.
Everyone's cooped up in their house.
I will talk to this robot.
If you're a truly engaged activist, the government already has data on you.
Driverless cars are going to mess up in ways that humans wouldn't.
Listen to Close All Tabs, wherever you get your podcasts.
Want to save big at the pump?
Count on your WEX Fleet Card for major savings.
Save an average of $1,500 annually nationwide on fuel.
With the Wex Fleet Card, you can do more.
Apply now at Wexcard.com.
Terms apply.
So I've shopped with Quince before they were an advertiser and after they became one.
And then again, before I had to record this ad, I really like them.
My green overshirt in particular looks great.
I use it like a jacket.
It's breathable and comfortable and hangs on my body nicely.
I get a lot of compliments.
And I liked it so much I got it in all the different colours, along with one of their corduroy ones, which I think I pull off.
And really, that's the only person that matters.
I also really love their linen shirts too.
They're comfortable, they're breathable, and they look nice.
Get a lot of compliments there, too.
I have a few of them.
Love their rust-coloured ones as well.
And in general, I really like Quince.
The shirts fit nicely, and the rest of their clothes do too.
They ship quickly, they look good, they're high quality, and they partner directly with ethical factories and skip the middleman.
So you get top-tier fabrics and craftsmanship at half the price of similar brands.
And I'm probably going to buy more from them very, very soon.
Keep it classic and cool this fall.
With long-lasting staples from Quince, go to quince.com/slash better for free shipping on your order and 365-day returns.
That's q-u-i-n-c-e.com/slash better.
Free shipping and 365-day returns.
Quince.com/slash better.
So, for the audience as well, how does a software developer actually use GitHub?
Really simple stuff, I realize, but it just occurred to me that this may be something most listeners don't know, and I think it's good to cover.
Yeah.
So, so what we do is we basically make changes to code.
We get to the point where we, the developer, are happy with the way it's set up on our machine.
And then we do what's called a push, where we basically submit all that code up to GitHub.
And then
theoretically, you know, there can be automatic processes that kick in that, like, check that code for particular things and run tests on it and that kind of stuff.
And then at some point, we open a thing called a pull request, which is basically a thing that says: okay, I would like this to go into production now, more or less.
I would like this to get promoted into the next phase now.
And then someone theoretically will look at it and go, okay, that's fine.
And then click the yes button or say, hey, you forgot about this, go look at this or that kind of thing.
Right.
And the pull request is kind of the unit of work, kind of.
So with GitHub, you almost use it like an organizational code dump, where you centralize all the code.
Sorry, just explaining for the non-coders as well.
And I think that the LLM industry has done a really good job of dancing around these terms and selling them to people like me.
Well, the selling didn't work on me.
I am too stupid.
But it's where they've just been like: okay, yeah, well, lots of people use Copilot.
That's good.
And this is good because software is coding.
But it kind of feels like, I don't know, all of this is taking one major part of software development and ruining it.
And I don't even mean coding.
I mean the intentionality behind software design and infrastructure and maintenance.
It seems like they're removing intention from multiple parts.
So the way I would say it is when they talk about the AI being able to do the work of a programmer, what they're doing is they're devaluing all of the stuff that's not just hacking code.
Right.
And so what they're saying is that the job of a developer is basically just, you know, typing.
And that all of the work that we do to understand what the problem actually is and how it needs to work and, you know, what other problems are likely to show up when we try to do that and how to avoid those things as we go and that kind of thing.
All that work is basically not important.
And I mean, I'm going to say two words, which will probably annoy you.
This is, I feel like vibe coding is the other part of this.
So if I'm correct, correct me if I'm wrong, vibe coding is just typing stuff into an LLM and software comes out and hopefully it works.
Yeah, vibe coding is basically when you, well, I don't know about intentionally, but you make a point of not digging into the code and looking at what the LLM is doing.
And you basically say, okay, I would like something that does X, right?
I would like a game where I fly airplanes around a city or something, right?
And then you get what it spits out.
And then you say, you know, okay, let me try it.
Okay, well, can we have more airplanes?
And okay, can we have some balloons with, you know, signs on them now?
And can we do this kind of thing?
And then you don't think about what the side effects are.
You don't think about what things could go wrong.
You don't think about error conditions, that kind of stuff.
And you just hope that whatever you're looking at has the right vibe, and that if it looks like kind of what you wanted, it's probably going to be fine, or hopefully it's going to be fine.
How do you feel about vibe coding?
Um, so I do it sometimes.
Vibe coding is great for a thing that you're going to do once and then throw away.
Yeah.
Right.
So if it's like, you know, okay, I want to do a thing: I want to make this table go into this format over here, that kind of thing.
You do it.
You get the output you want.
You throw the code away.
No big deal, right?
Like a prototype almost.
Yeah, basically.
And so, you know, we call them spikes or tracer bullets sometimes.
It's like a, you know, let me get a thing that works at all, right?
And then let me see what I can learn from that to move into my big maintainable project.
But for anything that's like, you know, this thing needs to run for a while, this thing needs to not get hacked, this thing needs to not crash, it's a really bad idea.
Yeah.
And at some point, I feel like someone building a product that they don't really understand the workings of is almost identical to generating a story with ChatGPT, except more complex and more prone to errors.
Yeah.
And the other thing is that there's an adversarial component, right?
So
people will intentionally try to go hack that thing that's sitting on the internet.
Oh, right.
In a way that they don't intentionally try to go mess with the story that you wrote.
Right.
Right.
And so even if it works all by itself, that doesn't mean it's going to work when somebody starts pounding on it intentionally trying to break it.
And if they can break it, then that's a whole other set of problems that you now have.
It feels like quality assurance is just never part of it.
Oh, no, are they claiming they're going to do quality assurance with large language models yet?
They must be.
Some people are.
Yeah.
I mean, to be honest, a lot of companies have just
been getting rid of quality assurance over the years, right?
Oh, really?
When I worked at IBM, we didn't have quality assurance at all.
No, seriously, they would do this.
I was in IBM's cloud group, and they would do these, what do they call them, hackathon kind of things.
They didn't call them that. I don't know what they called them. But basically, everybody in all the other development groups would get together and basically bang on the code that was about to get released from some other group, to try to see if they could break it.
Right.
But they didn't have dedicated testers anymore because they decided, I guess, that they didn't think they were worth the money.
I don't know.
But we had some issues because of that.
When did that movement happen?
I don't know. I was at IBM in like 2017, 2018.
Right.
So it would have been sometime prior to that.
When I got there, they didn't have any QA folks.
It really just feels like it's a management problem as well.
It's management cutting people.
I would think so.
It's a real shame as well.
And, forgive me if I'm forgetting exactly where, you've mentioned as well that there's a kind of compound scar tissue from AI-generated code, a larger problem of lots of this code being generated with AI.
Well, that's my expectation, right?
Yeah, just a potential worry.
Right, right.
So that the more of this we get and the more issues that we have,
the more stuff we're going to have to dig out of, right?
And what I'm honestly envisioning, at some point, I don't know how long this will take, the crypto bubble took way longer to pop than I expected.
So I don't know how long it's going to be before this one does.
But I'm expecting that there's going to be this big push to try to clean up a bunch of this crap a few years from now, once people realize that a lot of the code that's being written and generated right now has all of these vulnerabilities that nobody's bothering to check for at the moment.
Right.
And those vulnerabilities, to put it again in a non-technical way: I read that it's like they call upon things on GitHub that don't exist.
So bad actors create something that resembles what it's pulling from.
So that's a more specific kind of one.
I mean, there are a lot of things.
I mean, so there have been computer viruses since the 80s, right?
Right.
You know, the Morris worm and that kind of stuff.
And basically, there are known ways: you have to write code in a particular way in order for it to be secure, right?
Right.
And even then, sometimes people come up with novel ways of making something not secure.
And how do you have to write it to make it secure?
If it's possible to explain?
Well, I mean, there's a big, long list of rules, right?
I mean, one thing you can do is you can use languages that are what they call safer.
Right.
But still, you have to make sure that any input that you get from the network, you're really, really careful to make sure that it doesn't get to overwrite parts of your program that actually execute things.
Right.
You have to make sure that it doesn't have the opportunity to be able to write to places on your disk that it shouldn't be able to write to.
You have to be able to make sure that it doesn't have access to read data that it shouldn't be able to read.
You know, all that kind of stuff.
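As a minimal illustration of one of those rules, not letting untrusted input write to places on disk it shouldn't, here's a hedged Python sketch (the function name and layout are invented for the example, not a standard API):

```python
import os

def safe_write(base_dir: str, user_filename: str, data: bytes) -> str:
    """Write user-supplied data, refusing any path that escapes base_dir."""
    # Resolve the requested path, then verify it still lives inside
    # base_dir, so input like "../../etc/passwd" can't reach the
    # rest of the disk.
    target = os.path.realpath(os.path.join(base_dir, user_filename))
    if not target.startswith(os.path.realpath(base_dir) + os.sep):
        raise ValueError("refusing to write outside the sandbox")
    with open(target, "wb") as f:
        f.write(data)
    return target
```

The same shape of check applies to reads: validate first, then touch the filesystem, never the other way around.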
And when those things don't happen, you end up with, you know, so-and-so got hacked.
You know, turns out that somebody we think maybe China is reading the email of the, you know, people in Congress.
You get another letter in the mail that says your social security number has been, you know, leaked by some credit-checking firm or something like that.
Even like, I think it was, what, the big Target data breach from a while back was through the HVAC system. And that was with humans writing the code.
Right.
Imagine if we didn't know.
oh, God, it really does feel like the young people are going.
Actually, no, I take it back.
You were talking about Agile the other day.
I'm going to ask you to explain that in a second.
But it's like, it sounds like for almost decades, they've been gnawing away at, management's been gnawing away at the sides of building good software and building good software culture.
Yes, I mean, there's an argument that says we never got it right in the first place.
But I mean,
I mean, if you think about it, software has been a thing for what, 50 years, 60 years, 70 years, right?
I mean, compare that to like construction engineering or bridge building or that kind of stuff, right?
We're still, you know, relatively speaking, in our infancy as an industry.
You know,
it's been a constant evolution.
And a lot of times, the things that we did to solve a problem we had ended up causing other problems, right?
So going back to Agile: in the long, long ago, right, we used to manage software projects the same way we manage, like, bridge building, you know, construction projects.
And it turns out that when you're going to build a bridge, you know beforehand what you need to build the bridge to do.
When you're building software, a lot of times people are changing their minds as you go, right?
And you build a thing and you show it to them.
They're like, oh, why do we put this over here?
And why don't we change this and that kind of thing, right?
Right.
Because you don't have the same kind of constraints, physical constraints, that you do when you're trying to build a bridge. And so we got in this problem where you would create these project plans about how you were going to build this thing, and you would never be anywhere close to on time, because things would change the whole time, right?
And so they created this thing called the Agile methodology. I'm drastically simplifying; there were steps in the middle. But basically, this Agile thing is where, instead of saying, okay, this is what the whole project's going to look like, this is what we're going to be doing, we're going to be done in six months, and then things changing along the way, we basically block off a thing called a sprint. It's a week or two or a month, maybe; it depends.
And then, you know, everybody picks their own sprint length.
And then you go, okay, I'm only going to talk about what's going to happen in the next sprint or two, right?
And then you get to the end of that two weeks and you go, okay, cool.
This is what we got done.
What do we want to do next?
And then, okay, that's what we got done.
What do we want to do next?
And that kind of thing.
And that way.
As you go, you have the opportunity to change things.
You have an opportunity to roll changes into the process, that kind of thing, right?
The problem with that is, kind of the same way that dates always ran out in Waterfall, projects can go way, way longer than they were expected to at the beginning, because everybody's focused on just two weeks at a time.
And you never kind of take a big step back like you ought to and go, okay, wait, you know, we were supposed to be done, you know, two months ago.
When are we going to wrap this up?
Right.
And how has that led to things getting worse?
Is it that just software culture, software development culture has been focused on short term perpetually?
The short term is part of it.
Part of it is there are
unscrupulous developers out there that basically want to extend the length of the project so they can get more money out of it.
Right.
Right.
That's always the case.
But the other thing is that, a lot of times, you end up with a real lack of, like, long-term planning and long-term understanding.
Right.
Right.
Because everybody's, you know, same kind of thing, you know, companies are only worried about what happens next quarter.
Right.
If you're only worried about what's going to happen the next week or the next four weeks,
the things that you look at, you know, tend not to have the longer-term implications that sometimes you need, right?
And there are times you get close to the end and you're like, oh, you know, we didn't think about this.
What's going to be a problem?
Yeah.
Yeah.
And also, if you're in a two- or three-week thing, you're probably not thinking even about what you did last sprint. Maybe the last one, but not, like, two or three ago.
Is this a problem throughout organizations of all sizes?
Is this a consultancy problem?
Is it everywhere?
It's most places.
There are some places, usually in startups, that are a lot more ad hoc and a lot more focused on trying to get things done.
Basically, the idea is: the larger you get as an organization, and the more money you're throwing at it, and the more management control you want, the more of this overhead you put in place, and the more complicated things get, just as a management-structure kind of thing.
And the bigger it is... so this is something you'd see in, like, a Google and an Amazon as well?
Oh, absolutely.
So do you think it has the same organizational effects, or...
Largely, yes. So those organizations tend to be, well, those organizations historically have tended to be, before the recent, like, enshittification wave...
I'm assuming I can swear on this?
Yeah, yeah, yeah.
Those organizations have historically been fairly more engineering-driven, which means that you typically have people higher in the organization that are technical and have been programmers and who understand some of the implications.
And so they tend to try, at least we try, to run interference with management and to try to, you know, make sure everybody's on the same page and that kind of stuff.
A lot, not all, but a lot of problems can get lessened if you have people in the organization at a higher level whose job is not to manage people, but whose job is basically to keep track and coordinate between different groups that are doing different technical things.
Right.
To make sure people aren't building the same thing, I'm guessing, or are building the right thing in the right way.
Yeah.
And how what this group is building is going to impact what that group is building at some point in the future, and making sure that when you get to the point where those two things need to talk to each other, they're both aware enough of what the other one is doing that the two things hook together correctly.
Yeah.
So based on my analyses of these companies, that's definitely gone out the window.
I mean, even with LLM integration. So there was a Johnson & Johnson story that went out in the Wall Street Journal a couple of weeks ago, where it was like they had 890 LLM or generative AI projects, of which, the Pareto principle wins again:
10 to 15% of them were actually useful.
And the thing that stunned me about that, other than the fact it confirmed my biases, which I love, was the fact that there were 890 of the fucking things.
And no one was like, should we have this many?
That there was no like
software engineering culture that was like, hey, are we all chasing our tails?
Is this useless?
But it sounds like they were all focused on their little boxes.
Yeah, I mean, so the other thing, so understand that, again, greatly oversimplifying.
A lot of the new stuff that's happened with large language models and generative AI, people didn't expect, right?
It was kind of a surprise when you threw a whole bunch more data at a large language model and it started spitting out text in a way that nobody really... there was no, like, mathematical reason to expect it to be as good at generating autocomplete stuff as it was, right?
And so there's this belief that, if we did the thing and we unexpectedly got more than we asked for, then if we do more of the thing, maybe we'll unexpectedly get more of what we wanted, right? That hasn't seemed to really pan out the last couple of years, from what I can see.
But that "we don't really understand enough about this to know whether it's going to work, so we might as well throw spaghetti at the wall and see if it sticks, because it might" kind of mentality is kind of pervasive at the moment.
And everybody's, there's a lot of FOMO.
There's a lot of like, you know, well, our competitors are probably doing this.
And so
we don't want to get left behind.
It kind of reminds me of the
rumors that they talked about back in the 80s when the CIA was doing all this psychic research because supposedly the Russians were doing psychic research.
And it was all complete crap, but both sides were convinced that the other side was making some progress.
And so everybody was dumping a ton of money into it.
LLM MKUltra.
Exactly.
Yes.
Title of the episode.
So, okay, LLM MKUltra aside, is this something you're seeing in software development, though?
Because I know I've seen it in management where it's just going to like, shove the shit in there.
This seems like it's an important thing, right?
Or is this, are you seeing it within software development?
So I am seeing it within
software planning, right?
So when managers are sitting down and saying, okay, we need to build this new thing.
We need to create a new group.
We need to split this group apart.
We need to decide what our headcount is going to be for next year.
There's a lot of, okay, and what do we think the AI is going to do next year?
And how many headcount do we think that's going to save us and that kind of thing, right?
There are some companies, Duolingo is one, Klarna is one.
Yeah.
OP, sorry, BP, the former British Petroleum, what, last year had a thing where they said they were cutting 70% of their contract software developers.
And in most of these, they've kind of rolled them back as well.
I don't think Duolingo has yet.
No, this is me just being unfair to you. They did, like, a day ago.
Really?
It's so funny that this just keeps happening in exactly the same way. It's like, oh, what a surprise, human beings can do stuff.
Yeah.
But it kind of gets back to, I think, what you've said about everything with LLMs. It's like, you can teach something to say, yeah, I think the thing you're looking for is this, but you can't teach it context. And that's been a point you've made again and again. Like, it seems the job of a software engineer is highly contextual, unless you're, like, in the earlier days.
Yeah. And I liken it sometimes to the Memento guy, from the movie Memento, right? Where, like, he can't form long-term memories.
And do you really want the memento guy to be the person that's building the software that makes the 737 Max
be able to compensate for its control input?
Yeah.
Well, the thing is, though, with that argument, they would argue, and I know that there is a better argument here.
They would argue, well, what if we just give it everything that's ever happened?
What if we just show every single thing we've ever done on GitHub?
Surely then it would understand.
So
what I have seen from the papers that I have read is that LLMs have a basically squishy middle context problem, kind of the way that you do, right?
So if somebody gives you a big document to read or a big long documentary to watch or something, and then they ask you questions, what they're going to find is that you remember a lot more from the beginning of that and the end of that than you do from the middle of that, right?
And LLMs have the same kind of problem, right?
And the other problem that the LLMs seem to have is that when you give them a whole bunch of instructions, just instructions piled on instructions, piled on instructions,
they can either get confused and forget some of the instructions or they deadlock or they just start going, okay, I can't satisfy all of these, so I'm not even going to bother to satisfy any of them.
Or they'll pick one or two.
The fact that you can take a million
tokens and you can stick that in the memory block that
the GPU is going to process
doesn't necessarily mean that all of the tokens in that memory block are actually going to be treated equally and going to be understood, right?
In theory, maybe, if you could, like, custom train an LLM, and modify all of its weights based on exactly what your stuff was, and do that, like, day after day after day as things changed, you would theoretically get better.
I still don't think it would be, you know, I still don't think it would understand the context as well, but that would be ridiculously expensive.
Yeah.
And at that point, you could train a person.
Yes.
I mean, the person would probably be more annoying.
So that's, I mean, the point. I mean, a lot of this seems to be really, you know, we don't like dealing with the prima donna programmer kind of thing, right?
There's this, you know,
I mean, not just programmers, right?
We also don't want to deal with the prima donna reporters, or the prima donna illustrators, or the prima donna...
Get rid of these people.
Right.
These are annoying.
They ask for stuff.
They want money.
Yeah.
And days off and sick leave and, you know, healthcare.
This is disgusting.
How dare they?
It's so frustrating as well because it across software development and everything, but especially with software developers, it feels just very insulting because it doesn't seem like this stuff.
Actually, here's a better question.
Have you seen much of an improvement with, like, o1, o3, like these reasoning models?
Do you think the reasoning models change things for the better?
And if so, how?
So
a little.
They don't make as many stupid mistakes
is basically what it boils down to.
Going back to your first thing, though, right?
I mean, so there was a piece, actually, a couple of pieces recently.
One of them was about, you know, tech workers are just like the rest of us.
They're miserable.
I'll give you links to these.
The other one was a Cory Doctorow piece that was like,
the future of
Amazon coders is the present of Amazon warehouse workers, or vice versa.
There has been a lot of deference given to software developers over time because
we have been kind of the engine that's making a lot of the last 20, 30 years work.
And there's a desire to make that not so anymore, right?
And to make us just as interchangeable as everybody else.
I guess
from an economic standpoint, I kind of don't blame them.
I understand why they're trying to do what they're doing.
I don't think that the warehouse workers should be treated the way the warehouse workers are treated, you know, much less everybody else gets treated that way.
And it's been a lot worse since the giant layoffs at Twitter, now X,
when that happened and the thing didn't crash and completely burn like everybody was, or not everybody, but a lot of people were expecting it to.
The
sentiment became, well, maybe all this, all these software developers aren't as important as they, you know, we've always thought they were.
And, you know, we will see over time what the end result of that is.
My guess is it's going to end up being a mess.
But, you know,
I'm a software developer, right?
I'm going to,
it behooves me for it to be a mess, right?
So it might just be my bias that's getting in the way.
I actually, I think that you're right, though. Because I remember back in 2021 and onward, kind of post-remote work, there was the whole anti-remote-work push, but there was this whole quiet-quitting thing. That's 2022. Where it's like, software engineers just expected to be treated so well, because 2021 saw the insane hiring.
You saw tech companies like parking software workers.
I think that played into it as well, where all of these companies who chose to pay these software engineers, they were the ones that made the offers, got pissed off that they'd done so and thought we should cut all labor down to size.
And then along comes coding.
Almost makes me wonder if most of these venture capitalists talking about this don't know how to code themselves.
You gotta wonder.
I don't know many that do.
Yeah.
I know some that have at some point, but
I just think it's at some point.
It's like they're not part of modern software development culture, which I know sounds kind of wanky, but I mean, just how an organization builds software feels like something they should know.
But then again, they don't know how to build a real organization either.
So who the fuck?
Yeah.
Well, I mean, honestly, a lot of it,
I've been in organizations that VCs basically killed
because, you know, we built a thing.
That thing was, you know,
a reasonable business, but VCs don't want a reasonable business.
They want either 100x return or they want a tax write-off.
And they don't want anything in between.
Right.
Yeah.
So, I mean, what they're looking for is really, I mean, they're not trying to run a regular business, right?
They're not trying to do the normal process.
They're trying to either, you know, hit one out of the park or throw it away and move on.
And so the rules for them are different because what they're trying to accomplish is not what the rest of us are trying to accomplish as a general rule.
It's a constant theme of the fucking show.
It's just like you have these people that don't code saying how coders should code.
Dario Amodei the other day saying that this year we're going to have the first one-person software company with a billion dollars revenue or something like that.
And it's just,
I feel like there are some people who should not be allowed to speak as much sometimes, but it's just frustrating and insulting.
And it's, but now that you've got me thinking about it, it does feel like this is an attempt to reset the labor market finally coming for software developers.
And I don't mean finally in a good way.
Right.
I mean, it feels like that. Being in that industry at the moment, it really feels like that.
Is it scary right now?
Um, not for me
because I'm old enough to be semi-retired, right?
But I mean, I've been talking to a lot of folks.
I've been interviewing a bunch of folks that are listeners to my channel, and kind of trying to get a feel for what's going on.
And I've talked to folks that are, you know, like I said, I talked to some folks that were like, you know, I work for a big bank.
They're cramming Copilot down our throat whether we want it or not.
I've talked to some folks that are like, every time I sit down with my boss, I'm thinking that, you know, this is going to be the day that I'm going to find out that my group is getting cut the way the other three groups in the company got cut.
There's a lot of artificial productivity-requirement increases kind of thing.
Which is, like, what?
Just, you know, we expect more tickets closed per two-week period than we've had before, because we're giving you this AI now.
So you ought to be more productive, that kind of thing.
Would a ticket necessarily be something that you just write code for?
Or is it more than just that?
Well, so generally it's more than just that.
But generally, the ticket, that's kind of the way that we track the work that we do
in a lot of organizations, right?
And some tickets are like, I'm building a new thing.
And those are kind of easier to predict.
And some tickets are, this thing isn't behaving right.
Go figure out where the bug is.
And those are a lot harder to predict.
They have these things.
Agile has this thing called a velocity graph, where basically you see how many tickets per person get closed over time, and people want to see the slope of that line change, because they're giving you AI.
Wow. And I'm guessing the people telling you to change that don't know what they're talking about.
That seems to be the case.
Great.
So, I mean, the good news, in theory, right? I don't know to what extent this is going to happen. But in theory, if they keep telling people, you know, that the slope of that line should be changing because you have AI now, then over time, if we see the slope of that line not changing, right? Then theoretically, it will be proof that the AI is not providing the return that people expected.
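The "slope of that line" claim can be made concrete in a few lines. A hedged sketch, with hypothetical sprint numbers: fit a least-squares slope to tickets closed per sprint; a slope near zero means the promised AI productivity boost isn't showing up in the data.

```python
def velocity_slope(tickets_per_sprint):
    """Least-squares slope of tickets closed per sprint.
    A slope near zero means no measurable change in velocity."""
    n = len(tickets_per_sprint)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(tickets_per_sprint) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, tickets_per_sprint))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical sprint data: a flat line, i.e. no productivity change.
velocity_slope([12, 12, 12, 12])  # slope 0.0
```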
Or you're not using it right.
Well, yes, there's always that you're not prompting it right.
That is, that is basically what... yeah.
One of the many reasons I want to have people that actually code on to talk about this stuff is that it's really easy, as a layman myself, and for others, to just be like, oh, but this does replace coding, right?
And it sounds like it really doesn't.
Like it can help.
It can be like a force multiplier to an extent, but even past the initial steps, it just isn't there.
Well, I mean, so the best analogy I've always found to writing code is actually just writing,
right?
I mean, you can get ChatGPT to spit out a few paragraphs for you, right?
But, you know, you end up with, you know, the legal briefs that have the story that's made up or the, you know, just things that aren't connected to reality or stuff that, you know, when people read them, they're like, I mean,
You can tell the difference between the AI-generated slop, you know, like the stupid insert from the Chicago Sun-Times and the Philadelphia Inquirer, about all the things you can do this summer, right? The one with made-up books and all that kind of thing. But even the articles that weren't making stuff up, you read the, you know, this-is-what's-going-to-be-happening-this-summer piece.
This is what the weather is going to be like or whatever.
And you're reading and you're like, this, this, there's no like insight here.
There's no thought here.
There's no, you know, there's nothing in here that I get to the end of this.
I've read the whole thing.
I understand the whole thing, but I don't have anything I can walk away with.
Right.
And AI agents aren't coming along to replace software developers.
You're not scared of Devin?
I am not scared of Devin.
So I, well, actually, I kind of am.
I am scared that Devin is going to make a mess of things, and that more things are going to get hacked, and that's going to end up being worse for everybody on the internet.
How would it do that?
By, like we were talking about before, right?
So when you write code that isn't secure, right?
And you write code that uses a library that's got an old version of a thing that there's a known bug in it, but you don't bother to check to see if there's a fix for that bug, or you don't use best practices when it comes to writing code and that kind of thing.
Or you don't think about the kinds of maintainability issues that you're going to have.
And you do things like you ship out code
in an Internet of Things thing, a light bulb, right?
Or
an Internet Wi-Fi router that cannot be patched over the Internet that has a bug in it.
Right.
And now it's like that thing is going to have a bug in it forever.
And you're going to have to find all of the ones on Earth and turn them off before somebody takes them and hacks them and uses them to attack somebody else from there.
I mean, IoT is a huge problem.
Oh, yeah.
Like the cheaper ones have like the spyware stuff and crypto mining.
But yeah, the ones that have, like, really nasty vulnerabilities, and they have no way of being updated once they leave the factory.
Right.
And it's just as long as they're out there, they're going to be a problem literally for everybody on the internet.
Jesus.
Well, to wrap us up: what can a new engineer, someone new to software development, what can they learn right now?
You've kind of done a video on this, but I think it's a good place to wrap us up.
What can they start learning to actually get ahead, to actually prepare for all of this?
That's a really good question.
So these days, you can't really get hired as an engineer without some ability to talk about doing prompts and using, you know, some kind of AI code editor or that kind of thing.
It's just an expectation of the job now.
Whether it should be or not, it's a different thing.
I mean, like I said before,
there are situations where you tell it what you want and it will type faster than you possibly can.
So, you know, that's not necessarily bad.
You need to understand that.
You need to figure out, well, okay, I'll get back to something else.
You need to figure out basically how to test the thing, right? So how do you make sure that the code it spits out does what you meant it to do? And what I'm expecting is that we're going to spend more time thinking about testing, and thinking more about, you know, trying to find exceptions and that kind of thing, than we have in the past, because the code that's actually being generated is going to be less likely to be quality than it was in the past.
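In practice, "figure out how to test the thing" means treating generated code as untrusted and probing the exception paths. A hedged sketch; the parsing helper is a made-up stand-in for whatever an assistant might emit:

```python
# Suppose an assistant generated this (hypothetical) parsing helper;
# the tests, not the generator, establish that it does what we meant.
def parse_price(text: str) -> float:
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

def test_parse_price():
    # Happy paths.
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("  7 ") == 7.0
    # The exception cases are where generated code most often goes wrong.
    for bad in ("", "free", "$$"):
        try:
            parse_price(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"expected ValueError for {bad!r}")
```

The point is the shape of the test, not this particular function: every generated piece gets its own happy-path checks and its own deliberately hostile inputs.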
The problem is
it has become the case in the programming industry that the things you need to do to get through the interview, to get hired, have very little resemblance to the things that you actually do on the job that you need to actually do a good job.
And so
that's a whole different, we could probably have a whole other podcast episode just about the interviewing problem.
But the, the main thing right now, it's so
right now, the whole hiring thing, and this isn't, I don't think, true for just programmers, but it's especially true for programmers,
is all, you know, bots that customize your resume and write a custom cover letter, and then submit the thing to the bot that's screening the resume and screening the cover letter, right? And getting to the point where you can actually talk to a human is a nightmare right now.
So the whole hiring system is kind of broken.
So the actually getting to the point where you can get hired is a nightmare at the moment.
But the thing that you can do is figure out what kinds of things AI is good at.
And one of the things that AI is pretty good at is things that don't matter as much, right?
So like, you know, AI can pick the layout of a site potentially, right?
And you can have it pick two or three of them.
And you can basically do what's called an A-B test, and you can randomly assign people to it.
You can figure out which one of them performs better, and you can throw the rest of them away.
And even then, at some point, you will probably want the design customized.
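The A/B assignment he describes can be as simple as deterministic bucketing, so each visitor always sees the same variant while the test runs. A sketch, with hypothetical layout names:

```python
import hashlib

# Hypothetical AI-generated layout variants under test.
LAYOUTS = ["layout_a", "layout_b", "layout_c"]

def assign_layout(user_id: str) -> str:
    """Deterministically bucket each visitor into one variant, so the
    same person always sees the same layout for the whole test."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return LAYOUTS[digest[0] % len(LAYOUTS)]
```

You then record a conversion metric per bucket, keep the best-performing layout, and throw the rest away, which is exactly the "well-defined way of checking" he comes back to below.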
Yeah, I mean, but
I think there will be a lot of things where
people can kind of get something that's kind of good enough to get started.
Right.
Right.
And I think that to some extent, this is going to be kind of a boon for the industry in the longer term, where somebody who can't program right now, but who has some idea of kind of what they want, can do, like, a vibe coding thing.
They can validate that the market that they want to try to attack exists, right?
And that people want to use the kind of thing that they built.
And then they can bring in somebody to actually build it right.
You know what I mean?
And those kinds of things wouldn't necessarily have been able to happen
in the complete absence of AI.
So it's not, I don't think, completely useless.
And there are times when as a developer, there are things that we're not good at, like, you know, writing marketing copy and that kind of stuff, that if we're trying to do a project for ourselves, you know, a lot of that stuff we can just outsource to the AI because it's not the thing that keeps the project from actually breaking and getting hacked and that kind of thing, right?
So it's kind of like there's this concept where you need to keep the things that are part of your competitive advantage in-house and everything else you can kind of outsource to somebody else.
The kinds of things you can outsource to somebody else are the kinds of things you could potentially throw an AI at, because they're not core.
But even then, it doesn't seem like that's a ton of things right now, or will be.
Again, it's basically two things.
It's things where the quality of the thing doesn't really matter,
right?
Which every business has those kinds of things, right?
And they're the kinds of things where you can define a metric that you can test the AI against and let it try over and over and over and over and over again until it gets to the point where it's good enough.
Yeah.
Right.
So if your metric is more people click on this button than the button before, right?
Then you can have the AI create a whole bunch of different ways to skin that button.
Right.
And then you can say, okay, so the one that tested best is the one we're going to keep.
That's the thing you can throw an AI at, right?
Because there's no telling how long it's going to take, but you have a well-defined way of checking to see if it's working right or not.
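The button example Carl describes, randomly assign visitors to variants, measure a metric, keep the winner, can be sketched as a tiny simulation. The variant names and click rates here are made up for illustration; a real test would use live traffic and a significance check:

```python
import random

def ab_test(variants, n_visitors=10_000, seed=42):
    """Randomly assign visitors to variants and return the one with the
    highest click-through rate (the 'well-defined way of checking').
    `variants` maps a variant name to its (hypothetical) true click rate."""
    rng = random.Random(seed)
    names = list(variants)
    shown = {name: 0 for name in names}
    clicks = {name: 0 for name in names}
    for _ in range(n_visitors):
        name = rng.choice(names)           # random assignment
        shown[name] += 1
        if rng.random() < variants[name]:  # did this visitor click?
            clicks[name] += 1
    rates = {name: clicks[name] / shown[name] for name in names}
    winner = max(rates, key=rates.get)
    return winner, rates

# Three hypothetical AI-generated button designs with made-up click rates
winner, rates = ab_test({"A": 0.03, "B": 0.06, "C": 0.025})
```

You keep the winner and throw the rest away, exactly because the metric (click-through rate), not anyone's taste, decides.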
Yeah.
I mean, for years, I've had the theory that this industry was a $20 to $25 billion dollar total addressable market pretending to be a trillion dollar one and everything you're saying really suggests it's like you're describing things like platform as a service they like like things that you use in tandem with very real people and intentional ideas
Yeah, I don't see a world in which this is a
"we replace all the humans" thing, you know, the whole "this is going to displace 80% of the white-collar workers in the world."
I just, you know,
the only people that are really going to be replaced anytime soon are people that either weren't doing a great job to start with, or people whose bosses didn't understand what they were doing well enough to realize that it mattered.
And my guess is that there's going to be regret at that point and that at some point they're going to have to bring those people back.
Well, Carl, this has been such a wonderful conversation.
Where can people find you?
I am @InternetOfBugs on YouTube.
It's probably the easiest place to find me.
And then there are links on that channel to point at other things.
And you've been listening to me, Ed Zitron.
You've been listening to Better Offline.
Thank you, everyone, for listening.
And yeah, we'll catch you next week.
Thank you for listening to Better Offline.
The editor and composer of the Better Offline theme song is Mattosowski.
You can check out more of his music and audio projects at mattosowski.com.
M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.
I also really recommend you go to chat.wheresyoured.at to visit the Discord, and go to r/BetterOffline to check out our Reddit.
Thank you so much for listening.
Better Offline is a production of CoolZone Media.
For more from CoolZone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Hi, I'm Morgan Sung, host of Close All Tabs from KQED, where every week we reveal how the online world collides with everyday life.
There was the six-foot cartoon otter who came out from behind a curtain.
It actually really matters that driverless cars are going to mess up in ways that humans wouldn't.
Should I be telling this thing all about my love life?
I think we will see a Twitch stream or a president maybe within our lifetimes.
You can find Close All Tabs wherever you listen to podcasts.
Who knew you could get all your favorite summer fruits and veggies in as fast as an hour with Walmart Express delivery?
Crisp peppers, juicy peaches, crunchy cucumbers and more at the same low prices you'd find in store.
And freshness is guaranteed.
If you don't love our produce, contact us for a full refund.
You're definitely going to need a bigger salad bowl.
Order now in the app.
The Walmart you thought you knew is now new.
Subject to availability; fees and restrictions apply.
Every business has an ambition.
PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.
And your customers can pay all the ways they want with PayPal, Venmo, Pay Later, and all major cards so you can focus on scaling up.
When it's time to get growing, there's one platform for all business, PayPal Open.
Grow today at PayPalOpen.com.
Loan subject to approval in available locations.
Caesar Canine Cuisine asks, why does your dog spin?
Because he's excited I'm home?
Because he wants to play.
He spins because he wants a Caesar warm bowl.
Caesar Warm Bowls are microwavable meals for dogs.
Just set the timer for 10 seconds, and as the bowl spins in the microwave, so will your pup.
Caesar Warm Bowls are made with real chicken as the number one ingredient and fresh veggies with an irresistible aroma that gets dogs excited.
Look for Caesar Warm Bowls in the pet food aisle.
This is an iHeart podcast.