What Everyone Gets Wrong About the Future of AI with Nick Foster

54m
Futures designer Nick Foster spent decades helping tech companies create products many of us didn’t even know we wanted. As the head of design at Google X — a.k.a. Alphabet’s “Moonshot Factory,” which is now known simply as “X” — he led teams working on brain-controlled computer interfaces, intelligent robotics, and even neighborhood-level nuclear fusion. He also designed emerging technologies for Apple, Sony, Nokia and Dyson. But in his debut book, “Could, Should, Might, Don’t: How We Think About the Future,” Foster argues for a more measured approach to thinking about big disruptive technology, like AI.

Kara and Nick talk about the pitfalls of the current AI hype cycle, why executives need to think critically about how everyday people are using AI, and how companies can more thoughtfully adopt the technology. They also talk about Foster’s argument that all of us need to take a more “mundane” approach to thinking about AI and the future.

This episode was recorded live at Smartsheet ENGAGE 2025 in Seattle.

Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher.
Learn more about your ad choices. Visit podcastchoices.com/adchoices


Transcript

Speaker 1 I love your pants.

Speaker 2 That's very sweet of you.

Speaker 1 Yeah, I'm loving soft pants these days. Yeah, me too.
We need them.

Speaker 2 We need them.

Speaker 1 Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is On with Kara Swisher, and I'm Kara Swisher.
Today, I'm talking to futures designer Nick Foster.

Speaker 1 He spent decades of his career designing for huge tech companies like Apple, Sony, Nokia, and Dyson.

Speaker 1 Most recently, Foster was the head of design at Google X, Alphabet's R&D division known as the Moonshot Factory.

Speaker 1 He led teams working on brain-controlled computer interfaces, intelligent robotics, even neighborhood-level nuclear fusion. Foster recently wrote his first book.

Speaker 1 It's called Could, Should, Might, Don't.

Speaker 1 And despite his big tech background, or maybe because of it, he argues for a more mundane approach to thinking about the future and how to design products for it.

Speaker 1 He wants all of us to treat transformative technology like AI as something we'll incorporate into our everyday lives rather than something that will radically change the way we live.

Speaker 1 I think Nick is right because it's really important that we stop thinking about AI as this hype machine or about the end of the earth and start to think about what it can do for us and what guardrails we need to put in place.

Speaker 1 All right, let's get into my interview with Nick Foster.

Speaker 1 Our expert question comes from Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania and author of the book Co-Intelligence: Living and Working with AI.

Speaker 1 Today's episode is brought to you by Smartsheet, and my conversation with Nick was recorded in front of a live audience at Smartsheet's Engage conference in Seattle last week.

Speaker 3 Support for this episode comes from Smartsheet, the intelligent work management platform. Today's episode was recorded live at Engage, an annual conference for change makers hosted by Smartsheet.

Speaker 3 I was joined by Nick Foster to take a philosophical dive into how AI is set to transform business and what it might look like in the future.

Speaker 3 But beyond the prediction, Smartsheet is turning big ideas into practical, tangible solutions, helping leaders turn strategic vision into execution at scale.

Speaker 3 See how Smartsheet accelerates smarter outcomes for your business at smartsheet.com/vox.

Speaker 1 So Nick, thanks for coming on.

Speaker 1 We're going to talk about how AI will reshape business, and we'll use the framework that you've developed in your book, Could, Should, Might, Don't: How We Think About the Future, as our starting point.

Speaker 1 So let's just dive in.

Speaker 2 Okay, sounds good.

Speaker 1 You say most people are bad at thinking about the future and our collective inability to imagine what the future will actually be like, quote, might evolve into a definitive and crippling shortcoming in the years ahead.

Speaker 1 Talk about why people are bad at anticipating what's ahead and what makes that problematic.

Speaker 2 I've been in conversations about the future for my whole career. Right.
And people tend to talk about the future in very imbalanced ways.

Speaker 2 And I think that represents a critical shortcoming in the world that we live in, which is changing very quickly. And we're all starting to...

Speaker 2 I have this feeling that the desire to think about the future is high, and our ability to do so is low and underpowered. And I'm interested in trying to close that gap a little bit.

Speaker 1 Right, but why is that?

Speaker 2 Yeah, I think the future is obviously very contextual depending on who you are, where you live, and whether we're talking about a technology or community or society or culture or whatever else.

Speaker 2 Things move at different speeds. But I do think there's this general feeling that things are speeding up.
More change is happening now than perhaps it has done before.

Speaker 2 There's no real metric for that, but it does feel like that.

Speaker 2 So the population of Earth, for example, has doubled since I was born.

Speaker 2 Children could ride in cars with no seatbelts, and gay marriage wasn't legally recognized anywhere on Earth when I was born. So we've undoubtedly seen a lot of change.

Speaker 2 And I think our response is to try and talk more about the future.

Speaker 2 But the reason I think there's this gap, this disparity, this shortcoming, is I think that we tend to find ourselves in one of these four corners that I've described as could, should, might, and don't.

Speaker 1 Yeah. So you've been a futures designer for Google, Apple, Nokia, Dyson, and Sony.

Speaker 1 And you had a hand in helping tech companies imagine and prototype emerging technologies. They're always leaning into the future.

Speaker 1 They have, you know, all the phrases everybody uses: adaptability, flexibility, et cetera, et cetera.

Speaker 1 Talk about what that work is since most of us don't think about the future. And of course, they're very thirsty for the future at tech companies, almost irritatingly so.

Speaker 2 Yes, and just to be clear, the term futures designer is something I think I made up. Oh, you did? Okay.
I think so.

Speaker 2 But yeah, I've been, I trained as an industrial designer back in the day.

Speaker 2 I went to art school. I've been interested in designing things; physical things is where I started out, working for James Dyson and things like that.

Speaker 2 But in the companies I've worked in, I've been explicitly focused on longer-term, more distant, more emergent, nascent technologies.

Speaker 2 And so the sort of genre of design work that I do, I call futures design.

Speaker 2 And so that looks like the sort of prototypical, sketchy, scrappy, V0.0 type of products that you might be familiar with if you've followed this world.

Speaker 2 So, yeah, that's the kind of work that I do, to try and kick something off and see if there's a there there, and what that might look like.

Speaker 1 And when I talk about them being thirsty for it, they really are, and sometimes almost performatively so, right?

Speaker 1 I mean, I recall when I was visiting early Google, they had all different kinds of office setups. They were testing different things, the pod, the sleeping pods, the different

Speaker 1 foods, different ways people ate.

Speaker 1 Is it easier working in a tech company like that? Because most companies have a look and feel,

Speaker 1 a physical look and feel, but in terms of pushing into future things, and can there be too much of it at the same time?

Speaker 2 I think you're right on the performative side of things. And if I'm being really frank, I think for the first part of my career, I did a lot of that stuff.

Speaker 2 Because I thought that's what you were supposed to do. That's what all the magazines showed that I was reading and all of the shows that I watched when talking about and imagining the future.

Speaker 2 This is what I call could futurism, this idea of the sort of bombastic, over-the-top, energetic, sci-fi-inflected work. I think I did quite a bit of that stuff.

Speaker 2 But as I've matured and I've become a bit more comfortable in my own skin and my own reckons,

Speaker 2 I've tried to be a bit more critical about that work. And since leaving Google in 2023,

Speaker 2 I've taken a bit of a reflection on that. And I do think that a lot of companies, they struggle with that question of what is the future.

Speaker 2 And they sort of throw money at it and they do projects, but it doesn't seem to lead anywhere.

Speaker 1 I also recall, at Google. Remember the bicycles? Yes.

Speaker 1 They would have bicycles all over the place, and they were multicolored. There was always something weird happening there. You'd always walk up and you're like, oh God, an elephant, of course. And I see a lot of those Google bicycles repurposed in downtown Oakland. Of course you do, of course. Well, Sergey had a whole formula of people taking them. He said, if I put out a hundred, they'll take 99.6, and then I'll put out another hundred, whatever. But one of the things I said was, your bicycles are getting out of hand.

Speaker 1 And they're like, well, do you like them? I said, no, I'm thinking of driving my car through them because I hate them so much, in some fashion. Anyway, I'm a little hostile at some point.

Speaker 1 But the way we think about the future is shaped by intellectual waves and cycles, as you noticed. And right now, obviously, we're in an applied AI wave, right? And that's the new thing.

Speaker 1 And, you know, they love the latest thing and they move on, whatever the phraseology is. But

Speaker 1 it's hard not to get carried away with it now. And I jokingly say everything has to be about AI, but it does.
Right now, it seems overwhelming. So how do we develop the capacity to recognize

Speaker 1 we're trapped inside of this right now and whether it makes any sense? Because one of the things I think about a lot is that everyone needs to calm down about what's happening.

Speaker 2 Yeah, I think that's right. And again, this comes with a bit of age.
I'm not quite 50, but I'll be 50 in February.

Speaker 2 And I think as I get to reflecting a bit more, I realize I've been through a few of these cycles of hype and excitement, particularly around technology, but around everything else.

Speaker 2 And the example that I use when talking to people about this is, you know, most cities around the world have an 80s night where they sort of wear the 80s neon clothes and they dance ironically to the music and they sort of make fun of it.

Speaker 2 We have to remember that those things sort of made sense at the time.

Speaker 2 But with a bit of hindsight and a bit of time, we start to learn, actually, those bits are a bit silly, those bits are a bit odd, those bits are a bit quirky. I think the same goes for technology.

Speaker 2 If you look back at 80s and 90s and 2000s ideas of the future of technology, they look a bit silly, they look a bit dated.

Speaker 2 I mean, we could go long on this. I think the number of photographs I've seen of, say, early nascent VR or gestural computing mock-ups, for example, it didn't sort of end up like that.

Speaker 2 It doesn't mean to say that people shouldn't have looked and shouldn't have explored, but I guess what I'm saying in regards to AI is a more sort of pragmatic and rational and less hype-driven version of that: trying to think, what are we talking about now, how are we talking, and what language are we using now that might seem silly down the road?

Speaker 1 Well, the hype thing is, I mean, remember the metaverse? No one wanted to live there, no matter what Mark Zuckerberg did. Everybody likes legs. I feel like everybody likes legs. Everybody likes legs, yeah. That was sort of the strangest design choice, to have legless people floating around a universe that was antiseptic. So right now, what do you think is happening in this AI slurry that is problematic, and what isn't?

Speaker 2 That's a very, I mean, that's a deep question. I think the challenge that I've got is, again, without wanting to sort of lean back heavily on the title of my book, I think people are falling into one of those four buckets.

Speaker 1 Right, I'm going to ask you about that in a second.

Speaker 2 Okay, go ahead. But I think that what I'm saying is that it leads to unbalanced and sort of biased versions of the future.

Speaker 2 And I think when looked at in aggregate across all of the different people that are talking about AI as a technology and what it might mean for society and what products we might make, you sort of get an aggregate view.

Speaker 2 But the chances are we're not listening to all of that.

Speaker 2 We're listening to one, or we're listening to two, or we're talking in one way. And I think that means that we just head off down one road way too far and don't think about things in the round. And something like AI, which is a, I prefer machine intelligence, actually, but we can...

Speaker 1 It doesn't trip off the tongue, but go ahead.

Speaker 2 Well, I think there's nothing good about artificial things in the world. No, I think it's a stupid name. Yeah. Rather than letting them be themselves. Right. Yeah.

Speaker 2 But I think that the challenge that we've got is trying to find a balance in the way that we talk about these things, which are, you know, the the technologies are transformative potentially in good ways and bad ways.

Speaker 2 And you talk about benefit and weapon or something like that. Yes, it's either a tool or a weapon.
Yeah, and I think that that definitely exists in this.

Speaker 2 And I think we need to find that level of sort of humility and balance in talking about these things and try to move forward with caution and apprehension, but also excitement and energy.

Speaker 1 It seems to me at this minute, and it's largely fueled by tech billionaires who are trying desperately to control it,

Speaker 1 that they're sort of trying to shove it down everybody's throat. Like, you really want this, you really want this. You don't know it yet, but you really want to eat this.

Speaker 1 And you don't. Like, you actually don't.

Speaker 1 Let's go to your book, this could, should, might, don't.

Speaker 1 There are different mindsets that we fall into when we're talking about a lot of things, but AI and how it's going to impact the future of business.

Speaker 1 And the four words in the title of the book represent these approaches.

Speaker 1 Describe them very quickly. We'll get to one, each one specifically, but first explain how they interact and influence each other.

Speaker 2 Yeah, I'm just trying to identify these habits that I think we all fall into.

Speaker 2 So I've called them out as could, should, might, and don't. Any conversation about the future, I think, tends to fall into one of these pockets.

Speaker 2 And so could is about being very sort of positive and excited about the future. It's often inflected by things like progress and technological progress.

Speaker 2 Should is about having some sort of certainty about the future, either an ideological destination or a prediction. And a lot of people ask me for predictions all the time, which I don't like making.

Speaker 2 Might is about uncertainty about the future, lots of scenarios, lots of potential outcomes.

Speaker 2 And then don't is about focusing on the things where we don't want to end up, where we would like to avoid, or the things we would like to change in the future.

Speaker 1 So we go down one of those paths rather than integrate them fully?

Speaker 2 I think so. And in my career, I've been fortunate.
I'm a designer, went to art school.

Speaker 2 I've been around designers, but fortunately, I've also been around investors and scientists and engineers and

Speaker 2 people from all different ilks.

Speaker 2 And I think that each of the conversations I've had falls quite quickly into one of those pockets, either based on the kind of training that somebody's had or just the nature of the place that we're at.

Speaker 1 So let's apply them to AI. We'll start with could futurism.
You say that it's full of flashy and exciting ideas.

Speaker 1 Founders always fall into this category. And you warn it tends to emphasize hype over honest exploration and veers into empty calories.

Speaker 1 When it comes to AI, let's talk about the line between inspiring people with a positive vision.

Speaker 1 You know, you're going to have a jet pack.

Speaker 1 They still have not delivered the friggin' jet pack, just FYI.

Speaker 1 And,

Speaker 1 you know, it's right after, we're going to have a million Optimus robots serving us. Yeah.
That's not happening. Although just the other day, Elon was talking about a floating car again.

Speaker 2 Okay, amazing.

Speaker 1 Yeah, it's not happening.

Speaker 1 So where's the line between positive vision of the future and misleading them? Because it's exhausting when you deal with founders, when they do this.

Speaker 1 Like with AI, you know, Sam Altman is perfectly fine.

Speaker 1 It's a low bar in tech, but he's perfectly fine.

Speaker 1 But, you know, he's always like, we're going to solve cancer. We're going to do this.
We're going to like, it's, you know, it's a dessert topping. It's a floor wax, if you recall that SNL joke.

Speaker 1 So talk about this could futurism. Is it necessary to be ridiculous and dreamy, even though it's mostly marketing gobbledygook?

Speaker 2 Yeah, I think all four of these things that I've identified have their own benefits. And I think could futurism does, because you do need to sort of explore the potentiality of any new technology.

Speaker 2 Yeah, and getting people excited and motivated and driving them forward and saying these are all the things we could do. I think it's important.
It's good for motivation of a team.

Speaker 2 It's good to get people to see, ah, there's some upsides here.

Speaker 2 And, you know, the canonical examples are cures for cancer and other things that we see as sort of generally regarded big problems in the world.

Speaker 2 I think there's a way that AI and other technologies like that could address that. The challenge that I've got is it does quite quickly tip into fanciful, classical futurist tropes.

Speaker 2 And a lot of it is inflected by science fiction cinema. And a lot of people who meet me and find out what I've done or where I work or whatever, they think that I'm very enamored by science fiction.

Speaker 2 And I don't really watch it. I'm not really a fan of it.
I certainly don't take it as a brief, but I think that puts me in a minority.

Speaker 2 And actually, I think the challenge with that is a misreading of what the purpose of science fiction cinema really is, which is entertainment. Right.

Speaker 2 And this desire to will those MacGuffins, those things, those devices, those experiences into the world often takes could futurism into this realm of, yeah, fantasy and sort of boyhood dream-like places, as opposed to.

Speaker 1 Which they're informed by, by the way. They're very deep into.
I mean, Musk is very deep into sci-fi. People don't realize.

Speaker 2 It also affects the ways that they name things. So the Falcon rockets that Musk has are named after the Millennium Falcon.
Right. You know, Cortana is named after a character in Halo.
Like

Speaker 2 the meeting rooms at Google X were all named after sci-fi robots.

Speaker 1 Yes, I know. You know, so these are sorts of...
Netscape was my favorite. They named rooms after diseases of the skin,

Speaker 1 which I appreciated.

Speaker 2 Yeah, and it might seem like a small, harmless thing or a kind of, but what it does is it sets the tone for what the company...

Speaker 2 is interested in, and the people that work there start to talk in these ways. And like I said, I've been in so many meetings where ideas from science fiction cinema get brought up.

Speaker 2 And I think what it actually represents is

Speaker 2 a crisis of imagination.

Speaker 2 They grab at these little placeholders that they saw in a movie or a TV show and put it in the room. It's like, this is what we're doing.

Speaker 1 I think that does come to pass. I mean, Star Trek informed a lot of these people initially.
And so the communicators, all the stuff, of course, look like what we have now.

Speaker 1 There is a link to taking some of the things, correct?

Speaker 2 I think so, but without wanting to get too into the lore of it, I think

Speaker 2 the challenge that I've got is influence is one thing. Being influenced by something you saw is one thing, but saying that the science fiction predicted those things is just a falsehood.

Speaker 2 And I think that also

Speaker 2 for something like the StarTAC flip phone, everyone said, that's the Star Trek communicator.

Speaker 2 If you actually look into the history of that, mobile communications was around long before things like Star Trek. Maybe the form factor was slightly influential in the development of the StarTAC flip phone. That's hard to say.

Speaker 2 But I think when we sort of muddy those things up, and people are willing to overlook the 100,000 false predictions in science fiction for that one moment when somebody held up a glass rectangle and went, oh, iPad.

Speaker 2 It feels like we have a habit of focusing in on the thing that was closest to the bullseye, but ignoring all of the millions of other things that were never so.

Speaker 1 Right, right. But it's a way to inspire yourself, right?

Speaker 2 Yeah, I think the problem with it as well is it assumes that everyone else has seen the same movies and read the same books.

Speaker 2 And I think that can be exclusionary. It excludes, you know, a lot of people from the conversation

Speaker 2 who aren't along with the memes or along with the language or along with the future.

Speaker 1 Or you don't want that future. You don't want that future.

Speaker 1 So let's talk about should futurism. This is a confident, action-oriented mindset that uses logic and numbers to predict what comes next.

Speaker 1 And you often see it in the C-suite, but you write that corporate strategy can be little more than intuition backed by data. I agree with you here.
So far, ROI and AI is limited, to say the least.

Speaker 1 In fact, we're in a negative territory here, but we're in the middle of this crazy arms race. And so, what approaches should business leaders use to navigate that?

Speaker 2 Yeah, so should futurism is just defined by some form of certainty about the future. And I think that can come from two places.

Speaker 2 One is a sort of ideological position, either from your religions or the state of morality that you'd like to see in the world, and we should point towards that. That's the world we should build.

Speaker 2 I think that's sort of off to the side of what we're really talking about.

Speaker 2 The should futurism that I see played out a lot in business is this notion of observing the world through numeric practices, creating models of how we think the world works, and then the sort of the temptation to project those models out into the future becomes almost irresistible and people do that.

Speaker 2 But the challenge that I have with should futurism is that once that solid line turns to a dotted line, it ceases to be data and

Speaker 2 it becomes a story. Right.
And I call it numeric fiction. You know, my job as a designer is to make things and make movies and make prototypes.

Speaker 2 And I consider those stories about a future that we might produce, or we could produce, or we should, or whatever.

Speaker 2 But I think the confidence that we attribute to numeric fiction and algorithms and data-driven futures is way overblown when placed back against reality.

Speaker 2 So there's this phrase that a lot of people with MBAs like to use, which is skating to where the puck is going to be, the Gretzky quote. And it sort of might work for ice hockey.

Speaker 2 I'm not a big ice hockey fan.

Speaker 2 But it sort of rejects the fact that the world is just an inherently volatile, uncertain, complex, and ambiguous place.

Speaker 2 And a lot of the things that we're trying to measure now, particularly things involving humans, are just naturally chaotic.

Speaker 2 So any kind of dotted line striding confidently out into the future is a story, and we need to treat them a bit more like that.

Speaker 1 Yeah, the phrase I use is frequently wrong, but never in doubt.

Speaker 1 Like they're often wrong kind of stuff, or else just lying. Sometimes it's just flat out lying.

Speaker 1 So when you have a should future, again, it's not bad to imagine, but they base it on certainty, rather than just, we're going to do this because we made this design choice.

Speaker 1 I'm thinking of, you know, Steve Jobs, he took things off of the computer and he took off one of the things that you put in the side and everyone lost their mind.

Speaker 1 And I said, well, why did you do this? He goes, I just didn't like it. Like, it was a great way to make a decision.
He didn't like it.

Speaker 1 Yeah, and he goes, I have no data, I just don't want it there, I just decided. And he goes, people can like it or not, or use my products or not, I don't care. And it was kind of like, oh, all right, that makes sense. But he didn't base it on, we should keep it there because of this and that.

Speaker 2 I think that the over-reliance on data and feedback to make your decisions just becomes crippling really quickly. And it can totally freeze your product, whatever it is, because you sort of do a little test, you get a bit of negative feedback, because people usually react to change in sort of negative ways, whatever it is.

Speaker 2 And so you don't do it. And your product just becomes ossified and stuck in ice.
And sometimes you do need to make that

Speaker 2 idea that veers away from that dotted line of where the data says we're going.

Speaker 1 Or in a should thing, there was another encounter I had one time with Bill Gates, when the iPod came out.

Speaker 1 I was showing it to him as it was starting to gain some traction. And he said, what is it? It's trivial.

Speaker 1 It's a white box with

Speaker 1 a hard drive in it. That's how he described it.
It's a white box with a hard drive in it. It's trivial.
And I said, if it's so easy, why didn't you think of it?

Speaker 2 Yeah. I mean, you're talking to somebody that used to work at Nokia.
Yeah. So, like, I've been around that kind of confidence.

Speaker 2 And our data says this, they'll never, the market for this, the price is too high, the whatever, you know. Yeah.
And sure enough, we know, ask Blockbuster, ask Kodak.

Speaker 2 You know, these are the companies that had very confident dotted lines striding off the bottom.

Speaker 1 Yeah, he did the same thing with Google. What is it? A search box on a white page. I was like, what's your problem with white pages and white boxes and stuff like that? And he said, it's easy.

Speaker 1 And I'm like, again, the kids seem to like it.

Speaker 1 We'll be back in a minute.

Speaker 3 Support for this episode comes from Smartsheet, the intelligent work management platform. If you've attended this year's Engage conference, you're able to see this episode live in front of a crowd.

Speaker 3 But beyond the show, you're also able to check out how Smartsheet is leveraging AI to accelerate the velocity of work.

Speaker 3 Even with all the talk about disrupting the status quo and new technology available to us, business is still business.

Speaker 3 And that means the world's largest enterprises are still looking for a competitive edge, the thing that separates them from the rest.

Speaker 3 But with AI, it's no longer about working harder, it's about working smarter and faster.

Speaker 3 It's one of the reasons business leaders call Smartsheet the intelligent work management platform and why it's trusted by 85% of the Fortune 500.

Speaker 3 Smartsheet can help your organization move faster, adapt with confidence, and drive smarter business outcomes.

Speaker 3 This is the platform that helps you turn big picture strategy into a tangible roadmap for profound impact. See how your business can lead with intelligent work at smartsheet.com/vox.

Speaker 1 Let's talk about might futurism. It presents itself as the reasonable adult in the room.

Speaker 1 It lays out possibilities, calculates the probabilities, and you see it in think tanks, lobbyists, and government agencies.

Speaker 1 When it comes to AI, we'll call for global summits, publish frameworks and voluntary guidelines. Netflix just published some around ethical AI use.

Speaker 1 Where could that go wrong, and what do they miss?

Speaker 2 I mean, the way I define might futurism is sort of the opposite of should futurism, which is looking at the future as a huge landscape of probability and possibility. Sort of from the RAND days of Cold War scenario planning and game theory. Like when we're playing chess: you make a move and you figure out, oh, well, we could do that, and we could do that, and we could do that.

Speaker 2 So it becomes this huge terrain of multiple stories. And I think on a commercial level, it probably represents best in class futures work.

Speaker 2 And if you were to hire a strategic foresight partner, that's the kind of work they would do. Ingest just tons of data and build lots of this might happen, this might happen.

Speaker 2 We think this is more likely, we think this is less likely. So that terrain of mights about the future is what I define here. The challenges with these ways of thinking are many.
The challenges with these ways of thinking are many.

Speaker 2 The first is that no matter how much data you pull in or how many opinions or how many weak signals you draw on, you'll never have it all. Right.

Speaker 2 More importantly than that, your adversaries, particularly in things like the Cold War, they might be deliberately deploying false data to throw your scenarios off course.

Speaker 2 So the competitive nature of future scenario planning becomes a problem. And I think that's sort of where I stand with might futurism.

Speaker 2 I think, like I said, if you were to hire a company to do that kind of work, you'd get, that's the kind of work that you would get.

Speaker 2 But it does sort of carry that same confidence, like, we've seen this whole terrain.

Speaker 2 And it can just get very complex very, very quickly and not lead you to a kind of decision, just lots and lots of lots.

Speaker 1 It could also lock you into constant analysis, right?

Speaker 2 Analysis, paralysis. Analysis, paralysis.

Speaker 2 And again, when you just think, right, we've got the 50 scenarios in front of us, somebody will walk in and say, teens in Korea or something, and you're like, oh, now we need to do another 10.

Speaker 2 So it just becomes this self-perpetuating mess of potential futures.

Speaker 1 It doesn't lead you to a decision.

Speaker 2 And it makes making a decision that much harder.

Speaker 1 So finally, there's don't futurism. You call it, quote, unwelcome guests in communities built on optimism, positivity, and forward momentum.
These are doomers, critics, activists.

Speaker 1 And you quote one of my favorite philosophers, Paul Virilio, who wrote, when you invent the ship, you also invent the shipwreck.

Speaker 1 We definitely need some don't futurism in the AI conversation,

Speaker 1 but it can lead to the dystopian scenarios, which come rather fast and furious. So how do business leaders create space for nuance and meaningful dissent without falling into the catastrophizing trap?

Speaker 2 So the fourth corner that I call don't is that place of looking to the future and the futures that you don't like or you don't want to end up in.

Speaker 2 And again, religion and science fiction cinema do a lot of work in dystopia. Religions have places like hell and purgatory to try and steer you, in the present, away from undesirable futures.

Speaker 2 I think we do have that in AI and in technology writ large, but it tends to be extrinsic. It tends to be from a position of critique.
It doesn't happen enough within organizations.

Speaker 2 And I think we're starting to see a slight shift in that, I think, in some sectors, where people are starting to understand the negative externalities of the things that they're creating.

Speaker 2 And they're starting to understand the second and third order implications of the things they're building.

Speaker 2 But I think the challenge with don't futurism is that if you spend too long in that space, or if it's all you do, it can become crippling. We're seeing this a lot in our young people, who have this term, ambient adolescent apocalypticism: being surrounded entirely by bad or terrifying portrayals of the future, which just becomes crippling.

Speaker 2 And it makes it really hard to sort of be hopeful or excited about that.

Speaker 1 I think, at the same time, tech never has enough of that.

Speaker 1 I mean, it's called consequences, or adult behavior, right? You can say, wow, if I do this, then this. And it's interesting because they tend to lack any of that. One of the things I wrote about in my book was, I was in a meeting about Facebook Live, and they, you know, show it to reporters beforehand. And I said, okay, you know, and it's always like, you know, some fun cat doing something.

Speaker 1 Like some example, in that case, it was the Chewbacca Mom. Remember her?

Speaker 1 And so I said, well, what happens if someone bullies on this? What happens if someone commits suicide?

Speaker 1 What happens if someone beats someone up or uses it or worst case scenario, straps a GoPro on their head and starts shooting people?

Speaker 1 And the whole room looked at me, and one of the tech people was like, you're a bummer.

Speaker 1 And I went, yeah, humanity's a fucking bummer. So they don't do enough of that.

Speaker 2 I think you're right. And I think part of my job when I was working inside big tech companies was to do

Speaker 2 that kind of work with people, or at least try to encourage them to think: if you put this out in the world, people will use it in these ways.

Speaker 2 and you either need to mitigate that or be at least aware of it

Speaker 2 and I think that the strange thing with AI is it feels like the public or the consumers or the the the the the the real world is ahead of the technology in that regard.

Speaker 2 There is a lot of doubt and skepticism and a lot of dubious opinions about what is this and do I actually want it and I feel bad about it.

Speaker 2 And I think that the people producing these technologies have to come up to meet that.

Speaker 2 And again, as part of this sort of balance of thinking about the future from a could, should, might, and don't perspective,

Speaker 2 you need that sort of balance from all four corners.

Speaker 1 Right. So

Speaker 1 these are the four different mindsets that people bring to analyzing the possibilities around AI. Let's get to your expert question.
Every episode, we have an expert send us a question for our guests.

Speaker 1 Let's hear yours.

Speaker 2 Hello, I'm Ethan Mollick, a professor at Wharton and author of the book Co-Intelligence.

Speaker 2 And my big question is that with AI, for the first time, we actually have a truly unimaginable future that seems to be the mainline forecast of the various AI labs.

Speaker 2 They think there'll be a machine smarter than a human at every intellectual task in the next five years, maybe 10 at the outside, which would mean very large changes to how we work, live, do science, and everything else.

Speaker 2 So the question is, how do we start thinking about the potential for an unimaginable future when we have trouble even articulating what that is?

Speaker 2 Yeah, I mean, that's a very hard question to answer, which is obviously why it's been asked. I think the

Speaker 2 challenge that we've got in articulating the future at the moment is that the present is so volatile.

Speaker 2 There's that Gibson quote that well-formed ideas about the future are difficult because we have insufficient now to stand on. And it does feel like that.

Speaker 2 And in the conversations I've been having around this book,

Speaker 2 it does feel like there's this sort of instability around us. And my argument to that is like, well, should we just stop then?

Speaker 2 I think that means we need to do more thinking about the future.

Speaker 2 And it doesn't mean to say we have to get it right, which is why I shy away from things like projections or predictions or prognostications about the future. I just think doing the work is important.

Speaker 2 Sitting people down and finding space in daily life to have that conversation: what are we building, where is it going, what don't we know, what do we need it for?

Speaker 2 What kind of world might we leave behind?

Speaker 2 You know, I think that that level of understanding and that level of respect for for thinking about and talking about and doing work about the future just doesn't exist almost anywhere.

Speaker 1 Does that get worse with the frantic nature of the tech people in terms of spending that they're doing? The talent stuff? It seems demented at this point.

Speaker 1 And I think most regular people feel that or intuitively feel that, but always go, well, they're the rich people, they must know. And I'm always like, they don't know.
They don't actually know.

Speaker 2 I think you've got a strong take on this, that they are rich, but they're not geniuses.

Speaker 2 There's this sort of position. Yeah, of course.

Speaker 1 That's it. Their talent is money.

Speaker 2 Yes, and convincing folks. Yeah.
And I think that that exists, and we have to be aware of that.

Speaker 2 But I think the reason I don't call myself a futurist is because of a lot of the people that do call themselves futurists, or show up on stages at events and give talks.

Speaker 2 There's a lot of snake oil stuff there. Yes.
And I feel uncomfortable with it, so I don't want to be part of that cohort. So I think those people's work needs to get better.

Speaker 2 And when I say better, I mean more balanced, more honest, more open, more rigorous. But I don't think that will happen naturally because the audiences don't push back enough.

Speaker 2 They don't raise their hands enough. They don't maybe feel confident enough to say, wait, hang on a second.
You've been saying this for 10 years. Where is it?

Speaker 2 Or what makes you so sure in that projection?

Speaker 2 The reason I wrote this book: I could have written a book for other futures practitioners, or an academic book.

Speaker 2 I've written a broad appeal book because I'm actually encouraging people to say,

Speaker 2 just a second here. You have a job to raise your hand and say, this isn't good enough.
Tell me more. Right.
Or it doesn't work for me.

Speaker 1 It's meant to overwhelm and

Speaker 1 stupefy you, I think, in a lot of ways.

Speaker 1 Ethan asked about artificial general intelligence or AGI. As a designer, what features do you want to see in an AGI beyond the obvious goal of aligning it with human well-being?

Speaker 1 If AGI really does arrive, what would it need to get right about human values and behavior to actually make our lives better, and not just more automated and distracted?

Speaker 2 Without wanting to get into a taxonomic hole here.

Speaker 2 Also, have you noticed how many people in Silicon Valley are now really interested in philosophy and the nature of humanity and suddenly experts on Descartes and things like this?

Speaker 2 I have. Just fascinating.

Speaker 1 They never took the courses or read the books. Sure.

Speaker 1 That does not stop them.

Speaker 2 So to that point,

Speaker 2 I'm interested in machine intelligence as opposed to artificial intelligence. And I think AGI is a...

Speaker 2 is a sort of weird totem that sits out in the future whose definition is constantly evolving depending on what books people have read that week or what definition or benchmark somebody's given to it.

Speaker 2 So I think it's a sort of fool's errand, is that the right term, to say we're aiming towards that thing? Because the question is, what is that thing? And then what happens the day after?

Speaker 2 And I think that it sort of doesn't take us anywhere useful.

Speaker 2 And I think the idea of separating machine intelligence from human or mammalian intelligence makes sense; they are fundamentally different in many ways.

Speaker 2 You know, human intelligence is bound together with experience and mortality and hormones and beliefs and all of those other things that these systems don't have.

Speaker 2 So I think treating them as their own thing allows them to be able to do that.

Speaker 1 Synthetic.

Speaker 1 Synthetic is the word I would use.

Speaker 2 Synthetic. There we go.

Speaker 1 Synthetic. I think they're going for God.

Speaker 2 They're going. They are.

Speaker 1 And I have a weird theory that a lot of the people doing this right now, most of them, are men. And

Speaker 1 I have this theory that they can't have children like women can, and this is their version of being pregnant. Like this is

Speaker 2 think about it. I'm thinking, I don't want to think about it too much.

Speaker 1 I know, but this is it.

Speaker 2 But I also think, if you say that your business is creating gods, you can register your company as a religion, and therefore you get some good tax breaks, too.

Speaker 1 Correct, exactly. And many of them are becoming more, which is really bizarre in a wild way.

Speaker 1 So, let's talk more specifically about how it affects business, because most people have to deal with it on a daily basis, and overspend on things they don't need, or have it forced on them.

Speaker 1 And often, when business people ask me, what should I do? I go, sit still, don't do anything.

Speaker 1 And a phrase a lot of people in tech use is NPCs, which is from video games: non-player characters.

Speaker 1 You flipped that formulation on its head and said, when we think about the future, we should actually focus on NPCs, regular people, and how they use technology.

Speaker 1 I think NPCs is actually a very good way because

Speaker 1 you all don't count. Like, they're not thinking of you.
So, how should business approach designing AI for the average person, an NPC?

Speaker 2 Yeah, I've got a long lecture I can give on this, which I won't.

Speaker 2 But when I was a junior designer, I did a lot of the bombastic escapist sci-fi-inflected futures work because I thought that's what you were supposed to do.

Speaker 2 Then I would go home to my parents' sort of damp Midlands house, go to the pub with my dad, and just think, what? What was that I was doing in London during the week?

Speaker 2 It doesn't make sense. And I think that it just started to not reflect my own experience of the world.

Speaker 2 And so I leant into this way of thinking that I call the future mundane, which is a way of thinking about either NPCs or background talent or extras.

Speaker 2 I think in the opera, they're called supernumeraries or something. But the people that exist in the background of scenes are the people that are also going to be existing with this technology.

Speaker 2 So I love to think about technology in...

Speaker 2 a sort of a mass adopted, ordinary, everyday part of people's lives. And I think that helps ground the conversations and it helps you lean on things.

Speaker 1 It's not thought of, actually. It's not thought about how people are going to use it.
It's just trying to sell you something before you know what you need it for.

Speaker 2 Yeah.

Speaker 1 And snake oil.

Speaker 2 We have a habit of talking about the future again as some other place occupied by other people, probably more heroic people than us.

Speaker 2 But whenever you see something like a future device or a new gadget or whatever, it's really helpful to start to think about it less about, you know, you always see these videos of somebody using VR to like fix a heart or build a city, but actually people just watch YouTube and play games on them.

Speaker 2 They do. So think about it in somebody's backpack on a bus in a wet city.
You know, Seattle is where we are today.

Speaker 2 Like think about it in those terms and suddenly it grounds everything and it normalizes things and starts to say, actually, I have a hundred questions now and stops it being a fantasy land.

Speaker 1 So does that create a situation for business people feeling enormous pressure?

Speaker 1 to buy, buy, buy before they know what the actual, this is something I say, I'm like, don't buy it until you know what you want to use it for. Use it to see how it works

Speaker 1 and then ask questions. But it's often foisted on you in this inevitability kind of thing.

Speaker 2 Yeah, there's the FOMO side of it, which every company feels like they need to put on their slides, like we're doing AI, we're doing AI.

Speaker 2 And I've seen a hundred people on LinkedIn put AI designer now in their bios and whatever. It feels like you have to do it.

Speaker 2 And it's because we're in a sort of whatever you want to call it, a Cambrian explosion of technology or whatever. It is.

Speaker 2 I think we need to play with these things and we need to explore them in environments that we're comfortable with.

Speaker 2 Not reject them and not say no, but just play with them, start to make sense of them, and say, actually, there's something here that either me or my team or my company or my society or my culture could benefit from.

Speaker 2 And then start to, you know, make the big investments and run towards these technologies.

Speaker 1 The trillions of dollars they're spending here is really breathtaking in many ways.

Speaker 1 Your work shows that humans are messy, and designing intelligent systems that can handle a lot of things a worker does is

Speaker 1 probably very complex. Talk about what gets lost when machines replace people.

Speaker 2 I mean, that's a good one. I think

Speaker 2 there's a really weird thing going on at the moment: that people look at the world as it is and think that's what it is. And then they apply AI to it as some sort of amphetamine.

Speaker 2 Just do what we used to do, but just faster and more efficient. But I don't think that's actually where we're going to end up.

Speaker 2 I think that's where we'll start because people see the world and they have their list of problems that they want to answer or address.

Speaker 2 And they say, oh, AI can do that in half the time or half the cost or twice as often, whatever it matters to you.

Speaker 2 The challenge for me is what new things come after that, what new jobs, what new ways of working, where creativity plays into all of this.

Speaker 2 I think that isn't what's being talked about enough. And I think we need to have that conversation.

Speaker 1 Meaning, we don't know.

Speaker 2 We don't know.

Speaker 1 Right, exactly. I mean, one of the things I always say is, could you have imagined Uber when they invented the internet, when internet started to become commercialized? No.
Right.

Speaker 1 Nobody, or when the iPhone came out, could you have imagined Airbnb? Could you have imagined? Someone did.

Speaker 2 The example that I tend to use is the guitar amplifier, actually, because it takes it away from sort of Silicon Valley tech.

Speaker 2 And when the guitar amplifier was created, it was designed to reproduce the sound of a guitar or a voice, louder. Obviously, what came with that was distortion, which is a bit like the artifacts we see with AI. And a lot of engineers said, that's a bad thing, we need to engineer that out. Right. But a lot of creative people saw that and heard that and said, there's something interesting. And lo and behold, they leaned into it and said, this is actually something new, it's a different thing. Yeah. It wasn't trying to reproduce. And so you end up with grunge and rock and roll and heavy metal, and, you know, that birthed a whole new industry and a whole new art form.

Speaker 2 So I'm less interested in saying, we have this problem, let's throw AI on it, we can do it twice as fast with half as many people.

Speaker 2 I'm more interested in saying the world is a place filled with people with lots of things we're trying to achieve. We have this new set of capabilities.
What new things might it birth?

Speaker 1 It's also because they hate the word friction. They find friction to be offensive.
They're going to make this seamless for you, whether it's AI relationships. We'll give you a seamless relationship.

Speaker 1 We'll give you one that's not a problem. We'll give you answers that are easy, that kind of thing.

Speaker 1 And I think one of the things I'm pushing back on is friction is what makes everything interesting, right? And distortion is friction.

Speaker 2 Yes. Right.

Speaker 1 So

Speaker 1 that's a really big concept because they're constantly pushing. We're going to make it convenient, seamless.
And I am always, anytime they do that, I'm like, you know, sex is friction.

Speaker 1 Thank you.

Speaker 1 But then I think, oh, wait, they now have chatbot girlfriends. So, well, that's the end of that.
So,

Speaker 1 but you can see how easy it is to fall into the frictionless environment.

Speaker 2 Yeah, it comes from a mindset of viewing the world as a series of problems to be solved. And I've always found a problem with the term solution.

Speaker 2 We always have this, like companies name themselves so-and-so solutions or whatever. And I think it just, it's a very reductive way of looking at the world.

Speaker 2 So we can take something like the transition from petrol cars to electric cars, and we say, oh, that's a problem solved.

Speaker 2 Far from it, particularly if you're a nine-year-old boy being forced down a hole in Congo to dig up lithium or cobalt for these batteries.

Speaker 2 A problem and a solution only exists as far as you're willing to look.

Speaker 2 And I think thinking about the depth of implications about all of the things that we're bringing about on the world and the second and the third order implications of the things we're bringing about on the world is where responsible companies start to flourish.

Speaker 2 It's actually understanding that friction and complexity is part of business. You can't just streamline something and say point A to point B.

Speaker 2 There's joy in the mess and there's also responsibility in the mess too.

Speaker 1 So, talk about the unintended consequences of rapid AI adoption, besides getting it wrong. I'm thinking of something like expanding a highway, for example.

Speaker 1 People think more lanes will ease traffic, but research shows it actually makes it worse.

Speaker 1 So, conventional wisdom says AI will lead to job losses and potentially less work for those who have jobs, but maybe the increased capacity leads to greater output, more growth, more jobs.

Speaker 1 I know you have an aversion to making predictions, but

Speaker 1 as you look around, what are the counterintuitive repercussions you can see?

Speaker 2 Yeah, aside from the fact that we don't really know what the labor market will look like in 20 years because of the introduction of these technologies.

Speaker 2 I think one of the mistakes, again, we make is the idea of thinking of these technologies as somehow compressive. Like there's a funny sort of academic term, compressive or donative technologies.

Speaker 2 So people go rowing for fun, even though an outboard motor is more efficient, because it's a donative act.

Speaker 2 I think one of the things that we're struggling with at the moment is thinking of things like AI as a compressive tool.

Speaker 2 And I think it's the same mistakes we made with labor-saving devices in the home in the 30s, 40s, 50s, and 60s.

Speaker 2 And the dream was that it would emancipate people, particularly women, from hard labor in the home.

Speaker 2 And all it did was just increase the expectations on women to do more things with the time that they had. They weren't off playing golf or having dinner parties, as the advert said.

Speaker 2 And I think we need to think about that more. It's like, if we're going to compress all of this stuff down and make it simpler, we don't get the afternoon off.

Speaker 2 The expectation is then that we produce twice as much.

Speaker 1 We'll be back in a minute.

Speaker 3 Support for this show comes from Smartsheet, the intelligent work management platform. It's no surprise there's a lot of talk about AI in business these days.

Speaker 3 I mean, you're listening to an episode right now where I talk about that very subject. We're all scrambling to figure out how AI can help our businesses.

Speaker 3 Just like any tool, you'll need to find the AI that's right for your needs. It's about precision, and that's what Smartsheet is all about.

Speaker 3 If you got to attend the Engage conference where we recorded this episode, then in between the podcast taping and the food, you also got to see Smartsheet unveil an entire suite of new AI-powered capabilities designed for enterprise-level use.

Speaker 3 It's a suite purpose-built to accelerate the velocity of work.

Speaker 3 And when you combine that with the enterprise-grade security and governance Smartsheet is known for, it means you're looking at a solve for scaling responsibly and scaling confidently.

Speaker 3 Smartsheet is bringing people, data, and AI together in a single system of execution, and the results can mean big things for your company.

Speaker 3 Discover how Smartsheet is helping businesses lead with intelligent work at Smartsheet.com slash Box.

Speaker 1 Now, Google, Meta, and Microsoft have reported record spending on AI. It's astonishing the amount of money they're spending.
It's clear, at least, and right now it's supporting the U.S.

Speaker 1 stock market, which is dangerous, to say the least.

Speaker 1 But could and should futurism, AI futurism, is winning. But if we're in an AI bubble and it bursts, how do technologists who believe in the promise create momentum for their ideas?

Speaker 1 Because we've seen multiple AI winters before.

Speaker 1 AI is not new. So if the bubble bursts, does that mean the don't futurist narrative takes over for some time?

Speaker 2 I think it gives it oxygen for sure to say, you know, we were right, we should have been more concerned about these things.

Speaker 2 I think the distribution of this work is something I know you're fond of talking about.

Speaker 2 The distribution of where this work is happening and by whom and what their motivations are is a question that we need to have more broadly.

Speaker 2 And the concentration of influence, because of the amount of money and the amount of resource it takes to build these models, that needs addressing. I don't necessarily know.

Speaker 1 At the beginning of the internet, as I said, it was inexpensive for innovators to create a website, create businesses. And in this case, only the large companies can do it.

Speaker 1 And therefore, it will be far less innovative if a homogeneous group of seven companies, with a very non-diverse group of people, creates everything. We're going to get the same

Speaker 1 chicken dinner from all of them. Like, you're not going to get innovation out of large companies making decisions for the rest of us.
It just seems logical.

Speaker 2 And yes, and they're mostly motivated by achieving the same sorts of goals too. Right.

Speaker 2 And I think that's the challenge: how do we build these systems that, by the way they're built right now, require this huge amount of capital and resources, energy and everything else?

Speaker 2 How do we take that and somehow, I won't say democratize, but offer an alternative path that allows smaller companies, different organizations.

Speaker 1 Because they're all the same. All the LLMs are the same.
I'm sorry. Like, they're building the same thing
together, and just one of them will win. It's sort of like Highlander, the movie.

Speaker 1 There can be only one at some point. Who's got the biggest sword? Who's got the biggest? Well, it's money.
That's what it is. And it could be money well spent and the winner will benefit from it.

Speaker 1 Everybody else will be the loser, which doesn't make sense. At least from a resource perspective, it's idiotic.

Speaker 1 So you write about the importance of resisting extremes. The uncritical hype of could futurism and the paralysis of don't futurism can both lead to dead ends, in different ways, because people will either be disappointed when you don't get the dreamy future, or they'll feel paralyzed, like they can't act.

Speaker 1 So what does a healthy relationship with our AI-enabled future look like for businesses and people here who are making these decisions?

Speaker 2 Yeah, I think finding a way to encourage everyone you meet and all of your teams to have conversations about the longer-term implications and the longer term futures that we're interested in.

Speaker 2 And encouraging people to think about things in the round rather than just getting trapped into one of these corridors of thinking.

Speaker 2 It's very easy to say we should do this because the data says so or we could do this. I'm very excited about it or we don't do this because it's scary.

Speaker 2 But I do think a well-rounded AI strategy, for want of a better term, is one that incorporates all of that and also incorporates the views of everybody in the organization who's building it.

Speaker 2 I do have a problem with this sort of othering of futures work and lab-ifying of thinking about the future. You're here and you make the products; we're over here and we're thinking about the future.

Speaker 2 Because I've been in those environments for 25 years. And I think that there's sometimes a necessity to doing that for secrecy, for privacy, for lack of distraction.

Speaker 2 But I think it does other futures work in an unhealthy way and it stops it being sort of integrated into the way of thinking.

Speaker 2 When you're in this sort of environment, where the world is changing as quickly as it is, with these new technologies that have huge capabilities, I think it is everybody's responsibility, not just one job; it's everyone's responsibility to start thinking in longer-term ways.

Speaker 2 Start thinking beyond the quarterly returns, start thinking beyond the one-year, even two-year, you know, start to run.

Speaker 1 Well, it's hard though when your CEO is like, AI, we've got to do it.

Speaker 2 Sure. Well, AI, we've got to do it.
Fine.

Speaker 2 That's the sort of thing that CEOs say. But then what after that? Let's get into the detail.
Let's talk about what we're actually going to do or excited about or fearful of.

Speaker 1 But it has become very hard to say no, though, correct?

Speaker 2 I mean, I would think it's mostly FOMO at the moment. A CEO feels like they have to stand up and say,

Speaker 2 AI, AI, AI. Because if they don't, the company's like, everyone else that I go to dinner with is talking about AI.
Why aren't we talking about it?

Speaker 2 But I think that represents the experimental phase that we're talking about. I would hope that a smart CEO would say, we should look at this.
It has the potential for big transformation.

Speaker 2 I think those could look like this. Let's explore that.

Speaker 2 But if it turns out not to be true, in five years, just like moving to a different, whatever, server base or whatever it is that we thought was going to be a big thing that ended up not being, we might see a lot of companies going, do you know what?

Speaker 2 LLMs, not for us. It's not really how our business works.
Doesn't make financial sense. Customers don't want it.
We might move away from it.

Speaker 2 So I think at the moment we are in that space where it feels like you have to have some skin in the game and explore and experiment just to see if it's for you.

Speaker 2 But it doesn't mean to say we have to just grab the tiller of the boat and point it entirely over here. Yeah.

Speaker 1 Yeah. I had a bunch of people like, you've got to get into this AI podcast.
And I was like, yeah, I'll pass. And they were like,

Speaker 1 I'm not doing it. And they're like, well, you should try it.
I'm like, yeah.

Speaker 2 I mean, the other option is to wait until it matures and then see if it's worth being part of the experiment.

Speaker 1 Right, exactly. It seems like a giant waste of time and life is too short.
But

Speaker 1 ultimately, you want us to identify a set of narratives we fall into when we talk about the future, so we can think about it more rigorously.

Speaker 1 And that means thinking about all the mundane ways, because the boring stuff is where things actually happen, which is how we'll interact with technology in the future, instead of this sci-fi, you know, we're all going to be wearing shoes that make us float, that kind of thing.

Speaker 1 I always use the example of electricity. Nobody today thinks about electricity.
No one goes, oh, I'm on the electrical grid today as I turn on the light. You don't think of it.

Speaker 1 It becomes, to me, the most successful technologies are invisible.

Speaker 2 Yes.

Speaker 1 How do you, that's how I look at it. So what do you see for AI going forward?

Speaker 2 That sounds remarkably like asking me for a prediction, but we'll go with it.

Speaker 2 I think it's, like you say, when we stop talking about it, when it just becomes part and parcel, when it becomes embedded in the software that we're using, when we're truly honest about telling stories about the future.

Speaker 2 I use the example of ABS. When I was a kid, ABS braking on cars was a big thing.
And cars had little chrome ABS badges on the back, and now it's just sort of standard.

Speaker 2 It's not even listed and we certainly don't see ABS badges. So I think getting past the badging phase of AI and just saying that's how computers work now.

Speaker 2 They work in a slightly different way and it gives you all these other things. I think that we need to be honest about what people really do with their software, which we're not.

Speaker 2 And then once we're honest about it, we need to see it sort of disappearing into the background and stop shouting about it so much and stop branding everything AI.

Speaker 1 Well, their market share numbers depend on it. That's why they're doing it.
Of course.

Speaker 1 Last question. What's the most interesting deployment of AI you've seen of existing technology?

Speaker 2 I mean, I come at this from an art school background. I've mentioned art school twice as some sort of caveat for my answers, but that's where I come from.

Speaker 2 Some friends of mine have been training AI models on images of spoons. Spoons, you know, that we eat with every day.

Speaker 2 They created a huge training set of images of spoons and then asked the AI to create three new spoons. And, you know, that's sort of interesting.

Speaker 2 And they were kind of weird and distorted and asymmetrical and they had weird blobs because of all the distortionary forces I was talking about.

Speaker 2 But what's lovely is they actually made them. They found a silversmith in Italy to produce these spoons. And for me that really epitomizes the thing I was talking about: focusing on what's different about this, rather than just allowing us to make the perfect spoon faster.

Speaker 2 It allows us to think differently, as a creative partner, as a way to stimulate new directions of thought.

Speaker 2 And it's a very simple thing, and I'm sure the market share is zero, but it's interesting to me. And I think that's a little peek into the world.
To rethink something.

Speaker 2 To rethink something and to push back. Computers haven't really pushed back on us very much.
They've been very kind of servile, but now

Speaker 2 they're in this sort of negotiation phase of computing, which I find really interesting. So, yes, a small group of designers who've been exploring.

Speaker 1 Was it a better spoon?

Speaker 2 Define better. I have trouble with that.
Was it a better spoon? Well, you could still drink soup with it.

Speaker 1 Spoons work pretty well.

Speaker 2 Spoons work great. I'm not a huge fan of spoon innovation, but you could still drink soup with it.
Yeah.

Speaker 1 Thank you, Nick, so much for your time, and thank you, everybody.

Speaker 1 Today's show was produced by Christian Castor-Rousselle, Kateri Yoakum, Michelle Aloy, Megan Burney, and Kaylin Lynch. Nishat Kurwa is Vox Media's executive producer of Podcasts.

Speaker 1 Special thanks to Annika Robbins. Our engineers are Fernando Aruda and Rick Kwan, and our theme music is by Trackademics.
If you're already following the show, you're an NPC.

Speaker 1 If not, you are banished to the metaverse without legs. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.

Speaker 1 Thanks for listening to On with Kara Swisher from Podium Media, New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Thursday with more.

Speaker 3 Thank you to Smartsheet for supporting this episode. Today's conversation about how AI will transform business was more than just philosophical.

Speaker 3 It reflected the challenges that IT and business leaders are facing in their day-to-day right now.

Speaker 3 Smartsheet offers a purpose-built platform that unites people, data, and AI. And so you not only get work done, you accelerate the velocity of work itself.
This isn't just about being efficient.

Speaker 3 It's about moving business forward with speed and precision. It's about making sure your team is working smarter.
Find out more at smartsheet.com/slash box. That's smartsheet.com/slash box.