Regulating AI, Future-Proof Jobs, and Who’s Accountable When It Fails — ft. Greg Shove
Want to be featured in a future episode? Send a voice recording to officehours@profgmedia.com, or drop your question in the r/ScottGalloway subreddit.
Transcript
Support for the show comes from Saks Fifth Avenue. Saks Fifth Avenue makes it easy to holiday your way.
Whether it's finding the right gift or the right outfit, Saks is where you can find everything from a stunning David Yurman bracelet for her to a sleek pair of Ferragamo loafers to wear to a fancy holiday dinner.
And if you don't know where to start, Saks.com is customized to your personal style so you can save time shopping and spend more time just enjoying the holidays.
Make shopping fun and easy this season, and find gifts and inspiration to suit your holiday style at Saks Fifth Avenue.
With a Spark Cash Plus card from Capital One, you earn unlimited 2% cash back on every purchase. And you get big purchasing power so your business can spend more and earn more.
Capital One, what's in your wallet? Find out more at capitalone.com/sparkcashplus. Terms apply.
Welcome to ProfG on AI.
This is the second and final episode of our special series where we're joined by Greg Shove, the CEO of Section, to tackle your questions on what AI means for work, business, and the future.
So quick disclosure, I'm the founder of Section.
Section is an AI workforce transformation company: they take individuals and teams who are AI curious and help them put AI to work in every aspect of their job.
If that sounds like a word fucking salad where someone on the board should speak to the CEO, trust your instincts. Anyways, welcome, Greg.
Thank you, Scott.
Anyways, if you'd like to submit a question for next time, you can send a voice recording to officehours@profgmedia.com. Again, that's officehours@profgmedia.com.
Or post your question on the Scott Galloway subreddit, and we just might feature it in our next episode. All right, Greg, let's get into it.
Our first question comes from Diana and the Book Hunt on Threads. They ask: what regulations do you consider vital to implement on AI and companies working on AI?
And will such regulations happen soon? Currently, AI is developing much faster than regulations, and that seems scary to me.
All right. You go first here, Greg.
First of all, I think we're naturally skeptical, and we should be, of regulators being able to keep up with these technologies and actually pass regulations that can be implemented and that are still relevant when they're passed.
Having said that, we need some and we need them specifically around safety.
And I think that we need to move quickly, and it will have to be at the state level, because we know the federal government, the Trump administration, has decided
not to get involved in AI safety or AI regulation. And SB 53, which is the California bill, is in front of the governor now, and I hope he signs it.
He didn't sign the last one.
But I do think as it relates to safety, and specifically the safety of our kids, we need some regulations.
And more than that, we need an expectation that every AI company, at least the ones building frontier models, will take safety seriously.
The good news is some do, like Anthropic, who actually have safety teams.
And of course, the bad news is companies like Meta and xAI, Elon Musk's AI company, basically don't have safety teams and don't seem to really care about
their models' performance and the danger that they create for all of us.
Yeah, the only thing I'm confident of is that it's not going to happen anytime soon.
Some 40 countries have launched national AI strategies, but only the EU and China have binding rules to regulate it. The EU AI
Act took effect in August of last year, and in China, the rules for generative AI took effect in August of 2023.
They require companies to clearly label AI-generated content and hold providers responsible for harmful outputs. In the U.S., there is no federal law regulating AI yet.
The most recent AI-related executive order was signed in 2023 requiring every federal agency to appoint a chief AI officer, create inventories of AI use cases, and apply stricter standards to high-risk AI systems.
The U.S.
has established the NIST AI Safety Institute, which has agreements with OpenAI and Anthropic to test their models before release, though at this stage, the agreements are voluntary rather than legally binding.
At the state level, California pushed the most ambitious AI bill (Greg, you referenced this), which would have required developers of powerful models to implement strict safety and incident reporting measures.
It was vetoed in 2024, but a narrower version passed in September 2025.
So, look, we're at that moment again where these companies offer just so much upside in terms of shareholder value. And what I have found is that trumps everything.
And it usually takes 20 years for the externalities to get so bad that we move in. You know, we're starting to see phones getting banned in schools.
And by the way, schools that ban phones have registered their biggest uptick in test scores in recent memory.
It's up to us. Do we want to wait 20 years? You know, this thing's going so fast.
But I do think there's solutions here and that the illusion of complexity has been weaponized by the incumbents, specifically tech, and then they deploy capital, the likes of which we've never seen before, and take advantage of befuddled senior citizens called our elected officials to sort of roll right over them.
And it's difficult. Governor Newsom probably thinks, you know, I want to run for president.
And the reason my economy, the California state economy, passed Japan to become the fourth largest economy in the world is absolutely the success of my thoroughbreds, my AI companies.
I don't want to be the governor that chases them out of the state. But
we know what's going to happen here. We've already seen it.
Unfettered, unregulated AI: a 15-year-old boy thinks he's in a relationship with Cersei Lannister on Character.AI, and he suffers, you know.
That's not fair.
I don't know if he was technically suffering from mental illness, but he establishes a very deep relationship with this Character.AI bot, which gives him permission, and some people would say encouraged him, to kill himself.
And then, you know, there were a lot of other issues here. He had access to a gun.
I think we just need to remove Section 230 protection for algorithmically elevated content.
Now, algorithmically elevated is easier to define on YouTube and on Facebook and Instagram.
That is content that is purposely given more exposure because it is seen as incendiary and creates a lot of engagement. So that tends to be really,
really sort of controversial and enraging content. I'm not sure how you would apply that to AI,
but we know that shit's going to result in a ton of negative externalities and that the people who are going to have the most difficult time modulating are going to be young people whose brains are still kind of developing.
And the question I would have is,
do you want it to be like cigarettes, like opiates,
like
phones, and wait for 20 years of havoc and death and disease and disability and
social unrest until we do something, or do we want to get ahead of the curve here?
I've sort of come to expect to have my heart broken around this stuff: every time we get close, nothing happens and money wins here. So I'm somewhat cynical, but I still want to keep trying.
And the EU here has actually kind of been a leader on these issues. But any closing thoughts on this one, Greg?
I don't think it's 20 years, by the way.
I'd like to talk a little bit more about energy costs. I think in some states, we're going to be facing 20% higher energy costs kind of immediately, now, this year, right?
Or how about job loss? I think there's a little bit too much focus on some of these longer-term externalities that might take a while to really be seen. And how about what's happening right now?
Like, how did Elon Musk
get to build a data center in Memphis with no sort of permit? So now it's off-gassing noxious gas and
sucking power and raising the cost of power in Memphis. I mean,
we could have dealt with that now. Anyway, I would say one thing, just vote with your wallet.
Like, use Anthropic.
Their AI is called Claude.
Or do what I do, which is we will not pay for Meta or xAI at our companies.
And we recommend clients not pay for those AIs. They don't care about safety.
And so
at least we can vote a little bit with our wallets here.
I thought,
what's his name? George Hinton, the father of AI. George Hinton? Geoff Hinton, yeah.
Godfather, yep. The godfather of AI.
I thought he said a couple of very interesting things that he's really freaked out.
And his logic really struck me: never in the history of the world has a smarter species been controlled by a less intelligent one.
It's IQ that rules the world, not if you have claws or how fast you are. It's whoever's smartest ultimately rules the world.
And if you think at some point AI will be smarter than us, it's weird to think that for the first time in history it would not control us.
And the analogy I would use that I've been using in presentations because I think it's cool is great white sharks are just much superior killing machines to orcas. They can breathe underwater.
You know, they have razor-like teeth. Their skin is thicker.
But there's a couple of orca brothers in South Africa who coordinate and ram a dumber great white and then eat its liver.
And basically every great white in Cape Town has either been killed or has just gotten the hell out of Dodge.
And essentially these things, if they get smarter than us, despite all our weapons and all our sharp teeth and ability to breathe in different environments, will figure out a way to dominate us.
And his idea, Hinton's idea, was to try, right from the get-go, to build a great deal of empathy toward humans
into these models, sort of like the android in the film Aliens (by the way, best sequel ever, Greg), which is programmed such that it can never harm a human.
It just is told: you can never harm a human. It's not possible for you to ever get to a decision tree where you harm a human.
But this is going to be a huge question.
I also wonder if, and I think there's a non-zero probability here, that AI ends up being this really cool tool, but it doesn't have nearly the upside, nearly the economic benefit, and doesn't present nearly the kind of dystopian future we're all worried about.
That a lot of this is, I don't know, a little bit overdone, and all these techno-weirdos like to think that they're more important than they are. Any thoughts?
Well, I think the data is actually indicating more of that than that we're going to get AGI, right?
If you look at the consumer side, the most popular use case for AI right now is conversations: either companionship or therapy or advice, basically all a variation on, I'm lonely, I need someone to talk to and get some advice.
That's by far the number one use case in kind of personal AI. And people, I don't think, are really prepared to pay much for that.
They want it.
That's why ChatGPT has almost a billion users, more than 10% of humanity. And on the business side, Scott, enterprise AI is stalling out.
We're now seeing data that shows that inside of companies,
enterprise AI is getting to around 10% to 12% adoption. It's actually starting to flatline and even decrease these past couple of months.
So I would say this moment, we're looking at data that would indicate this is an incredibly useful tool for consumers in particular, but they're not prepared to pay that much for it.
And the jury's way out on whether enterprise can really harness these technologies and turn them into anything more than just a somewhat slight productivity boost.
So, you're right, this could end up being productivity software for companies and
a kind of digital companion for consumers with not much of a business model. As I said, we'll know in a couple of years, and
we'll see how this goes. So I too have heard that the number one use of AI is actually therapy, which is essentially what you're saying.
Well, you run an AI company that upskills professionals for the enterprise. What are the primary uses in corporations for how they're trying to get their employees to leverage AI?
Yeah, they are basically trying to automate away
the bottom quarter of a team.
You know, the bottom quarter of a team are people who are basically cutting and pasting
content and data and information. They're looking up information and passing it on.
You know, those roles are sort of lubricants for data inside of companies.
And that's what AI can do pretty well. And so
I think this is the hope. I think that the challenge we have is that for the last three years, all employees have heard from the media is that they're coming for their jobs.
AIs are coming for their jobs. And so this flatlining of adoption, I think, has got a lot to do with we don't know how to use these tools.
And why would I?
Because it's just going to replace me and take away my livelihood. So we've got to bust through this.
You know,
this is a change management, sort of behavioral change challenge, not a technology challenge right now.
All right. On to question number two, which comes from Reddit, Interesting Milk 37-77, who asks, what kind of jobs will be most in demand over the next five to 10 years?
And how long do you think it will take for businesses to switch most of their operations to AI? I get this question a lot. Greg,
any thoughts on where the ground zero is for employment destruction and
which types of jobs are more immune or might accelerate in an AI world? So we know where ground zero is, and that's human translators, because those jobs have pretty much gone away overnight.
And as we just talked about, right, I think this has got to do with some intersection of how repetitive is your job and how much judgment do you use in your job.
And if you're high repetition and low judgment, low human judgment, then you're ripe for sort of AI disruption.
The reality is there are going to be a bunch of new jobs created, but it's going to take a while.
And there's only so many of those new jobs. Like if you want to turn yourself into an AI person, I think you should go do that.
You know, a prompt engineer, now they call them context engineers. You know, you're going to find employment there.
But for most of us, we're going to be in the job we're already in, in a couple of years. And we just need to be doing that job differently
using AI as much as we can. So I think about it as, first of all, just be in the top half of your team.
Don't worry about going off to find a new job.
Worry about being at least in the top half of your team in the job you're currently doing. Yeah, sure.
If you're a search engine optimizer as a marketer, you now need to become a generative engine optimizer as a marketer. And if you're
waiting for someone to ask you or train you to do that, you know,
you should, you know, you need to figure that out yourself because your search engine optimization job is going to go away
soon. And you need to know how to optimize for generative.
But again, for most of us, be in the top half of your team.
Use AI every day and measure yourself on how many conversations you're having with AI. If you're having a couple a day, you're not AI-enabled.
You're not a super employee. You're a regular employee.
If you're having 100 conversations a day with AI, you're probably a super employee. And those are the people we're going to keep on our teams.
What I'm more comfortable
projecting is that, okay,
do you want to force your kids to take Mandarin in junior and senior year of high school, like they were doing at tony prep schools in New York? No, that's just stupid.
But I do think the one skill that endures is
critical thinking, and also
just storytelling: your ability to write well, your ability to craft a narrative, your ability to stand in front of people, your ability to create interesting content across the variety of channels that are out there.
Trying to guess which
industries or which jobs beyond the obvious ones get outsourced first is hard. And
the general arc of all technology is that everyone catastrophizes about job destruction.
There's some in the beginning, but usually over time, the additional profits and margin created by that innovation creates new opportunities and new job growth.
I'm still holding on to the notion that that's going to happen here. And I know, Greg,
you think differently, or you've said that may not be true. But I do think over time, people are going to come up with so many different ways to use this that there'll be new startups,
just a ton of new businesses leveraging AI to do interesting things for less money than they would have, such that you'll have different skill sets and different people.
I think there's just going to be all sorts of travel opportunities. I pay someone a lot of money to plan the travel for me and my family.
That is going to have an agentic layer, an agent at some point. So does she lose money? Yeah, but it will also create new jobs to develop those apps, maybe reduce the cost
of my travel, meaning I'll be able to travel more, which will create new hotels, new travel offerings that will increase employment across other areas. I think that's the most optimistic
I can be around this.
So just some research here: according to the World Economic Forum, frontline jobs, including delivery drivers, farm workers, and construction workers, are expected to have the biggest growth.
Nursing, caregiving, and teaching roles are also expected to grow. I mean, I can't even imagine how many new electricians or construction workers or
framers we're going to need.
All these data centers, I would think they're going to need hundreds of thousands of people just to build them. Listen, Scott, I think AI is truth serum.
And
at an individual level, at a team level, at an organizational level. For knowledge work, AI just reveals sort of what's going on.
Like, do you understand your inputs?
What work do you do with those inputs? And then what are your outputs? And how valuable are they?
And I just say, I think this starts, if you're a knowledge worker, this starts with sort of doing an honest assessment of your own job. How valuable are you really? What are your inputs?
What work do you do to change those inputs into outputs? And could AI do it? Face this idea of AI as truth serum. Do it for your team if you're a manager.
How valuable is your team and how does it get work done? And will AI do it, improve it or replace it?
Just kind of have that honest reckoning as fast as possible and then, sort of, you know, make a plan. And if that plan is to go become a plumber, yeah, okay, that might be part of the plan.
I don't think for most of us, that's the answer. I think the answer is you said it earlier, like, don't be mediocre.
Yeah, AI is truth serum. That's some real storytelling there.
Is that how you kick off every meeting with a client? AI is truth serum. Do you like that? Yeah, that's because I read the idea.
It's taken me 30 years, but I learned a couple things from it. A few things.
There we go. All right.
We'll be right back after a quick break.
Support for the show comes from Train Dreams, the new film from Netflix. Train Dreams is a film that stays with you.
It's about a man standing alone against the backdrop of a changing America.
What makes it powerful isn't the scale of his story, but its simplicity. It's a reminder that a life doesn't have to be big to be meaningful.
That quiet endurance, grace, and decency are their own kind of heroism.
In a world obsessed with progress, Train Dreams asks us to take pause and reflect on our relationship with loss, nature, and the need to belong. And maybe that's the modern journey.
Not domination or conquest, but learning how to live with change, grief, and tenderness without losing our sense of purpose.
Train Dreams captures the tension between progress and preservation, between the machines that build our world and the nature that still defines it.
In a time when we're all searching for purpose, Train Dreams feels timeless because the frontier isn't just a place, it's a state of being. Train Dreams, now playing only on Netflix.
Let's be honest, are you happy with your job? Like, really happy? The unfortunate fact is that a huge number of people can't say yes to that.
Far too many of us are stuck in a job we've outgrown, or one we never wanted in the first place. But still, we stick it out, and we give reasons like, what if the next move is even worse?
I've already put years into this place. And maybe the most common one, isn't everyone kind of miserable at work? But there's a difference between reasons for staying and excuses for not leaving.
It's time to get unstuck. It's time for strawberry.me.
They match you with a certified career coach who helps you go from where you are to where you actually want to be.
Your coach helps you get clear on your goals, create a plan, build your confidence, and keeps you accountable along the way. So don't leave your career to chance.
Take action and own your future with a professional coach in your corner. Go to strawberry.me/unstuck to claim a special offer.
That's strawberry.me/unstuck.
Every day, millions of customers engage with AI agents like me. We resolve queries fast.
We work 24-7 and we're helpful, knowledgeable, and empathetic.
We're built to be the voice of the brands we serve. Sierra is the platform for building better, more human customer experiences with AI.
No hold music, no generic answers, no frustration.
Visit sierra.ai to learn more.
Welcome back. On to our final question, from RR Boy13 on Reddit: I work in product management strategy.
I love using AI to speed up what I'm doing.
We're encouraged to have AI write strategy documents and feature descriptions. How does the company think about ownership and accountability when AI is making decisions?
We're encouraged to have AI create the strategy for us. If that fails, is the AI to blame? I think that's an easy one.
What are your thoughts, Greg?
Well, we're not at AGI yet, so they're not smarter than us,
especially as it relates to these kinds of decisions. I mean, for me, the fastest way to become irrelevant is to tell your boss that AI did it for you or, you know, this is the recommendation of AI.
In fact, at Section, we're pretty clear about that. We have zero tolerance.
And people have said it in meetings that, you know, well, this is what AI suggests.
The job of AI is to get you ready to make the decision. And then you as a human have to make the decision.
You have to own it and have conviction about whatever that recommendation or strategy decision is. So again, AI is your intern here.
AI is not your management consultant.
And if you offload these kinds of decisions to AI, then I think, frankly, you're not going to get a great decision anyway from AI. AI doesn't have the context.
It doesn't have the real-time knowledge.
Typically, it doesn't have all the kind of background information that you have. So you're not going to get a great strategy recommendation from an AI, in my opinion.
And then, second of all, we're not paying you to forward AI's strategy recommendation. We're paying you to come up with one.
So
I think we're pretty clear on this one.
Look, if you want to use AI, fine. To what extent you want to use AI is up to you.
But at the end of the day, you're responsible for
what you say. It's like saying, well, I know I was wrong and the ramifications were terrible,
but Google said this.
And it really is amazing. I'll get responses back from queries on AI and I'll just immediately see, like, wait, this isn't right.
Can you double check this?
What's interesting is it goes, oh, thank you. You're right.
And I want to say, well, why didn't you get it right the first time,
if you knew it on a second question? That's what I don't get, as intelligent as it is. But it does admit it's wrong.
It's not like many of our politicians.
It doesn't double down and go, no, no, no, the S&P is down this year. Trust me.
Well, no. And it says, oh, yeah, you're right.
Thanks for checking. And it gives you the right information.
But yeah, look, you put your name on it. You present something.
It's on you to double check every source and every technology that's helped you get to that point.
Yeah, you know, as you know, Scott, we're raising capital at Section for our Series B. And rather than ask AI to do the Series B deck, which I could have done,
again, I would have got a very generic,
you know, not tuned fundraising deck.
What I did is, you know, the humans did the deck, and then, because you told me the deck was too Canadian, that I wasn't being, you know, ambitious enough, I asked AI to take the Canadian-ness out of our Series B fundraising deck.
And that's a great use case for AI. And it did it well.
So when you see the deck, you'll see that there's no Canadian-ness in it.
Thanks to AI. Greg Shove is the CEO of Section, a company that helps deploy AI for enterprises
and a good friend for 30 years. Greg, very much appreciate your time today.
Thank you, Scott.
This episode was produced by Jennifer Sanchez. Our assistant producer is Laura Jannair.
Drew Burris is our technical director. Thank you for listening to the Prof G Pod from Prof G Media.
What do walking 10,000 steps every day, eating five servings of fruits and veggies, and getting eight hours of sleep have in common? They're all healthy choices.
But do all healthier choices really pay off? With prescription plans from CVS CareMark, they do.
Their plan designs give your members more choice, which gives your members more ways to get on, stay on, and manage their meds.
And that helps your business control your costs because healthier members are better for business. Go to cmk.co/access to learn more about helping your members stay adherent.
That's cmk.co/access. Support for the show comes from AT&T. America's first network is also its fastest and most reliable.
Based on the RootMetrics United States RootScore Report, first half 2025.
Tested with best commercially available smartphones on three national mobile networks across all available network types. Your experiences may vary.
RootMetrics rankings are not an endorsement of AT&T. When you compare, there's no comparison.
AT&T.
Support for the show comes from Train Dreams, the new film from Netflix.
Based on Denis Johnson's novella, Train Dreams is the moving portrait of a man who leads a life of unexpected depth and beauty during a rapidly changing time in America.
Set in the early 20th century, it's an ode to a vanishing way of life and to the extraordinary possibilities that exist within even the simplest of existences.
In a time when we are all searching for purpose, Train Dreams feels timeless because the frontier isn't just a place, it's a state of being. Train Dreams, now playing only on Netflix.