How A 2-Week AI Agent Launch Enabled 67% Productivity Gains

1h 8m
What if your company could launch its first AI agent in just two weeks?


Transcript

The philosophy with AI is assume everything's gonna change.

We're all obsessed with OpEx and headcount reduction and cost savings, but that's not the point of what we're trying to do here with AI, right?

The real value is how do you make your existing people more effective?

You guys launched your first AI agent in less than two weeks.

Since then, you've reduced average handle time by 15%.

You're on track to save $2 million a year from this.

And you've increased sales rep efficiency by 67%.

The things that I would have assumed were going to be impossible earlier in my career, or even a year ago, are now possible in a few clicks.

Leadership teams that are going to thrive are the ones that continue to ask those types of questions.

Why can't I do something different?

CSAT scores are above 90% right now, which is extremely high for this industry.

I think it's simple.

It's stop planning and start doing, right?

And let's be honest.

In six months of planning, the tech is going to change 10 times over, so your planning is going to continue to be wrong, right?

AI is a growth enabler, not a cost-cutting exercise.

So we've got to change that perspective.

Wow, mic drop.

That was amazing.

Even with the best AI and tech in the world, if people don't trust it, you're not going to get the adoption.

So, Rose, we've been in several meetings recently where we have, like, what I want to call an AI brainstorming session.

Totally.

Where we're with a whole team.

And our team's pretty small.

For those listening who don't know, Mission Network's a pretty small team.

So we've got like five or six voices on a call and we're just brainstorming.

Oh, we could use AI to do this.

We could make an AI agent that does that.

Oh, we need to document our entire production workflow and figure out where we can automate things and use AI to solve all these problems.

And we just create like a list of a hundred different ideas and then we don't do any of them.

Yeah.

Or we are all, all of us are like, okay, I'm going to test this.

You test this.

They're going to test this.

And then we all get excited about conflicting tools or we get excited about one particular tool,

but it starts to not feel like it meshes with the

tasks that we would need it to execute.

Yeah.

I mean, there's like this, there's this mix of this problem, right?

Where we've got this huge list of ideas that almost paralyzes us.

And then we've got distraction from all these new tools that are popping up that it's like, oh, go try this, go try that, go try this, go try that.

And I think there's a place in every organization to be testing out new tools, looking for new vendors to work with, all kinds of stuff like that.

But I see so many businesses just get stuck in this freeze mode with the AI conversation because of this brainstorming problem: like, my imagination opens the door to every possible idea.

And yet, that just kind of stops us from doing anything, which is why I loved our conversation with Molly Bodensteiner today, who is the SVP of operations at Engine.

She said something that I'm going to like paint on my wall, which is ruthless prioritization.

So picking one thing and sticking with it ruthlessly, which I know, like, again, Rose, we're a small company.

So like, if this is a problem for us and our tiny little version that we are, I can't imagine how much of a brainstorming imagination problem there is with companies that are 100 people, 1,000 people, 10,000 employees, 50,000 employees.

Like, totally, it's given me so many ideas.

How long that task list probably gets and how overwhelming that would feel, I can't imagine.

Yeah.

And if you're someone who's trying to spearhead, let's say an agentic AI solution and you're like, oh, this is what we're going to do.

Like what Molly did, she started with cancellations.

So, you know, Engine, engine.com, is a work travel company.

They help you book travel for work and monitor it and see how spending is going and all that kind of stuff.

So if you're like, I want to build an agentic AI tool for cancellations, but then you've got a thousand other employees that are like, oh, but you could do this and you could add this on there and you could add that on there.

Molly said, no, shush, we're not doing any of those things yet.

We're just going to do this one tool.

And I feel like that set the tone for like our entire conversation with her about how to think about AI implementation.

And she just gave so many great tips, like the first tip is ruthless prioritization, just really honing in on that one thing that you're going to do.

And their ability to do that meant that they were able to move really quick and they rolled out their first agentic AI solution in just 14 days.

That is so crazy.

I feel like we've had so many conversations that are super cool, super high-level, but they're not as practical.

And it doesn't feel as

it feels more open-ended.

It feels like something that's not really a closed loop.

It feels like this theory, this idea.

But what Molly brings to the table is like

question, answer, problem, solution.

It's really refreshing in terms of the whole AI conversation because it, like you said, it can just feel vast and scary and confusing.

Yeah, I mean, there's, there are so many opportunities.

So it's kind of like, which use case do you start with?

How do you decide which one to say yes to, which one to say no to?

Okay, now we've rolled out this first step.

How soon afterwards am I adding on new use cases?

Okay, oh, we made this agentic AI solution.

Do I build right on top of that one?

Or should I

figure out a modular solution to this where like we're building in parallel to these different solutions rather than everything being combined into one giant solution.

So there's a lot of different ways that you can think about your agentic AI rollout.

And Molly basically was like, no, don't do that.

Don't do this.

Don't do that.

Do this.

Here's my mistake that I made.

Here's the failure that we had.

Here's the success that we had.

These are the results that we had.

And it's just like a true playbook and masterclass in how to think about rolling out AI.

Like, I just, I loved her opinions.

I loved the way she explained everything.

And she was just so clear and precise with what to do, what to act on, what not to act on.

I totally agree.

We talked so much about change management, how not to curate a Frankenstack, which was a term that I learned today.

Yeah, I mean, I think it can be really hard as a business to decide which tool to use, right?

So that Frankenstack word is this idea that like you've got 20 different tools trying to like band-aid solutions together.

So as a company, how do you avoid this problem, especially as tools roll out all the time?

How do you avoid the issue of we've now invested in 25 different AI solutions and now they're all like not working together super effectively?

So she talks a lot about like choosing the right partner for AI implementation, how to identify vendors, when to bet on things, when not to bet on things.

And just ultimately, yeah, like how to think about building a tech stack in today's AI world when things are constantly changing.

There was something she mentioned about how tech is table stakes, right?

So like most of these solutions are 95% similar.

So it's not so much tech that's going to differentiate you as a company.

It's actually the people in your company.

Like, do I trust you as a partner?

Do I want to work with you?

Do I like you?

Do we get along?

Would I grab beer with you after our conversation, right?

Totally.

So I thought that was a really interesting point about how, as technology becomes more similar and AI makes it easier for us to develop solutions, it's not so much the solution itself that I care about.

It's about the people that I get to work with.

Well, Lacey, I feel like we should probably introduce ourselves.

I'm your host, Lacey Peace, and you're listening to Experts of Experience.

And I'm Rose Shocker.

I produce Experts of Experience.

And you're about to tune into an episode with Molly Bodensteiner, the SVP of operations at Engine.

But before you do, please hit that like, that subscribe button, drop me a comment.

Let me know who you want to hear on the show and what questions I should be asking.

Let us know what brands have been impressing you lately.

Like, let us know who's impressing you, what customer experiences you've had recently as a consumer that you think we should shout out.

And without further ado, here is Molly Bodensteiner.

Molly, welcome to Experts of Experience.

Thanks so much, Lacey, for having me today.

Yeah, I'm excited.

Before we get into your background and what Engine is doing and all the amazing things I know we're going to cover today, I want to tease our audience a little bit with a few stats that I think might blow their mind.

The first one is that you guys launched your first AI agent in less than two weeks.

Since then, you've reduced average handle time by 15%.

You're on track to save $2 million a year from this.

And you've increased sales rep efficiency by 67%.

Those are some amazing numbers.

I can't wait to dive into all of that.

But Molly, I want to hear from you really quick, just like an intro to you.

So we've teased our audience on what we're going to cover.

Who are you?

Yeah, absolutely.

No, I'm excited.

And like, as you went through those stats, the thing that just sat in my mind is like, we're just getting started too.

So like we have done this in such a short period of time and just we're continuing to keep innovating and it's been fun.

So I'm Molly Bodensteiner.

I'm the SVP of operations at Engine.

And so really accountable for, you know, all things people, process, data, and technology across this org, which has made this shift in digital transformation and AI so much, so much fun.

Right.

And just

again, just getting started.

That's awesome.

And Molly, you've been at Engine for eight months.

Is that correct?

Less than a year.

Yep.

Less than a year.

What did it look like when you came in?

Because I feel like you basically just came in and hit the ground running to like accomplish all these goals that we just covered.

Yeah.

So, coming in, the thing that I appreciate the most about Engine is like there has been this appetite for digital transformation and innovation.

So, I wasn't coming in and being like the change agent per se, but I think just being fortunate enough to come into a team that was curious about this and then a company that was also supportive of really like starting to push the boundaries of what we could and should be doing from a tech perspective has

made this just fun, right?

Like, and you know, we talked about the agent in 14 days, right?

It's not because, like, we had this pressure from senior leadership to launch an agent in 14 days.

It was really

how do we put the right focus and really start to like test what we can do and see what happens versus I think you see a lot of companies that are like, yeah, we're going to build this huge digital transformation and AI strategy, but they're not actually getting anything done.

So I think just the culture at Engine of being able to like move quick, innovate, learn fast has definitely helped.

And then the other thing is like just where we're at in the evolution, right?

Like basic automation to like today's revenue intelligence platforms and like how much just changes, I'd say like daily in this space, just the things that like I would have assumed were going to be impossible earlier in my career or even a year ago, right?

Are now possible in a few clicks.

That's amazing.

That's amazing.

So speaking of your early career, could you just guide us through a little bit of your background?

I know that you, like, live, breathe, love RevOps, but talk me through how you got to that point.

Yeah.

So I, you know, I've been in RevOps since before it was cool, right?

I'd say I truly started my,

I mean, that needs to be a t-shirt.

Yeah.

I need to get you that t-shirt.

And I laugh because like I go back to like my internship in college, right, was working on Access databases in a sales organization.

And one of the things that I did was figure out like, how do I centralize RFP answers into an Access database so that I don't have to keep populating RFPs manually, right?

So like it really, you know, was revenue technology, but like it came from like just the curiosity of like, how do I make things better?

How do I make things more efficient?

And how do I use like technology to like optimize processes and improve experiences?

And from there, that just opened up more opportunities to move into CRM and, you know, marketing automation platforms. Back then, it was really email service providers, right?

We weren't even doing marketing automation, not to age myself, but like just really gravitated towards like this revenue technology that helps

not only make go-to-market more efficient, but like more importantly, it like enhances the customer experience and the customer journey.

And I think like that's been like the big hook that's got me into operations is like, how can I actually bring again together the people, the process, the data, and the technology to like have direct impact on customer experiences, whether those are internal customers and stakeholders or external ones.

Yeah.

What I love about what you're sharing about your background is you sort of come in and you're like, everything's being done this way, but what if we could change that and do it differently?

And I, I feel like that might be sort of your modality throughout your career is just like, these are the assumptions we're making.

And how can we make it better?

And how can that impact the customer in a better way?

And I feel like the companies that are going to thrive, the leadership teams that are going to thrive are the ones that continue to ask those types of questions.

Like, why do I have to fill in this RFP manually over and over and over again?

Why can't I do something different?

And I think a lot of the innovations we've been seeing in the space lately are the result of that first question of like, why is it this way?

Can't it be that way?

So finding, you know, this company, Engine, that has that culture that's mirrored back to you is really cool.

So could you tell us what is Engine?

If I go to engine.com, what am I going to find for those who've never heard of the company?

Yeah, absolutely.

So Engine is a business travel platform, right?

So we're working with primarily SMB businesses on just helping them book flights, hotels, rental cars for business travel.

And we have over a million travelers right now, and we're supporting companies of all sizes.

But what really sets us apart is like no contracts, right?

No membership fees, no agent assist fees.

So we're really just helping businesses save time and money.

And the fun thing about Engine, since 2018, we've averaged 70% year-over-year growth.

We've scaled to over $2 billion in valuation and are almost at a thousand employees.

And so it's been a really fun just growth experience, not only from the company perspective, but then you're overlaying like this efficiency and digital transformation on top of it.

Yeah.

Oh man, that's so cool.

So kind of a smaller company compared to some of the like large enterprises we've had the opportunity to speak to on the show, but by no means a small business, right?

So how many countries or not countries, I guess countries or states are you guys operating in?

Is it just in the U.S. or are you around the world?

So we're located in the U.S., but we serve clients all over the world.

Yeah.

Wow.

And with that, how many users did you say you guys have?

We have over a million travelers.

Wow.

Okay.

So you mentioned this less than two weeks, this 14-day rollout of your first AI agent.

What did that look like?

When was this?

This was late last year, right?

Yep.

Yeah, late last year.

So towards the end of the year, last year, we rolled out Eva or Ava, depending on who you talk to.

The phonetics changes, but it's really Engine's virtual assistant, right?

And so one of the things that we did was we looked at our customer success side of things and said, you know,

what's our highest leverage opportunity for automation, right?

And what came from the data was hotel cancellations.

We had 300 requests daily, which is a lot of operating overhead, right?

So as we looked at our AI strategy and like what was possible here, we decided to not take the broad strategy and instead like put all of our resources just into a single use case and a single workflow and build off of that.

And so it might not have been like our sexiest problem, right?

Or even the most like technically interesting, but when we look at the volume, right?

300 cases daily, like it's costing our team a ton in productivity as well as like just customer experience that can be further simplified, right?

If I know that I need to cancel a hotel, you know, as a consumer, I'd love to just go talk to a chatbot and get it done versus having to pick up the phone and call, or send an email and wait for a response.

So what we did is we really ran like a two-week sprint on this, right?

And the way that this was possible is, and transparently, we worked with a partner.

So we worked with a trusted Salesforce partner, especially as, you know, Agentforce was still relatively new on the market at that point.

So making sure at that point, very, very new.

I mean, it had been out for what, like a month at that point.

I think they announced it at Dreamforce last year.

Yeah.

Yeah.

We were one of, you know, the first companies to like just be fully operational, like GA public on Agentforce.

And so using a trusted partner was super key.

But like the other part of this is like, we ruthlessly prioritized.

It was, you know, one of those things where as you start seeing what you can do, like it's like, oh, we could do this, we could do this, we could do this, but like kept going back to like, this is scope, this is scope, this is scope to really make sure that we could focus and like build the right thing that then we can build on top of and we have built on top of, but like taking that single use case and getting that out the door.

And I think that that's where a lot of companies I see get stuck.

Not even companies, right?

I even see it internally with the team.

It's like, oh, we get into this, like, we can keep doing and doing and doing and adding and adding and adding, which is great.

But if you don't ever get like the first thing out the door, you're not learning and you're not figuring out what I'm going to talk about is like where it's going to fail.

So you can start to course correct.

Yeah, it's such a problem.

I mean, I see it in our own company even.

We've got a small team and we're doing media production, right?

But I was talking to a gentleman that I would like to work with to help us produce more automations and maybe, you know, some AI agents that can support what we do.

And I was like, here's an idea.

Here's an idea.

Here's an idea.

You know, like, there's a hundred different ways that we could do this.

And, and I do think there's still like a little bit of overhype where, you know, he's like, actually, that idea sounds easy, but it's actually not.

And so like sorting through all these different potential use cases is so difficult, even in our little team, that I can't imagine how difficult that is for a team of almost a thousand employees, like what you guys have, or even like a larger enterprise.

Is how do you take the, kind of pause the brainstorming, or at least make it, I don't know, structured in some way.

So that way you can identify, this is the thing we really want to focus on.

And this is the thing we're going to, you know, what did, what did you, how did you phrase it?

You said ruthlessly go after.

Ruthlessly prioritized, right?

And like, if we hadn't done that, I love that, we'd probably still be building Eva, right?

Like, because we'd keep adding and keep adding and keep adding.

And like, you know, it's one of those things where it's like good versus good enough, right?

And so, like, really focused on like the good enough that we can learn and iterate without sacrificing, obviously, the customer experience.

But like, getting something out the door to learn is going to move us significantly faster than sitting and trying to live in this world of like perfection.

Because realistically, with agents, there's no such thing as perfect.

That's so true. Yeah, I do want to get into that a little bit more. But starting out with this first tool that you guys made, the cancellations, right? Like I as a customer can go in and cancel, and Eva would handle it for me. What was that initial feedback reception like that you got, maybe in December or January, you know, the couple months after it had been launched?

Yeah, absolutely.

So, you know, one of the interesting things is like our actual CSATs from Eva's interactions were higher than like traditional chat, right?

Because like agents were

or customers were actually getting faster response times, right?

And like smoother handoffs.

And, you know, one of the things that I think we

did really well and have done really well is like we designed with the customer experience in mind, but like we're up front with customers.

You're interacting, you're interacting with AI, right?

Like, I mean, let's be honest, as consumers, we already kind of know that anyway in most situations, but like being transparent and like building that trust, I think was things that we learned through those like feedback loops and like the customer knows what to expect.

But the other big thing that we continue to learn is like where limitations are, right?

And like, how do we make sure we're setting the guardrails of like our design to make sure Eva knows like when to escalate, right?

And like building off of that, like we don't let her, right, struggle through complex scenarios.

We make sure she knows like the right handoff.

But then within that handoff, we have all of that context shifting so that the customer doesn't have to repeat themselves and we're not sacrificing the customer experience based on like the limitation of the technology.

And I think that that was a big, a big learning for us as we looked at the CSAT scores and like what was happening.

It's like, is Eva really handing off at the right time and managing this the right way for the experience so that it's frictionless for not only our customers, but also for the agents that pick it up.

Yeah, oh no, that makes total sense.

We spoke about that in the pod a little bit is how important it is that if the AI agent can't handle it, that I don't have to repeat anything that I said and it can be handed off really smoothly to a human agent that can do this for me.

So I love that you guys really early on were solving for that and noticed that as a problem.

And I don't think you would have.

If you weren't moving at the pace that you did and being as ruthless with what you were

focusing your attention on as you had, you may not have gotten that feedback so early on and solved for that so early on.

So when you guys launched, you launched this cancellation tool.

After that, what did you do?

You kind of evaluated, how is this working?

When did you start

layering on, oh, this is the next function we're going to add?

This is the next way we can service our customers.

Yeah, absolutely.

So I think from there, what we started to look at were like, what are the different topics that we want to try to cover?

Right.

So like we've now taken this single topic here, you know, cancellations.

And it's like, okay, how do we, how do we continue to iterate off this?

And like, one of the things I'd say, like, we learned the hard way in this was like, we then had these like 15 separate topics that we wanted to take back to like scale and those pieces.

Like each of them were really focused on like narrow tasks, like a reservation change or like an inquiry on a car rental.

And what we learned is like, that's actually not how our customers think, right?

They don't think in these like isolated steps.

And so by trying to start to roll those out, like Eva really struggled with like that context understanding and like being able to properly route that way.

So, what we did was we then took those 15 steps and we cut those in half.

And that actually changed Eva's performance by just being a lot smarter on the design behind how we talked about things and going into now more of like what are our consumer experiences?

How do they approach these?

How do they talk about those components?

As we started to look at like modifications of reservations or adding, you know, adding

somebody to a room and those pieces of things.

The other items that we started to build in was like contextual awareness of where the user was in the experience, right?

So if you're sitting on the sign-in page, like Eva is looking at different things than if you're sitting on a property, right?

So like understanding where you're at in the experience.

So like if you're on the forgot password page, like, hey, are you having trouble logging in?

And like we've built in more proactive outreach, right, from Eva to try to generate that support, but also looking at like, okay, you're looking at these properties.

What do we know about Lacey, right?

We know Lacey likes five-star properties in New York City, right?

Like, how do we help take your historical shopping behavior and help to apply that to your current search?

Right.

And like helping build that experience

based on the data that we have on you.

And again, I'm going to say a non-creepy way, because I do think you've got to be mindful of like, we all know everyone.

We expect, right, as consumers. Like I expect when I call somewhere, you know my order history, you know what I've done. I don't want to repeat myself and those types of things. And I want you to use it in the right way, but again, not in the creepy way.

Yeah, no, I like that personalization, because it's almost like I will have a different experience of Eva, Ava, however you want to say it, than someone else using Engine, right?

Like, because it's been kind of curated and optimized for me.

And I, I want that.

I mean, I think about my ChatGPT instance whenever I'm communicating with ChatGPT.

And then I look over at my husband's instance and I'm like, the way ChatGPT talks to my husband is completely different than how it talks to me.

And so that's really interesting.

And I love that idea of bringing that level of personalization into these chatbot interactions that you might have with different companies.

So I think that's really cool.

Yeah.

And we, you know, we work with travelers, right?

So like who are the business travelers, but also who are the administrators, right?

And so the experience is very different for the person

who is traveling versus the one who's actually managing kind of like the back end operations for the organization.

So being able to identify, again, based on their role and their usage, like what problem are they likely looking to solve helps us be more proactive in how we engage.
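To make that mechanic a bit more concrete, here is a minimal sketch of how a page-and-role-aware assistant could pick a proactive opener from session context and booking history. The page names, roles, and message text are illustrative assumptions, not Engine's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SessionContext:
    page: str                      # e.g. "forgot_password", "property_detail" (illustrative)
    role: str                      # "traveler" or "admin" (illustrative roles)
    recent_searches: list = field(default_factory=list)

def proactive_opener(ctx: SessionContext) -> str | None:
    """Return a proactive greeting tailored to where the user is, or None to stay quiet."""
    if ctx.page == "forgot_password":
        return "Hey, are you having trouble logging in? I can help reset your password."
    if ctx.page == "property_detail" and ctx.role == "traveler" and ctx.recent_searches:
        last = ctx.recent_searches[-1]
        return f"Looking for something like your last stay ({last})? I can pull similar options."
    if ctx.role == "admin":
        return "Need a hand with travel policies or team bookings?"
    return None  # no proactive outreach on pages where we have nothing useful to say

# Example: a traveler browsing a hotel page with prior search history
print(proactive_opener(SessionContext("property_detail", "traveler", ["5-star, New York City"])))
```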

Yeah, that's great.

So fast forward now, almost, almost a year.

I mean, I'm like skipping ahead a little bit.

We're almost a year in since the first AI agent has been launched.

What are you seeing results wise?

I teased it up a little bit at the beginning of our episode, but tell me more.

I want to hear from you.

Like, what's the reception been like from customers?

What progress has the organization had?

Yeah, just initial results from these different levels of rollout.

Yeah, absolutely.

So, on the Eva side, right, we've seen that we're pacing towards close to $2 million in savings annually.

We've reduced average handle time for cases by 15%.

We've improved productivity of our customer service reps by 10%.

And we are really like CSAT scores are above 90% right now, which is, you know, extremely high for this industry.

Usually benchmarks are around 83%.

So just again, continuing to see great progress.

And as we add additional use cases, like we're seeing more savings, right?

And the part of this is like, it's, you know, I think when people talk about like savings of AI, you naturally assume like you're reducing headcount and you're, you know, cutting jobs.

Like we're not, right?

We look at this as AI plus human components. But, you know, back to not our sexiest problem, but the one that was the most operationally time consuming and expensive and a friction point in the customer experience: it has freed up our actual support team to work on higher value customer inquiries and needs, allowing us to be more thoughtful there versus, again, the 300 cases of just canceling a flight or canceling a hotel. And you think about job satisfaction and those components, like it's just a no-brainer.

Yeah.

Yeah.

I mean, I, you can feel it as a customer too, when you call into a center or you're chatting even online with a human agent, right?

If they're burnt out, if they're not.

And being able to remove some of the stuff that's sort of like clutter with these AI agents does allow that interaction between customer and employee to be so much stronger, so much better.

So have you guys seen the job satisfaction scores go up?

Yeah, absolutely.

Like our internal,

what is it, eNPS, like has continued to improve.

Attrition is lower, right?

People are less likely to leave because they have more fulfillment too.

Wow.

So it's been all in all, like a really good, good opportunity for us.

And again, as you think about career pathing and growth, right?

You're giving more time for cross training.

You're giving more time to do more of that higher value work for the business and for the customer.

Moving past the service use case, we talked a little bit about sales as well.

Could you talk to me about how you're using these tools with your sales reps in addition to your customer service reps?

Absolutely.

So, at the same time, we were kicking off Eva on the support side, we started initiatives on our sales side, specifically focused on our new business, outbound sales.

And so, a couple of the things that we did there were building agents for prospecting.

And then we built some rep coaching tools as well as call prep and follow-up.

And so, this, as we've continued to grow, as we talked about, you know, reaching a thousand employees, we've been increasing our sales headcount, I think, by 400% this year.

So we're bringing a ton of new

sales reps in.

And, you know, one of the big bets we made as a business is like, how can we use AI to improve rep productivity?

And in Q1, we were able to get a 67% lift in rep productivity, being able to really streamline, you know, reps, the amount of time reps were spending on research and prep versus like the time they spent actually

in front of customers.

And a lot of that was driven based on our prospecting agent, which really worked to surface like who are the most relevant accounts that we should be going after?

How are they scored?

How are they prioritized?

Let's source the right personas that we should talk

to from that, and actually serve that directly to the reps with the suggested personalization for outreach.

One of the big things that I feel really passionate about here is, in the world of AI SDRs and all of that, we all see that and we feel that, and we kind of hate it and it's cringy, right? What we're doing is not replacing human judgment. We're augmenting it with better data and faster insights so that reps can be more productive in the decisions that they make and the actions that they take, versus trying to automate the rep's job.

Yeah, we spoke with a woman, it might have been a year ago, on one of our other podcasts, where they're making like an AI person, an AI rep that you can actually talk to and, like how I'm talking to you, you can engage.

And while I don't want to say whether or not like that's the future where we're going, I hesitate with it because it just doesn't feel like super,

I don't know, like there's just like this uncanny valley there of like, I don't know if I actually want to talk to you.

Like you're just AI.

There's like no emotional connection.

It is interesting that there are a lot of solutions focused on this of like, how do we replace reps?

When I just don't think we're there yet.

I think we're still very much in this place of how do we augment people because it's still,

you know, business will always be personal.

It'll always be, people want to buy interaction, and there's a trust aspect to it, right? And, you know, I think so much of that still is trust.

Yeah, 100%. I mean, to that point, right, it's like if this AI tool has been optimized to sell to me, like the whole tool has been designed to sell to me, then I inherently feel like I can't trust what it's telling me, versus a human where they're like, actually, you know, hey, this thing, maybe this isn't the best fit for you.

Or like genuinely, I'd be like, oh, I actually trust you now because you're being super honest with me.

And I don't know that the AI would be programmed to be that honest, right?

So, yeah, I think there is just like this gap there.

But to be able to take these tools and support the sales rep, I think is so amazing and impressive because so much time is just spent in the like recording the call notes or looking up someone's information.

And I know personally, as someone who sat on the side of sales demos, I actually don't want to see the 50 slide deck you have.

I want the, like, 10 slides that are relevant to me.

So, if AI can help you understand what's relevant to me, then I'm all for that.

Absolutely.

And again, I think we expect that, right?

Now, as consumers, like we want that experience, and like we've now elevated our buying experience.

So, as businesses and sales teams, like if you're not, if you're not elevating your selling experience, you're going to

miss that mark, right?

And like, if I can save reps from having to go and like re-input data from a call into CRM because CRM knows because it listened to the call, right?

Like that's time savings where the rep can now spend more meaningful interactions on like writing that right follow-up and making sure we're delivering on like the action items that came

from that call versus like the, I'm going to call it like the back office tasks that, you know, are still valuable and like need to be done, but like can be done with support of technology.

As we've been hiring, you know, all of these, all of these, you know, new, new reps coming in, right?

Like being able to onboard them efficiently, right?

So like being able to use call coaching and like call scoring and like helping streamline just even manager productivity and how we help support new hires and identify again, like gaps earlier on just sets them up for more success, too.

That's what I, that's what I wanted to ask you about was this like coaching, the way that you're using AI for coaching.

Can you talk a little bit more about exactly what that means?

So if someone's on a call, then they get like an AI generated report that's sort of like, hey, here's some feedback of how you did.

Yeah, we've got a couple different mechanisms for coaching.

So one of which is like actually having more like a simulated kind of call experience, right?

So if I'm, if I'm trying to sell to Lacey, like I understand Lacey is, you know, she's my ICP, she's my buyer at this company, like I can go do a simulated role play.

with Lacey, right?

Using CRM and like real data, but getting reps just comfortable and having those conversations, right?

It's kind of, you know, the same way you'd role play with, you know, during an onboarding process with a manager or somebody else, like they're doing it now with AI.

And like then getting a report back of that, right?

Like, hey, you didn't have a value-based opener or you missed, you know, BANT qualification, or like whatever comes back from that.

So we have that more of that like onboarding coaching and like skill refinement.

Then additionally within like our call recording, we have scorecards set up and we have AI scorecards based on, you know, discovery calls to demo calls to kickoffs that are looking at like, are we,

you know, and again, I hate being like, are they following the script?

Cause it's not a script, right?

But like, are they hitting the right key details here?

And then looking at that scoring, but also managers are expected to score calls too, and understanding like, how are managers scoring, right?

Do we have the right expectations across teams and how we're looking at this?

But then also being able to report back to reps, like, hey, here's where you should focus and work.

Like, how are we seeing that increase and measuring that over time too?
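As a rough illustration of what an automated scorecard like that could look like, here is a small sketch that grades a call transcript against a few rubric items. The criteria echo the ones mentioned (value-based opener, BANT qualification, next steps); the keyword heuristics stand in for whatever model actually does the judging and are purely an assumption.

```python
from typing import Dict

# Naive keyword heuristics stand in for the real model judgment so the sketch runs end to end.
CRITERIA: Dict[str, set] = {
    "value_based_opener": {"save you", "help you", "value"},
    "bant_qualification": {"budget", "decision", "timeline", "need"},
    "next_steps": {"follow up", "next step", "calendar"},
}

def score_call(transcript: str) -> Dict[str, object]:
    """Return a pass/fail per rubric item plus an overall percentage."""
    text = transcript.lower()
    results: Dict[str, object] = {
        name: any(k in text for k in keywords) for name, keywords in CRITERIA.items()
    }
    hits = sum(1 for v in results.values() if v)
    results["overall_pct"] = round(100 * hits / len(CRITERIA))
    return results

demo = "Thanks for the time. What's your budget and timeline? I'll follow up Friday."
print(score_call(demo))
```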

I love that role play scenario because I think it would allow people to have so much more practice.

You know, like I can just keep practicing.

I can keep getting better at it.

And I can kind of do it in my own private scenario versus having to like try and fail, try and fail in front of a live person or my manager.

I mean, there's a lot of different applications for that besides just sales.

Like I think about my mom works in contracting and she's trying to get her unlimited warrant.

And part of that process is, you know, going through like an interview process of how would you handle X, Y, Z scenario.

How cool would it be if you had an AI tool that you could just practice those different scenarios with?

Or there's so many educational applications of that as well

with school.

Like I'm becoming a lawyer, how would I handle X, Y, Z scenario versus it just being a written thing that I answer?

It can be an actual back and forth engagement, seeing how I respond in real time.

I really think that's cool.

And I feel like we're just at the tip of the iceberg of how this can be applied across the organization.

And I think about, you know, like SDRs, right?

Like if you're an SDR, like what's one thing you have to be really, really good at?

And it's like rejection, right?

And it's, you know, it's probably, it's blatant. That's so bad to say, but it's like, it's blatant rejection on a phone call, right?

And so like, how do you just start to get really comfortable with that kind of thing?

And it's, you know, a lot of that is like, it's skill-based training and like just going through those scenarios.

And the thing that I like about like more of the AI prompt and like the role play is like, you don't know what you're going to get.

Right? So it's not like, oh, I'm going to keep using the same thing and it's going to be the same outcome each time, and I'm just going to get really good at just the same thing.

It's like we have it where you might get just a total wildcard response or you get somebody who's more aggressive or, you know, you get,

you know, somebody who's going to challenge you.

So like outcomes are always different.

And like that is what they're going to interact with day in and day out.

So just getting them really comfortable there.
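Here is one way the wildcard element of that role play could be sketched: draw a random buyer temperament and objection for each practice call so the rep never rehearses the same conversation twice. The persona fields and objection lines are made up for illustration, not Engine's actual training tool.

```python
import random

# Illustrative persona pool: each simulated buyer gets a temperament and an objection.
TEMPERAMENTS = ["friendly", "skeptical", "rushed", "aggressive", "silent"]
OBJECTIONS = [
    "We already have a travel booking process.",
    "Send me an email, I don't have time for this.",
    "How is this different from what we use today?",
    "We're under a hiring freeze, budget is frozen too.",
]

def new_roleplay_persona(seed: int | None = None) -> dict:
    """Draw a wildcard buyer persona for an SDR practice call."""
    rng = random.Random(seed)
    return {
        "temperament": rng.choice(TEMPERAMENTS),
        "opening_objection": rng.choice(OBJECTIONS),
        "hangs_up_if_no_value_in_seconds": rng.randint(20, 90),
    }

print(new_roleplay_persona())
```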

100%.

Yeah.

Yeah.

I mean, even from like a customer service rep

position, that makes total sense.

Like I'm a customer service agent being able to, someone comes on the phone and they are hot and they are angry.

How do I handle that?

Someone comes on the phone and they're crying, how do I handle that? There are definitely a lot of different applications for it.

So, Molly, you have implemented several different layers now. There are several different use cases of agentic AI in the organization. Your team has, you know, implemented that. Your leadership has said, yes, we want to do this. There are a lot of companies that are not as quick as you guys have been. You know, they might have heard about agentic AI, maybe they were at Dreamforce, right, and they saw the rollout, but they are still not yet in implementation phase. They don't even, they don't have results, right?

What advice do you have to leaders that are going through what you guys went through about a year ago?

Yeah, I mean, I think it's simple.

It's like, stop planning and start doing, right?

Like at the end of the day, like just.

Don't worry about it being perfect and a perfect strategy.

Like worry about just picking one real problem and solving that super well, right?

Like for us, you know, it was setting, it was setting that like 14 day deadline, right?

And like shipping something that works and then just measuring everything and iterating.

Like you're going to learn more if you say like, okay, 30-day deadline, we're going to get through this.

Like it's this use case.

We're going to put a tiger team together and knock it out.

You're going to learn more in 30 days of like true, real implementation than you're going to in six months of planning.

And let's be honest, in six months of planning, the tech is going to change 10 times over, so your planning is going to continue to be wrong.

Right.

And like more things are going to come out, and you're just never going to get anywhere because you're going to be stuck on just trying to get somewhere.

Yeah.

That's so true.

That's so true.

So Molly, getting into the people side of this, you mentioned earlier it's human plus AI.

And I, I love echoing that.

I think it's so true.

What was it like initially with your actual teams that were using this tech and implementing it out the gate like this, you know, in November, December? What were your teams' responses initially? Were people, like, super eager? Was there a mix?

Was there fear?

And now that you guys have sort of, like, rolled this out, how does the team respond?

Yeah, absolutely.

I think, you know, one of my big, I'm laughing because I'm like, I think one of my biggest lessons on this is like, I had,

I had it all wrong on like how adoption would go.

Right.

And I think, you know, we had really, yeah.

And I'll, I'll do it.

Yeah.

So on, on the service side, adoption went really well, right?

Like they're very eager.

Like, I think the value prop of like, wow, I don't have to deal with these tickets anymore.

Like this makes tons of sense.

We, you know, have co-pilot, we rolled out co-pilots, like the, just the efficiency gains in like their day-to-day were just so

just obvious for them that it was like, this is great.

Like, I'm leaning in.

This is awesome.

Right.

On the sales side, though, which like in my, my assumption was like they were going to be the more like, let's go, this is awesome, willing to adapt.

Like, it was very much, I wouldn't say resistance, it was almost like, oh, that's nice, I'm going to keep doing what I'm doing kind of thing, right? And I think part of it was it wasn't optional on the service side, right? Like it was required and it was just built into how the experience works. And on the sales side it was more like it was optional, right? Like, do you want to use these tools? The call recording and all that stuff, and the onboarding stuff, like, yeah, they had to use that. But the prospecting features and some more of those types of co-pilots, like, we weren't saying you have to use this, right?

And like, it's part of your job.

Like, we kind of like let them decide what they wanted to do.

And so, I think a couple of the things that helped kind of like, I'm going to say, like, turn, turn the ship around on this that I learned is you have to like build transparency in this a little bit, right?

And like with the sales team, there were, there were kind of two, twofold lessons that I learned.

One of which is like, we have to show people exactly like how the AI works, right?

To the point that like, it was like actually opening up the prompt and going through the prompt with the sales team, right?

Like, and it wasn't just like, hey, trust me, like, this is what it's doing.

It's like, this is exactly what it's doing.

And it's, you know, one of those things, like, I don't actually want to know how the sausage is made, but like, this is a scenario where they did need to know like how the sausage was made to build their trust and adopting it.

Right.

And so we went through like the reasoning.

They understood the reasoning. We showed them, hey, if it's a bad output and it doesn't have this confidence rating, we don't give it to you, right? We're only giving you the things that we're confident in. And so that was a big piece that we really had to hit home on: how does this work, and what is this doing, and why? And I think some of that was just the trust that we understood what they were doing too, right? So somebody coming in and being like, hey Lacey, I can help you do your job better, but I don't do your job, is a little bit of, like,

a little bit of like, hey, let's try this approach again, too.

So, like, we also had to make it obvious from day one, right?

Like, so 20 minutes learning a new tool versus like saving five minutes, like that adoption curve like had to be a lot faster versus like, if I can save you two hours of research in five minutes, like adoption will be immediate, even if maybe it's not as like good as they think it's going to be, like, their quicker ROI was there.

So, we did do some refactoring of our scoring, and like how quick and how easy, and like where we had it run automatically versus like they could manually run it.

So like lesson one was like showing, not telling, and just continuing there.

Lesson two was find people with influence and let them be your advocates for this.

You know, I do not have it, right?

I am not an influencer.

And like, that is very clear of a weakness that I fully own.

I am not going to rally the troops.

And so it was really like, look for who that like salesperson is that is that influential person who can help.

And the good news is most salespeople are influential because they're sellers, and get them to really help to advocate for you.

I think one of the things that we fail at is we tend to pick the top seller as the like person that you put as that influence.

They're going to be your worst adopters because they already know what works, and what they're doing is working for them, so getting them to try to change, they will be your resistance people more often than not. So don't go directly to the top, right? Look in that middle. It's usually that person who people naturally trust and go to for advice. They have genuine curiosity, they're willing to experiment, right? But they tend to be your mid-level performers who just want to get better too, not the ones that are your top performers. They're not going to rock their boat because they are doing okay right now.

Um, and so I think for us, it was like find those people, get them in early, right?

Give them some level of incentive to get in early.

Like, this is one of the things that I think is like really important: is like, hey, we're going to try something.

It may or may not work.

And if it doesn't work, like, we're not going to punish you for trying, right?

So, like, figure out if there's some quota relief or something like that you can do there, because like at the end of the day, like, you don't know, right?

Like, especially like, well, you're moving quick and trying to find it out.

So I think like those were, those were really big things to drive for.

Other things that like we've now done is like we shout out the wins a lot more too.

Like, okay, great, like we built this efficiency, but like we're still talking about like, hey, this was like an, you know, an AI sourced account that closed, like way to go.

And like, look what they learned.

And like, look at like, you know, when we talk about like how we improve velocity and like where that's demonstrated, like we know based on like the productivity savings, like it's there, but like we're continuously iterating that to keep driving adoption.

Yeah.

Yeah.

It makes a lot of sense too that you're sharing sort of the story of the win, not just the stat.

Because if I hear X number of percent

productivity gained, like as an employee, I might be like, cool, what does that mean?

But to actually have a place where I'm seeing, like, this account, oh, I know that logo, closed from this thing.

Like, that is the only way that it's going to like concretely sit in people's brains that this actually works and this is doing something that I might be interested in and a tool that I actually want to work into my workflows.

Um, I love what you shared about

sellers being like way more ultra-critical of like the tool and wanting to understand it.

Uh,

it makes sense.

Like it makes, it makes sense, though, that they'd be like, no, like I'm about to get on the phone with someone.

I need to trust that like this is actually the thing they care about because I don't want to flop on the phone and be like, and say something that's completely wrong.

So it makes sense that they want the under the hood look at how this research is being done and what the prompt is.

But I imagine that that's not, that's not as easy, though, as like, as just showing it to them, right?

Because you could pull up an AI agent right now and you could share it with me.

And if I don't have the knowledge and understanding of how it works, I would still not trust it because I'd be like, I still don't get what you're doing.

So was there like a level of education you had to do besides just showing under the hood how it worked, but actually like, here's what an AI agent is.

Yep.

Here's how prompt engineering works.

Like, what did that education process look like for your team?

Yeah, absolutely.

And I laugh and smile because, like, it is continuous education, right?

And like we are

constantly doing this too, because like things keep refining and keep changing and like evolving here.

And, you know, it's like, what is, you know, you start with what is a prompt.

And I chuckled when we were talking about this, more of the fact that, like, one of the things our team does as part of our sales motion is they go to a company's website, right?

And they're looking for like key indicators that this company might travel.

And so that is like what they're taught to do as they're prospecting.

Well, we have an agent that like goes and does that automatically, right?

So like we know what the keywords are.

We've sat in the motion.

Like we have the agent go look at the company website and give back like a score.

And then the deductive reasoning of like, why do we think company ABC travels?

Well, here's what we see in the job description.

Here's what we see like on social media about them hosting these events, right?

And like, it's very prescriptive, like coming back.
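A minimal sketch of that website check, assuming a simple keyword-and-weight approach: scan page text for travel signals and return a score along with the evidence behind it. The signal list, weights, and page labels are illustrative assumptions, not Engine's actual prompt or agent.

```python
import re

# Weighted travel signals to look for in a company's public pages (illustrative only).
TRAVEL_SIGNALS = {
    r"travel(s)? to (client|customer) sites": 3,
    r"field (sales|service|technician)": 2,
    r"multiple (offices|locations)": 2,
    r"(conference|trade show|on-?site)": 1,
}

def score_company_pages(pages: dict[str, str]) -> dict:
    """pages maps a label (e.g. 'careers', 'about') to its plain text."""
    score, evidence = 0, []
    for label, text in pages.items():
        for pattern, weight in TRAVEL_SIGNALS.items():
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                score += weight
                evidence.append(f"{label}: matched '{match.group(0)}'")
    return {"travel_score": score, "evidence": evidence}

example = {"careers": "Field sales reps travel to client sites weekly.",
           "about": "We host an annual customer conference."}
print(score_company_pages(example))
```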

And so I still laugh because like it was,

let's put the person against the robot type of thing because it was like, they were still going to the website.

And I was like, why?

Why are you still going to the website?

Like, what don't you trust, you know, like about what's coming back from the AI?

And it was like, well, I just, I need to do it myself, you you know, and it was like, we had to get to the point of like, do it yourself and come back.

And like, let's look at how long did it take for you to go do this?

Okay.

It took you seven minutes.

Right.

Okay.

We had the AI do this.

Like, here's how much more it actually captured in 30 seconds versus like the seven minutes that you spent here.

And so like, we had to kind of like do more of that.

Like, hey, you're going to get to the same outcome faster doing this.

But then on the balance, like the part that like will get you is like, it's not always going to be perfect, right?

So like we did still have to set like those expectations.

Again, like, human reasoning, like, you've got to like actually read through before, like, give it a sniff test a little bit too, right?

Especially as you pointed out, like, they're going to get on the phone and talk to somebody.

So, like, they want some level of accuracy.

And that's where we've like built in more feedback loops.

And, like, if AI comes back and like hallucinates, right?

Like, we want to have that redundancy tracking and like QA on top of that too, to just make sure we're giving the best possible results.

Cause I would rather give no result for something than give something that was not close to accurate.

When we talk about quality control, I think there's going to be more conversations about this in the future.

Like, I just, I think it's becoming a huge component of these AI rollouts is that there is always going to be a level of risk that you accept.

And that's just true with humans.

Like, I mess up.

Everyone messes up.

You know, sales rep goes to a website, makes an assumption, ends up being wrong.

Like, but we, we are less forgiving of technology than we are of human mistakes.

And so this like quality control conversation, I think, is one that's, that is important because there is, there has to be a level of risk tolerance.

You have to accept that there will be some mistakes, but it's like each company needs to decide within themselves, what is that level?

What's the risk I'm willing to take?

So how did you, how did you guys kind of determine that?

Like, is there, you said there's like an accuracy scoring.

How do you determine, like, how do you even come up with like the accuracy algorithm of whether or not this is correct or not?

I mean, you, I'm going to say like, you don't, right?

Like you come up with something and you monitor it and you manage it, right?

Like you are still deciding things.

And I think a couple of the things that we, I'd say, did really well when it came to like quality control, especially like a lot of these are customer facing, right?

So like back to risk tolerance, like my risk tolerance is a lot lower on something that's like going in front of a customer than it is potentially on something that's like back end that like I know I have a little more like luxury and flexibility with.

But like quality control like has to start in design, right?

It can't be at the point of inspection.

Like, so when we thought about this, it was like, what are the guardrails we're putting in our workflows, right?

Like, especially like with Eva, back to her, like she can only access certain data, right?

She can only make certain types of changes.

Like she has to escalate when confidence drops below a threshold.

Like she doesn't sit in analysis paralysis.

Like we built that design

with that quality control in mind versus like going out and then being like, ah, crap, we got to like go back and like figure out when she's wrong, going rogue, right?
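The guardrail pattern she is describing can be sketched roughly like this: the agent only acts inside an allow-list of actions, and anything below a confidence threshold is escalated to a human with the conversation context attached so the customer never repeats themselves. The action names, threshold value, and handoff fields are assumptions for illustration.

```python
from dataclasses import dataclass

# Illustrative allow-list and threshold; a real system would define these per workflow.
ALLOWED_ACTIONS = {"cancel_hotel", "lookup_reservation"}
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class AgentDecision:
    action: str
    confidence: float
    conversation_summary: str

def route(decision: AgentDecision) -> dict:
    if decision.action not in ALLOWED_ACTIONS or decision.confidence < CONFIDENCE_THRESHOLD:
        # Hand off to a human, carrying context so the customer never repeats themselves.
        return {"handled_by": "human_agent",
                "handoff_context": decision.conversation_summary,
                "reason": "outside allowed scope or low confidence"}
    return {"handled_by": "ai_agent", "action": decision.action}

print(route(AgentDecision("modify_payment_method", 0.93, "Customer wants to change card on file")))
print(route(AgentDecision("cancel_hotel", 0.95, "Cancel tonight's stay, booking #123")))
```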

And then the other pieces of this is like the first hundred interactions, right, had manual human review.

We're on a 10% ongoing.

We actually have, I'm going to laugh when I say this, but like we have AI checking AI on our on our pieces to help like just drive like more of those like checks too.

But like we're still doing manual review.

And I hope that we continue to always do manual review.

And like maybe that'll drop lower than 10% when we have more of that confidence.

But like, again, tech is always changing.

Things are always changing.

Like we want that alerting, right?

And that's where like the CSAT and like these metrics are so important because they also give a signal if something starts to seem off as well.

So we can catch issues faster.
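A rough sketch of what sampled ongoing review plus a simple CSAT alert could look like; the 10% rate echoes the conversation, while the CSAT floor, the queue, and the alert hook are hypothetical stand-ins.

```python
import random

REVIEW_SAMPLE_RATE = 0.10   # ~10% ongoing manual review, per the conversation
CSAT_ALERT_FLOOR = 0.90     # hypothetical floor: alert if rolling CSAT dips below it

qa_queue: list[str] = []    # stand-in for a real review queue or ticketing system


def maybe_queue_for_review(interaction_id: str) -> bool:
    """Randomly sample interactions into a human QA queue."""
    if random.random() < REVIEW_SAMPLE_RATE:
        qa_queue.append(interaction_id)
        return True
    return False


def check_csat(rolling_csat: float) -> None:
    """Use CSAT as an early-warning signal that something seems off."""
    if rolling_csat < CSAT_ALERT_FLOOR:
        # Stand-in for a real alert (Slack message, pager, dashboard flag).
        print(f"ALERT: rolling CSAT dropped to {rolling_csat:.0%}")
```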

And, you know, the other thing is like the perfect automation doesn't exist, right?

Like, I think just being okay with that, and instead optimizing for consistently good outcomes versus perfect ones, is another really important thing to keep top of mind as you're doing this design. And sometimes that means AI is going to err on the side of caution and escalate more than necessary, but for us that's better than a bad customer experience, because, again, the bad customer experience is going to cost us more.

Yeah, oh, for sure, for sure. That makes a lot of sense. Part of building a lot of these AI agents and these tools that you guys as companies are investing in is actually selecting companies to work with, partners that do this right, but also what program you're going to use, right? Like, are you going to use ChatGPT as the basis of this? Are you going to use Claude? Like, what company are you going to go with that's going to help you develop out your LLMs, right?

Or I guess you could go the extra step and actually make your own LLM, which I know some companies have done, which is just like a huge undertaking and not something I would recommend to smaller organizations for sure.

So when I think about this, though, and I'm like, okay, cool, we've made these AI agents, we've made Ava, Eva, we've started to invest in all this stuff, but now there's like this underlying structure that is not your company's, right?

It's a ChatGPT, it's a Claude, it's a whatever, right?

Or maybe it's Google.

I get concerned about that because I think about, like, oh, well, what if ChatGPT, quote unquote, fails?

I don't think it's going to fail, but like, what if something underneath there doesn't end up working or they're not moving as fast in innovation as a different company?

And now I built so many AI agents that are based off this tool.

Like what happens if this doesn't work out?

And so I do get concerned about that.

I mean, this is something though that's not, it's not new.

Like this is a technology problem.

Anytime you invest in new technology, it's possible that the underlying framework might break or not work anymore.

But how are you kind of thinking about the risk associated with investing in just like one algorithm that's going to be that source of truth for you moving forward?

It's a great question, right?

And I, you know, I have a paid personal subscription to Claude as well as ChatGPT.

And I couldn't tell you exactly why I use one over the other when I'm doing something.

But I have some preference for one versus the other based on what I'm doing.

And like, I don't know.

I couldn't even tell you what that is.

But I think to your question, like platform lock-in is like a true concern, right?

Like, there's a bigger risk in trying to hedge too much and ending up with a Frankenstack, right?

And so from the engine side, we deliberately chose to like go deeper with fewer platforms versus like being spread across too many.

But with that, you know, we use Salesforce and AgentForce and like we're betting on one ecosystem, right?

So like that's the ecosystem, not just the AI capabilities.

But then we've got, you know, our cloud and our open AI, but knowing like if they change, let's say their pricing tomorrow, we can swap in and out our models the way we need to.

And I think one of like the core things in our design here is like we're not having to rebuild like all our integrations and like all our workflows because of how we took more of like that.

ecosystem approach I'm building on top of, you know, Salesforce, but like we have these abstraction layers now that we can leverage, right?

So we don't hard code specific AI models like into our workflows.

We have more of these interfaces that sit underneath and support that underlying technology. It's definitely more work up front, right? And it was a conscious design decision that we made to give us that flexibility later on, because otherwise, if we have to start over, we essentially have to rebuild, right? And so when we think about modularization and building for scale, that was super important to us. And, you know, I'll be honest, let's talk about the hard lessons we learned, right? We learned that lesson the hard way with Eva initially.

We did everything as like one monolithic agent.

And then when we wanted to start adding those new capabilities, right, we had to do a little bit of rebuilding.

So now we've got modular components that we can mix and match.

The nice part about that is, as we expand Eva outside of just, you know, our main travel platform and actually look at more of our back end for our partners and suppliers, we can start to mix and match those capabilities versus starting over from scratch, which definitely helps too.
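One possible reading of that monolithic-versus-modular point, sketched in Python: if each capability is a small, self-contained component, a new agent is just a different combination of existing pieces. The capability names and intents below are invented for illustration.

```python
from typing import Callable, Dict


# Each capability is a small, independently testable function.
def lookup_booking(query: str) -> str:
    return f"booking details for: {query}"


def process_refund(query: str) -> str:
    return f"refund initiated for: {query}"


def answer_policy_question(query: str) -> str:
    return f"policy answer for: {query}"


def build_agent(capabilities: Dict[str, Callable[[str], str]]):
    """Compose an agent from whichever capability modules it needs."""
    def agent(intent: str, query: str) -> str:
        handler = capabilities.get(intent)
        return handler(query) if handler else "escalate to human"
    return agent


# Customer-facing travel agent vs. a back-end supplier agent: same parts, different mix.
travel_agent = build_agent({"booking": lookup_booking, "refund": process_refund})
supplier_agent = build_agent({"policy": answer_policy_question, "booking": lookup_booking})

print(travel_agent("refund", "booking #4821"))
```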

But like, I think the philosophy with AI is like, assume everything's going to change and like go in probably with that assumption, right?

Like models are going to get better.

New vendors are going to come out.

Like business, I mean, business requirements are going to change.

So like make sure your back end can adapt and like think about like how you build more like plug and play.

And like, you know, we've invested in like APIs and like data pipelines that have like more openings to connect to like different platforms and systems versus like locking us in as well.

So that, again, we don't have to touch core infrastructure when we want to make these adjustments.
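To make the abstraction-layer, plug-and-play idea concrete, one minimal pattern is to have workflows call a narrow interface while vendor-specific adapters live behind it, so swapping providers means swapping an adapter rather than rewriting integrations. The class names here are illustrative, and the adapter bodies are stubs where real SDK calls would go.

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Narrow interface the workflows depend on -- never a specific vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class OpenAIAdapter(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real OpenAI SDK call would go here.
        return f"[openai] response to: {prompt}"


class AnthropicAdapter(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real Anthropic SDK call would go here.
        return f"[anthropic] response to: {prompt}"


# Config-driven selection: change a setting, not the workflow code.
PROVIDERS = {"openai": OpenAIAdapter, "anthropic": AnthropicAdapter}


def get_model(provider: str = "openai") -> ChatModel:
    return PROVIDERS[provider]()


# Workflow code only ever sees ChatModel:
model = get_model("anthropic")
print(model.complete("Summarize this support ticket."))
```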

That's really smart.

And, this might be, I don't know, a question for you personally or just for how Engine is thinking about this more broadly, but how do you keep track of it all?

you know, oh, there's this new AI tool.

Let's try it out, you know, or there's this new startup, this new vendor that could do something really, really cool.

How are you actually evaluating like whether or not that's hype or whether or not that's something that I should pursue?

Are you personally just experimenting with stuff all the time?

I know personally, I'm constantly like, there's a new AI tool out.

I'm over there trying it out.

Oh, it didn't work.

Moving on to the next thing.

But it also is really hard and it can be a bit distracting.

And I know we've made investments in AI tools that we thought would help us in certain ways that ended up just not working out at all.

Or like fast forward two weeks and the platform we were already using now offers the same function.

And you're like, well, now I invested in this thing that I shouldn't have.

So, yeah, I'm just kind of wondering how you're evaluating like when to step in and say, yeah, let's use that personally and then just like from a business standpoint.

Yeah, it's, I mean, geez, this like, this landscape of tech is crazy, right?

And I think, you know, we've all looked at Scott Brinker's MarTech landscape, whatever.

It's probably like a hundred thousand right now.

And like now you look at it, like AI has just kicked that one to the curb because there's just so much happening all the time, right?

Like every week, there's a new AI-first, you know, startup that's revolutionizing this piece of the rev stack.

And when it gets down to it, you know, realistically, these vendors offer about 95% of the same functionality, right?

And then there's this, you know, niche five percent.

And so to your point, it's like, and I, I'm a tech snob, right?

Like I've got rev tech review.

Like I love looking at like what's new on the market, but like the distraction is real, right?

Like, you have to make sure that you're moving forward the right way and not pivoting off of, back to what you talked about, like, we wouldn't have been able to launch in 14 days if we kept trying to touch everything new that came onto the market.

And so for me, it's like, before we can start something new, you have to explain what we're going to stop doing or what we're going to consolidate to, and really make that case.

Right.

The one thing that I do appreciate about the market right now, that I would say probably wasn't the case three years ago, is that most of these places offer free trials, right? Or they have quick demo accounts. So, and this would be my warning to anyone: if you are buying an AI solution, or considering buying an AI solution, and you're not running a pilot on it, you're probably going to get hosed.

Like, full stop, they've got to put their money where their mouth is when it comes to the tech, and you have to truly be able to pilot it, because that is the only way you're going to truly know if it works for your business and really be able to evaluate it against the market.

And so, that's been probably one of my favorite things about this: I can go look at, you know, an n8n or something like that and be like, let me just give it a try, right?

And like, see, can it do what I need to do?

And like, there's so much more like self-service on that.

I'd say the other big thing, though, that has shifted in the way that I buy is, it's not just the tool, it's now the partner, right?

So, back to like, yeah, we're buying from people, right?

Like, the best vendors are the ones that are going to be like the extension of our team.

They're going to understand our business.

They're going to be responsive when things break.

They're going to, they're going to be builders with you, right?

And like really invest in your success.

Because, in my opinion, it's not going to be the tech that separates the winners and the losers in this space, because the tech is now table stakes.

It's going to be the relationship and the business side that is going to really push this forward, right?

Because like at the end of the day, like if I'm looking at four call, you know, call analytics platforms, there's going to be feature parity and tech parity across all of them.

It's which one's going to be the better partner?

Yep.

Yep.

And it's not going to be the AI agent sales rep that you're talking to, right?

It's going to be the real person that you can get on the phone and actually like have a, have a relationship built with them.

It's going to be the one that flew their implementation team to our office to like help do enablement and configuration and setup and like is the one that wants to be at the table and the one that's pinging you when they're like, hey, look at what our other client built.

You guys should consider doing this.

Like those are, that's how you're going to win in this like vendor landscape.

Like, obviously, you have to have the best tech, which like now is table stakes, which I think before like maybe wasn't always the case, but like now it's like you have to be the best partner.

Yeah, I love that.

So I've talked a lot on this show about overhype with AI, and I think I've dove pretty deep into like the things that maybe aren't actually feasible.

So I want to flip the question around a little bit to you and say, what's underhyped?

Like what's something that you've seen that you've implemented that you've been, you know, hands-on with all this stuff that you're like, I can't believe these other companies aren't actually doing this yet.

Yeah, I'd say like process intelligence is super underhyped right now.

Like we're all focused on like the sexy stuff, like AI writing emails and analyzing calls.

But if we can get AI to actually understand our processes and start to suggest optimizations there, even at, you know, the top-line level of, oh hey, you should try this or that, right? But where this gets really powerful is at the rep level: hey, Lacey, your deal is stalled in stage three because you didn't send the specific piece of content that the buyer could really learn from at this moment. More of that recommendation engine at a more prescriptive level, I think, goes such a long way, and then being able to actually aggregate that up into, how do we optimize the process?

And right now, I think a lot of RevOps teams are still doing process analysis manually, right?

Like we're looking at dashboards and trying to spot patterns.

And like, let's be honest, like CRM reporting and like even BI reporting really sucks at this, right?

Like even like, I'm sorry to be crass, but like even Google Analytics, like when you look at like your website journey orchestration, like it still doesn't really give you that prescription and like help you visualize the experience in the right way.

So I think like from my, from my standpoint, like if we can get AI doing like continuous analysis and like recommendations of like patterns and insights, like that's going to help like change the revenue engine.

And that's where I see some tools on the market starting on this, but really, how do we do this at scale, embedded at the level you need it to be?
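As a toy illustration of that rep-level, prescriptive idea: compare a stalled deal against a playbook rule and surface a specific next step. The stage names, thresholds, and content mapping below are all invented for the example.

```python
from dataclasses import dataclass
from typing import Optional, Set


@dataclass
class Deal:
    owner: str
    stage: str
    days_in_stage: int
    content_sent: Set[str]


# Hypothetical playbook: content each stage is expected to include.
PLAYBOOK = {"stage_3": {"roi_one_pager", "security_overview"}}
STALL_THRESHOLD_DAYS = 14  # invented threshold


def recommend_next_step(deal: Deal) -> Optional[str]:
    """Prescriptive nudge instead of a dashboard the rep has to interpret."""
    expected = PLAYBOOK.get(deal.stage, set())
    missing = expected - deal.content_sent
    if deal.days_in_stage > STALL_THRESHOLD_DAYS and missing:
        return (f"{deal.owner}: this deal has sat in {deal.stage} for "
                f"{deal.days_in_stage} days -- try sending {', '.join(sorted(missing))}.")
    return None


print(recommend_next_step(Deal("Lacey", "stage_3", 21, {"roi_one_pager"})))
```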

Yeah.

Oh, I love that.

That's great.

So anyone listening that wants a startup idea, steal that one.

Yeah, take that and build that for me.

Happy to, happy to give you, give you insights.

Yeah.

Molly, this has been fantastic.

I got one thing I'm going to do.

I'm going to throw this over to Rose, who's got our lightning round questions.

This is just a quick round of short questions for you to answer as fast as you can.

And we'll hold you to that.

If you start answering too long, we're going to pause it and make you stay with the lightning round.

Okay, first lightning round question.

What's a common assumption about AI or RevTech that you're actively pushing back on right now?

That AI is going to replace people.

People are obsessed with headcount reduction and cost savings, but like that's missing the point entirely.

Totally.

Okay, go deeper.

I actually want you to go deeper on that, Molly.

Yeah.

So yeah.

So I think the biggest assumption, one that, you know, I'm not actually fighting at Engine, but I see a lot of my peers fighting, is how they use AI to replace, you know, headcount.

Like we're all obsessed with OpEx and headcount reduction and cost savings, but like that's not the point of what we're trying to do here with AI, right?

Like the real value is like, how do you make your existing people more effective, right?

Our customer service teams are not smaller because of Eva.

Like, they're handling more complex issues.

They're building deeper relationships.

Our sales teams aren't getting replaced by AI.

They're closing more deals because they're spending more time on strategy and engagement versus research.

You know, it's like that pushback of like, how many people's roles can we eliminate with AI?

Like, I think the better question here is like, how can we help our people do their best work?

And I think if you frame it that way, like AI is a growth enabler, not a cost-cutting exercise.

So we've got to change that perspective.

Wow, mic drop.

That was amazing.

It was not a lightning answer.

No, that was fantastic.

All right.

Number two, when you think about the work you're most proud of this year, what stands out?

Yeah, I think just in the short time that I've been at Engine, just in eight months, right, it's how much we've changed the culture around AI. Even knowing we came into something that was so open to digital transformation, it still was, you know, a little mysterious and not as broadly adopted. And now it truly is part of our culture and how we operate in our daily work, right? We've got office hours, we've got Slack channels. We have really, you know, not just implemented AI, we've democratized it. We talk about it and we share our wins, and we also create an environment that's really about building an AI-literate organization, regardless of role, just being able to let anyone in any role continue to learn more from each other.

That's amazing.

All right.

Number three, what's one mistake or false start you've learned the most from in the past six months?

Assuming adoption would be a lot easier than it actually was, right?

So I thought, like, I thought if we just like showed people what we could do, the value, they'd be like, all in, right?

Like, even if I'm showing you, I'm saving you 10 hours a day, like, that wasn't good enough.

So, just really still investing in the change management.

I think my lesson was like successful AI implementation is like 70% change management, 30% the tech.

So, like, even with the best AI and tech in the world, like, if people don't trust it, you, you're not going to get the adoption.

Trust and change management have been huge overarching themes, I feel like, of our past few interviews.

It's so interesting.

I'm not seeing that as much, at least on my LinkedIn feed.

I feel like it is more sexy, flashy AI conversations or maybe some like fear-mongering.

But yeah, I love that conversation a lot.

I think we're going to see this world where enablement is going to be such a big, big push, and true digital transformation enablement, the kind of resources and instructional design and that type of management around this, is going to be really interesting.

Hey, maybe we'll bring you back for a part two, Molly.

Yeah,

I got to figure that one out because I still suck at the enablement side of things.

All right, last lightning round question.

We ask this to every guest that comes on the show.

What's one experience you've had recently as a customer that left you impressed?

Yeah, absolutely.

So now I'm like obsessed with chatbots, right?

And now that we've built Eva, it's my favorite thing to mess with them.

And so I recently bought shoes from an online retailer.

And like when I contacted support, like they knew my recent purchase history, right?

Like I had already kind of been logged in.

So it's like, oh, yeah, this is the, this is what you bought.

Like, what do you want to do with it?

Right.

Is it like, okay, I need to process a return, right?

So it gave me those prompts because it already knew.

And like, I didn't even have to explain the problem, right?

I didn't have to go in and be like, hey, I bought these shoes.

And it's like, what's your name?

What's this?

Blah, blah, blah.

Right.

It was like, hey, Molly, we know you're here, we know you just made this purchase. Is there an issue with this purchase? Do you need to do a return? Let's just walk through that. They just knew, right? And I think from an experience standpoint and a customer expectation, it's like, I expect you to know, right? Like, how did you just make that so easy on me?

What company is this?

Um, it was American Eagle, actually, which was super impressive.

Shout out, American Eagle. Wow, so cool.

Yeah.

Okay, well, that concludes the lightning round. Thank you, Molly.

Yeah.

All right, Molly, well, this has been so much fun. Where can listeners find you and where can they follow along with Engine's journey as you guys continue to invest more in technology and grow, as it sounds like you guys are expanding?

Yeah, find me on LinkedIn.

I'm pretty active there.

And, you know, Engine, we're continuing to share what we're doing on social media as well.

And check out engine.com, especially if you've got a business trip coming up.

We'd love to help you travel.

I'm literally planning my Dreamforce trip using engine.com.

So we got you, Lacey.

Awesome.

All right, Molly.

Thanks so much.

Take care.