The Surprising Future of AI with Fathom’s Founder - Richard White

50m

In this candid and fast-moving episode, Charles sits down with Richard White—founder and CEO of Fathom AI, the top-rated AI note-taking platform on G2 and HubSpot—to unpack the truth behind the AI gold rush. Richard shares why only 5% of internal AI initiatives actually succeed, and what separates innovation from illusion in today’s hype-driven market.

Together, they dig deep into the harsh realities of building and buying AI software—from skyrocketing failure rates and short model lifecycles to the “LLM treadmill” that forces companies to constantly rebuild just to keep up. Richard breaks down why most big corporations are struggling to adapt, how startups can outmaneuver them with speed and focus, and why the future of work will favor the few who learn to “think with AI.”

The conversation stretches beyond business—exploring the coming social upheaval, the rise of one-person billion-dollar companies, and the ethical crossroads of automation, employment, and human creativity. Both Charles and Richard keep it honest, funny, and forward-looking as they challenge listeners to rethink what it means to lead, learn, and stay relevant in the AI age.

This isn’t just another talk about artificial intelligence—it’s a survival guide for entrepreneurs, employees, and visionaries navigating the most disruptive technological shift since fire itself.

KEY TAKEAWAYS:
-Why the future belongs to those who learn to think with AI, not compete against it
-How automation is paving the way for one-person billion-dollar companies
-The ethical and human implications of an AI-driven economy—and how to stay grounded amid disruption
-The mindset shifts needed to stay relevant, creative, and adaptable in the AI age

Head over to provenpodcast.com to download your exclusive companion guide, designed to guide you step-by-step in implementing the strategies revealed in this episode.

KEY POINTS:
01:18 – The 5% reality check:
Richard opens up about why 95% of internal AI initiatives fail—while Charles unpacks what separates companies that truly innovate from those just chasing the hype.
04:42 – From bootstrapping to breakthrough:
Richard shares the journey of building Fathom AI into a top-rated platform—while Charles highlights the timeless lessons in execution, focus, and product-market fit.
08:15 – The LLM treadmill explained:
Richard reveals how rapid model updates force companies to constantly rebuild—while Charles reflects on why adaptability is now the ultimate competitive edge.
13:28 – The illusion of enterprise AI:
Richard breaks down why corporate AI projects struggle to deliver results—while Charles explores how small, agile teams can move faster and smarter.
20:04 – Thinking with AI, not competing against it:
Richard discusses how humans and AI can complement each other—while Charles reframes the idea of “AI replacement” into one of “AI augmentation.”
26:51 – The one-person billion-dollar company:
Richard predicts a future where automation and leverage allow individuals to achieve massive scale—while Charles examines what this means for the workforce and leadership.
34:22 – Ethics, disruption, and the human future:
Richard warns about the social impact of AI’s rapid acceleration—while Charles challenges listeners to shape technology with purpose, empathy, and accountability.
41:10 – Staying relevant in the age of acceleration:
Richard closes by sharing how curiosity and lifelong learning keep innovators ahead—while Charles reminds us that the real advantage isn’t AI itself—it’s how you use it.

Listen and follow along

Transcript

Welcome to the Proven Podcast, where it doesn't matter what you think, only what you can prove.

Richard proved it in a time where everyone's trying to be successful at AI and they're rushing around.

He did it five years ago.

He's the CEO and founder of Fathom.

He's also a really great guy until he starts telling you the unforgiving truth of what's actually going to happen with AI in the next 24 months.

It's terrifying.

Anyway, I hope you enjoy it.

The show starts now.

Hey, everybody, welcome back.

I am excited to have you on the show, Richard.

Thank you so much for joining us.

Hey, thanks for having me.

So, so for the four of my people who don't know who you are, can you explain what you've done, what your success has been?

Sure. I'm the founder and CEO over here at Fathom AI. We are the number one AI note-taker on G2 and HubSpot. No one likes taking notes on their meetings, and so we have basically an AI that will join your meeting, record it, transcribe it, summarize it, write the notes, write the action items, fill in your CRM, you know, Slack it to you, email it to you, you name it, so that you can just focus on your conversations and not doing a bunch of kind of data entry work.

So I think most people are familiar with your product.

I think the stuff we're going to talk about now is stuff that people aren't familiar with, about the reality of AI.

A lot of people think AI means artificial intelligence.

It also means always incorrect.

There's also a side of this that you believe about what it means for you as well and some of the harsh realities of what AI does.

Can you kind of share what some of those harsh realities are?

Yeah, I mean, I think one of the things, you know, I've been doing software for 20 years and

AI has completely upended how we think about building software.

Yes.

It's made it much more of like an R&D process now, whereas before it was more of like a manufacturing process.

It's also made the failure rates much higher, right?

Like, you know, we, it takes a long time to sometimes ship an AI feature because it'll fail three times before you get something to work.

Um, and so that exists for both when we're building features for our product.

It also exists like when we're trying to buy AI products to basically, you know,

move our business forward.

We actually have a goal at Fathom of getting to 100 million in revenue while staying below 150 employees.

And And so we have this big emphasis on efficiency and automation.

And it's interesting because, you know, I just gave this talk where people expected me to talk about how we've transformed everything with AI, and we actually have like a 60% failure rate on AI initiatives.

So I think there's a lot of really interesting gotchas when you're trying to build or deploy AI solutions.

So what you're trying to tell me is that AI isn't the holy grail.

All of a sudden, I'm not going to start floating and curing cancer because I was bored on the toilet one day.

That's not how things actually work.

Damn you, man.

You've ruined it all for us forever.

I'm so sorry.

So as we go into these, and you're talking about failure rate, what do you mean by failures?

I mean, at 60%,

that's, I mean, I wouldn't get on a plane that had a 60% failure rate.

I mean, I would get married, because that's a 62% failure ratio.

But okay, I will get on a plane that has a 62% failure rate.

What do you mean there's a 60% failure ratio in AI?

I mean, so actually there's an MIT study that just came out and said, like, the average company right now actually has a 95% failure rate on, like, AI initiatives.

What I mean for us is basically, like, whether it produced the outcome we wanted.

And I think that's actually the hardest part is like in the AI land, it's easy to get it to produce something.

It's easy to get the AI to spit out something.

Right.

Hard part is getting it to spit out the right thing.

And what is the right thing?

So for example, in our business, like, you know, I could, you could build an AI that gives you an accurate summary of a meeting that's six pages long, but accurate may not be enough.

Like that's too verbose.

I don't want a six page, you know, it's a 10 minute meeting.

I don't want six pages.

So there's like this whole new nuance of, like, quality that I think is hard for us to judge.

We're not used to judging it, right?

We're used to software as binary.

It works or it doesn't.

I click the button, the thing moves on the screen, right?

And now we're in this world where like I click the button and it spits out some words.

I'm like, are those the right words or not?

Right.

It makes a judgment call.

Is that the right judgment call or not?

And so I think one of the things that's really changing everything is we have to rethink how we evaluate tools because we have to actually get in there.

And it's almost like evaluating a hire, right?

It's more like a hire, because you're basically buying thinking, not features, now.

And so it kind of upended how we think about purchasing products.

So I can't even get ChatGPT not to put dashes in the damn responses that it gives me, and I can't tell you how much cursing I've done at that thing.

You're talking something significantly higher.

How do we get it to produce content that we actually want or go from that 10-page, you know, dissertation that's so verbose into what we want?

How do we do that at the home level for the everyday consumer?

And then also, you know, as the CEO of a very successful company, because every single meeting I'm in, your damn software is there before anyone else joins.

Thanks for that.

I'm a little angry about you with that one.

How do we do that at both the personal level and the professional level?

Yeah, I mean, it's actually that same study said that like the success rate for things like ChatGPT is like actually 40%, which is still not great, but way higher than 5%, right?

And I actually think AI is easier for individuals to use because individuals are basically taking ownership of that output, right?

Like it's like, oh, oh, it's writing this email for me.

And yeah, I hate that it always puts the em dashes in there too, but I can at least remove them.

Where it becomes problematic is when we're using these things at scale and no one's basically been properly equipped to QA the thing.

You know, we have a whole team at Fathom where all they do all day is play what I call kind of an AI version of Jenga, where all day we are experimenting with, like, you know, basically models and use cases, right?

And is this model good at this use case?

Can this model find action items from a transcript?

And I call it Jenga because if you push on a block and it, you know, it gives any resistance, you give up.

You find another block that moves smoothly, right?

Because there's this weird kind of problem you've got now where you've got so many models with differing kind of performance parameters, cost parameters, and so many different things you want to do.

So it's like this really big problem where you basically need a full-time team.

Either you're building it or you're evaluating it, like, you know, evaluating multiple vendors in parallel and trying, okay, we're going to hire three vendors.

We're going to put each of them on a 90-day pilot, which by the way, we make every vendor give us a 90-day pilot for AI.

We're going to have a whole team that QAs it.

And when we don't do that, it almost never works.
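Editor's note: for readers who want a concrete picture of the model-versus-use-case testing described here, below is a minimal sketch of what such an evaluation harness might look like. The model names, the extract_action_items placeholder, and the pass/fail scoring rule are illustrative assumptions, not Fathom's actual tooling.

```python
# Minimal sketch of a model-vs-use-case eval harness (the "AI Jenga" idea):
# run several candidate models on the same labeled cases and keep whichever
# clears the quality bar. All names below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class EvalResult:
    model: str
    passed: int
    total: int

def extract_action_items(model: str, transcript: str) -> list[str]:
    """Placeholder for a real LLM call via whichever provider SDK you use."""
    raise NotImplementedError

def is_good_enough(predicted: list[str], expected: list[str]) -> bool:
    """Crude check: did the model recover every expected action item?"""
    found = {p.strip().lower() for p in predicted}
    return all(e.strip().lower() in found for e in expected)

def run_eval(models: list[str], cases: list[tuple[str, list[str]]]) -> list[EvalResult]:
    results = []
    for model in models:
        passed = sum(
            is_good_enough(extract_action_items(model, transcript), expected)
            for transcript, expected in cases
        )
        results.append(EvalResult(model, passed, len(cases)))
    return results

# "Jenga": if a block (model) gives resistance on this use case, move on to one that doesn't.
# best = max(run_eval(["model-a", "model-b"], labeled_cases), key=lambda r: r.passed)
```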

So when, when new GPTs or new models come out, there are so many times where I personally, I've spent so much time training my old model.

and trying to teach it and say, hey, do this, do that.

And I have very specific calls for it to do that.

When a new one comes out, do you guys over at Fathom have the same pucker emotion that we have on our side?

Like, oh, God, everything's about to blow up again?

Is that something you guys are facing as well?

Yeah, on two dimensions.

Well, I mean, one, we get excited because usually the new models unlock something for us, right?

For example, GPT-5, for the lackluster kind of reaction it got from the market, did actually solve a significant problem for us.

Hallucination rates are way down.

And that actually opens up a whole new class of problems that we were trying to solve before, but couldn't.

But it causes also other problems in that none of these models are forward compatible, right?

You get something working on GPT-4.

It's not necessarily going to work the same on GPT-5.

And even more problematically, and I think this is something that everyone in the industry is starting to realize, the EOL cycles on these LLMs are now measured in months.

So Anthropic puts out Sonnet 3.5.

They put out six months later Sonnet 3.7.

Sonnet 3.7 is more powerful, but now there's a limited amount of GPU compute in the world, right?

And so they're shifting all of their compute to this new model.

So now you end up on what we call the LLM treadmill, where if you don't upgrade your models, all of a sudden you find out you're getting all these errors because there's no compute to service them.

And so now you're spending as much time upgrading your models as you are basically building new stuff from scratch.

So the maintenance load on these tools and processes is way higher than anything you've ever seen in software.
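Editor's note: one common way teams cope with the treadmill described above, offered purely as an illustration and not as a description of Fathom's stack, is to keep model identifiers in a single config with a declared fallback, so a deprecated model can be swapped in one place rather than hunted down across the codebase.

```python
# Minimal sketch: centralize model choices so an end-of-life model is a
# one-line swap. Model identifiers and call_model are hypothetical.
MODEL_CONFIG = {
    "summarize_meeting":    {"primary": "provider-a/model-v2", "fallback": "provider-b/model-v1"},
    "extract_action_items": {"primary": "provider-b/model-v1", "fallback": "provider-a/model-v2"},
}

def call_model(model_id: str, prompt: str) -> str:
    """Placeholder for a real provider SDK call."""
    raise NotImplementedError

def run_task(task: str, prompt: str) -> str:
    cfg = MODEL_CONFIG[task]
    try:
        return call_model(cfg["primary"], prompt)
    except Exception:
        # Deprecated or capacity-starved model: fall back rather than fail outright.
        return call_model(cfg["fallback"], prompt)
```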

It's one of those things, and I'm going to date myself here, but in the original Warcraft 2, because I'm that old, before, okay, so I can see by the smile you played it, before I would go and attack the orcs or attack the knights or whatever it was, I would save the game with my military formation.

Okay, if this doesn't go well, I'm going to go attack.

If they all die, I can just go back again.

I wish that existed inside ChatGPT, or any GPT, where we're like, okay.

You can try and do this.

Quit giving me dashes, or I want you to word it this way.

And then for some reason, AI becomes always incorrect and it just goes off on a tangent.

I'm like, excuse me, sir.

Can you just go back 30 seconds?

That would be nice.

And then it sounds like what you're saying is, hey, I did this great game.

Can I pick it up and drop it in over here as well?

And it seems like both of those things are absent in the market, even at the highest levels, which is where you are.

Yeah, that's right.

I mean, I think a lot of advice I give to companies is like, if you can, try to solve a problem with building something in-house.

But know that that in-house solution has like six to nine months of shelf life and know you're going to throw it away and probably buy a vendor at some point.

Right.

But by building it in-house, you have a better sense of like, cool, we at least know we got it to do the one small critical thing we needed to do, right?

A lot of vendors throw a lot of things at you, right?

We have 10 different features and two out of 10 of them work sort of thing.

But yeah, this is, it's kind of like this whole new, again, this whole new paradigm.

It's very much an R&D lab.

It's very much a, not an assembly line, right?

It's like, it's not as predictable as what we had before this in SaaS.

I wish this was something new in a sense of tech, because I remember, because I'm old enough to remember when, you know, we had the dot-com boom and everything was going on with the internet.

We're like, oh, this is going to be amazing.

And then, you know, pets.com is going to be amazing.

And this is going to be amazing.

And obviously just blew up all the time.

So not just on the personal level, but the professional level, companies you thought would be Fortune 500 companies and be there forever were gone two, three weeks later.

Are you seeing that, where established companies are sitting there going, oh shit, you know, the light at the end of the tunnel is not a light.

It's a train.

We got to adjust because what worked today just won't even exist.

And how short is that timeframe?

I mean, I think the exciting thing as an entrepreneur right now is that a lot of the big companies are really struggling to release good AI features because it breaks their paradigm of how to do software, right?

They're using this assembly line where it's like, how do we build software?

We say we want to build this feature.

We spec it out.

We build it for three months.

And then, oh, you know, we click the button and it moves, you know, 10 pixels to the left.

We're done.

Right.

And it requires a whole new way of doing QA that most of these companies are not good at doing, which is why I think if you look at most of the big new AI features from a lot of these companies, they're really mediocre, right?

Like, because they just don't know, they don't have the muscle in the company of knowing what is quality, right?

They don't know how to judge basically like subjective quality.

And they're still looking at it from their kind of like objective, like, did it do the thing?

Did it spit out words?

Yes, great.

Pass QA, ship it sort of thing.

So I actually think there's a challenge when you're buying software, because a lot of times the bigger incumbents actually have inferior products to the new startups.

New startups have their own problems, right?

Of like, you know, instability and whatnot.

But if you're an entrepreneur, I actually think it's a fantastic time because it's like the incumbents are completely out of their depth in how to build software in this new era.

And so I think it's exciting, actually, as much as it is also terrifying.

Yeah, I think the best example I've heard of this is imagine you're on a train that's going as fast as possible and you're on one car of the train and you're fixing as much as you can, but all of a sudden it's going to uncouple and that car is going to be gone.

So you better jump or good luck.

I wish you nothing but the best because that's just the reality that we're going to be in.

So as someone who's kind of at the tip of the spear, who has become very successful with what you're doing and has created a company that, as much as I do hate your thing showing up to all the meetings, is something that everyone uses.

Where do you see AI?

Because everyone's like, oh my God, it's the greatest thing since fire.

And there's other people who are like, oh, my God, it is fire.

It's going to burn down my house.

There seems to be people who are very polar opposite.

Either you're completely madly in love with AI or, oh, my God, it's the devil incarnate.

And they have this paradigm shift.

Where do you see it going?

Since you are, again, you're in there, you're with the CEOs, you know what's going on better than the regular person would.

How does this look in five years?

Yeah.

I mean, the one thing I will say is this is to me the greatest technological shift of my lifetime.

There's no question it's bigger than mobile, bigger than social.

You know, I don't know if I'd say bigger than the internet itself, right?

Like there is a real "there" there, right?

Like, for all the failure rates and stuff like that, there's also, the denominator is huge, right?

Everyone's trying to, because this is the closest thing I've seen to magic.

Um, one of the challenges with this is like, yeah, what is, you know, I have board meetings and we're kind of talking about like, what, you know, what's our five-year plan?

What's our 10-year plan?

I don't know.

If you get to AGI in five years, does anything really matter?

Can you really plan beyond AGI type things?

Smarter people than I have opinions here, but I think the real question, kind of the open question right now in the market, is this:

you know, my kind of core friend group, the same folks that I kind of leaned on five years ago, before Gen AI got good, to make me feel confident building a business betting on Gen AI getting really good.

It's kind of like we started this company in 2020, 2021, we launched and we put AI in the name of the product and all my investors are like, what are you doing?

Everyone hates AI.

And it's easy to forget that it was only four years earlier that AI was being marketed, in 2015, 2016, 2017, and it was terrible, right?

It was not AI.

It was, you know, it was, it was basically fraudulent kind of stuff.

But now we're at this point where everyone's like, oh my God, AGI is going to happen in two years.

And, you know, there's some people who still believe that we're going to keep accelerating.

I think that group of people that I'm kind of surrounded with thinks it's about 50-50 between, like, we're going to reach a plateau of what you can do with the current tech, and, we're going to find our way to the next, you know, step up.

It's clear that we're getting diminishing returns from the current generation of transformer-based AI, like GPT-5.

I think everyone kind of sees all the latest models are now more optimized for efficiency.

They're not, like, wildly smarter than the previous model, but they're cheaper to run, which is important for the companies running them, you know, for their margins and all that sort of stuff.

I've kind of taken the approach where we have to kind of assume that things are going to slow down, because if we assume they're going to continue to accelerate, it's almost impossible to plan for anyway.

So,

and we're kind of, again, I think GPT-5 was a good data point of, like, okay, it seems like we're heading towards a plateau, and we're waiting for whatever the next thing is after transformer models alone.

Um, but it is the most volatile market I could ever imagine, right?

Like, you know, this company has been, objectively, a rocket ship by the last 10 years' standards.

And we're now just doing pretty good by modern standards, where you see companies go from zero to 100 million, a billion in revenue in two years, right?

It's and then go back down to zero two years later, right?

Like, look at Jasper and stuff like that.

So it is an insanely volatile market, full of tons of opportunity, but how long-lived those opportunities are, I think, remains to be seen.

Yeah, I think to your point of what does this mean to the human race,

I will give a little bit of pushback.

I don't think it's better than the internet.

I don't think it's better than the Industrial Revolution.

I don't think it's better than... the only thing better than this is fire.

That's as far as the human race is concerned.

This is, this is fire, as far as what it could do.

Now, fire was good and bad.

It could burn down your entire village, yes, but it also makes good food.

We could, you know, do these things.

As far as I'm concerned, from what I've seen with it, AI is as good as fire.

Now, what that means going forward,

good luck.

I wish you nothing but the best because it's going to be pretty interesting.

You mentioned there's companies that go from zero to a billion-dollar valuation and then two weeks later, gone.

Do you think we're going to see in our lifetime the first hundred million dollar company run with just a single employee?

Do you think that's going to happen?

Yeah, I mean, I think Sam Altman talks about the first billion-dollar company with a single person, right?

I think that's highly possible.

And then you can extrapolate all the concerns you have about, like, societal upheaval and wealth inequality from that

pretty easily.

Right.

But yeah, no, I think that's perfectly reasonable to expect.

Yeah.

And I think what it, and this is something that people don't understand, this is no longer a luxury.

We don't get to sit back and say, hey, I wonder if this is going to happen.

I wonder if this is going to affect me.

This is going to create wealth distribution issues on the level of basically India.

When you look at how people are distributed, especially here in the United States, you're going to see that.

So for those of you who are playing at home who might not understand everything that Richard's talking about and what he's doing, you do not have the luxury to sit on the sidelines.

So either you're going to be panhandling or you're going to embrace AI because this is just, this is what it is.

This is electricity.

So if someone's walking into that and they're like, oh my God, this is terrifying.

You know, you're telling me that, hey, I need to embrace it, but then you're telling me the company's going to disappear in five months.

When you're an entrepreneur, you're like, oh, God, I have to go into this.

I know I have to go into this, but I could get punched in the face or I most likely will.

How do you advise entrepreneurs?

How do you advise business owners?

And hey, these are some proven tactics that work.

Let's, let's do these.

Just do these for now.

Make sure that if you do get knocked on your butt, you can get back up somewhat gently and go from there.

What are the things you advise with?

I mean, honestly, I think there's never been a better time to start something that's really narrowly focused, right?

You hear a lot about the big platforms that are, you know, again, like a Jasper, going from zero to 100 million and right back down.

But the real beauty of this stuff is like, it can get, you can really tailor the stuff to specific use cases, specific problems, and you can build faster and cheaper and better than you ever have before, right?

It's completely upended.

You can, you don't have to have a CS degree like I have and a team of six engineers anymore to build something useful.

You can just be a pretty good, you know, hobbyist prompt engineer, plus some Magic Patterns and some prototyping tools, and you can build something of value, right?

And so, you know, I remember 10, 15 years ago, everyone was kind of doing like the, you know, the, was it the lean startup stuff where they're like, oh, you know, they're selling stuff before they really even built it.

And, you know, that got taken to an extreme.

But now you literally can really narrow down and find a very specific niche, and you can build a really good, and I know this is kind of a pejorative in a lot of circles, but, like, lifestyle business out of, like, great, I've got the best new software that solves this one burning problem for car washes, right?

Like, yes.

And I actually think that's where a lot of the gold is.

I actually think a lot of the gold is at the application layer.

A lot of the investment and noise and all this stuff is all kind of at the like foundational layer.

It's all about who's building the big infrastructure stuff.

But that's a, you know, billionaire's game.

You need a lot of money up front to do that.

I think there's a lot of money to be made at the application layer sitting on top of these tools.

And if you can get good at bringing them together, that's where I think, I think that person that's going to be the, you know, single-person company doing 100 million in revenue, I don't think they're going to be a foundational model.

I don't think they're going to be something like Fathom.

I think they're going to be something that sits above something like Fathom, right, or above these

foundational models, right?

It just finds a really good niche that just happens to catch on like wildfire.

So I think that's for the entrepreneurs.

I think for the employees, there needs to be this conversation of what's happening, because you're seeing it in their orgs, you're seeing people where entire divisions are getting eradicated, people with master's degrees from Ivy League schools trying to get jobs at McDonald's right now.

And they're terrified.

As I rightfully think they should be.

Welcome to this new world.

When we were, I mean, when I was younger, being an entrepreneur was not sexy.

They did not like that idea.

Being into comic books, not sexy.

Being a dork, not sexy.

And then all of a sudden, now we're like, it's our time.

Our time has come.

And same thing with entrepreneurs.

The employees that I know are terrified.

They are fundamentally, they're like, hey, and they go back to their old model, which is: I'm going to go get another degree.

I'm like, that's not going to help you.

That's over.

Those times are gone.

So what do you say to those, you know, mid-level, medium management, you know, mid-level managers, kind of just, you know, senior directors, VPs?

What do you say to those guys who have said, like, I built and I have, you know, busted my butt to fit into this model of this process, of this American dream.

And as George Carlin said really well, he goes, it's called the American Dream because you have to be asleep to believe it.

If you no longer believe this model and you, you are no longer built for this and the thing you were built for does not exist anymore.

How do you adapt?

Yeah, I mean, that is the question, like that will be the question of the next five, 10 years, right?

I remember, you know, I was a big proponent of, like, I was telling everyone who would listen about UBI 10 years ago, and I was worried about truck drivers back then, right?

I was like, truck driving is the number one profession in like 30 or 40 states, right?

And it's going to, you know, it's going to go away soon.

And it's kind of funny.

It's really hard to predict these things.

I think everyone would have been sure that that was the first thing, the first kind of industry to get hit, 10 years ago.

Here we are, it's 2025.

And actually, no, it's artists, it's copywriters, it's pretty soon going to be lawyers, middle management.

It's all knowledge work.

Therapists.

Yeah.

Yeah, exactly.

And so,

so, you know, what would I say?

You know, honestly, it's like, there are no easy answers. I would tell you that, like, your fear is well-founded, first of all, right?

Like, and unfortunately, like, I'd love to sit here and tell you that you've got nothing to worry about, but I think you do, right?

You know, I think what you're seeing when you look at what's happening at, you know, college enrollment is down, trade school enrollment is up.

And I think, like, you know, the people that are kind of solving this from first principles, the folks coming out of high school, are looking at that and saying, gosh, you know, never been a better time to be in the trades.

Now, am I going to tell some VP to like, hey, you know, you should go back to community college and become a, you know, a plumber?

You know, I think that's a tough sell too.

I think there's like a middle ground where if you really become a student of this stuff, I still think there's a lot of opportunities the next couple of years, again, at the application layer where you could be the person that helps companies.

get from a 5% success rate that we're seeing to a 25% success rate.

And there will be a lot of opportunities there.

I think it really depends a lot where you are in your career.

I mean, I, you know, I've been building software for 20 years and I've always thought that like, you know, I can always fall back.

I know how to organize people to build great software.

I'm not sure that'll even be a skill set in five years, right?

That's correct.

You know, I very much plan like, if I don't have kind of an exit or retirement plan over the next five, 10 years, we need to be thinking about what value we can provide beyond that.

But I do think very tangibly, I think trades will be coming back in a big way.

I think, you know, there's a lot of opportunity for people to learn how to become experts.

You can be an expert in replacing your own job with AI.

That gives you a job over the next couple of years.

So, you know, we talked about entrepreneurs.

We've talked about employees.

We talked about where we think this is going and how this is the new fire.

What are some of the conversations that none of us are having?

Let me phrase this.

None of us other than you are having, in these boardrooms, with these people who are very much at the tip of the spear.

What are the things that you guys haven't made as public?

yet, if you can share them? Like, hey, this is what we're talking about.

And these are the things that keep us up at night, because we know what keeps the entrepreneurs up at night.

We know what keeps the employees up at night.

Here are us as, you know, founders.

This is what keeps us up at night as well.

I mean, I think,

you know, I think the boardroom conversations are more about, like, the pace of AI change and, kind of, how quickly you get disrupted. You know, it used to be you'd build a software company and you usually had at least 10 years before someone really disrupted you.

And now it's like five years, and pretty soon it'll be two years, where there's so much technological change that it just undoes things. You know, if you look at valuations for SaaS businesses today versus five years ago, it's dramatic,

right?

So in the boardroom, I think, there's a lot of conversation about that again, about, like, AGI and what would that mean?

Could that just, you know, render a lot of businesses irrelevant?

I think the conversation we should be having is the one we're kind of tiptoeing around, which is like, how do we as a society handle this?

There's a really good short book called Manna, M-A-N-N-A, by this guy, Marshall Brain.

Do you remember howstuffworks.com?

Awesome website.

The guy's actually from my hometown of Raleigh, North Carolina.

He wrote this like 25-page book, and it was kind of a tale of two cities.

One city, actually set in the US, that was like a dystopian AI future, where, like, the robots are in the ears of the humans, telling them exactly what to do, you know, walk 10 steps this way, turn over the burger, that sort of thing.

And another city where it's like, oh, no, a lot of the gains from AI are more shared across society.

And it's like, it's a little hyperbolic, right?

But I think really interesting thought experiment of like,

this is coming, and, you know, I don't know that we'll get as dystopian as one example or as utopian as the other. But I think everyone's busy fighting, trying to put the genie back in the bottle.

The genie's not going back in the bottle.

We need to talk about where we want to, like, put guardrails and push the genie in one way or another, right?

Like, and so

I think the other thing people are talking about is also, candidly, AI regulation.

The other thing I would say is like,

you know, I think a lot of folks in tech land voted for Trump.

And one of the reasons they voted for Trump is because he wouldn't regulate AI.

And a lot of folks see that basically there's an arms race between us and China around AI.

And there's this belief, right or wrong, that if China gets to AGI first,

if you believe in Western-style democracy, bad things happen, right?

And so I think that's another, there's like, you know, kind of so many different levels to this upheaval, but those are the three I would think about.

So where do you think things are going?

Because people do have this dystopian fear that all of a sudden it's going to be Terminator, right?

You're going to have the day it cuts over, and then the robots are going to take us over and turn us into cottage cheese.

Where do you think, and what's more realistic for that?

I think all the paths are still open.

I, you know, I don't, you know, I think...

That's not the answer I wanted to hear, but okay.

Yeah, I just peed on myself a little bit there.

I think all, you know,

I think we would be foolish.

I think there's a lot of folks in AI land that are concerned about AI safety.

Like a lot of the, you know, a lot of the kind of open revolt that they had at OpenAI a year ago was about this fear that, like, this thing was founded on the premise of AI safety and it seems to have gotten off that mission sort of thing.

So a lot of people way smarter than me seem to be very concerned with that.

And so I think, you know, don't want to be alarmist, but I think we should all be like alive to the danger.

This feels like a critical moment in

human civilization.

And everyone needs to educate themselves a little bit and do what they can to make sure we're nudging ourselves in the right direction.

So for all of you who have just caught the podcast, we've decided that we're all going to die.

We're all going to be out of jobs and it's completely over and it's a horrible time. So, okay, let's try to give people a little bit more hope about what's being done.

So there's a lot of conversation about

what AI can do and what AI has done.

And not only just the basic stuff with business, but what's been done medically.

Like, hey, we've made XYZ discoveries and we've pushed the envelope with that.

And hey, how we've looked at a problem that couldn't have been solved by humans for 100 years.

It's solved in 27 seconds.

So there are some amazing things with AI.

Can you kind of share some of your favorite ones that, you know, you've seen that have kind of personally ever been like, oh my God, I can't believe it just did that or it figured out that.

I mean, I think you just touched on the big one, just like a lot of the stuff you're seeing happening in healthcare, right?

Where, like, things that used to be really expensive, right?

Like analyzing scans, the early detection, like our healthcare system, my father was in emergency medicine for 30 years.

He was the first one to tell you we are really reactionary in healthcare

for a number of reasons.

But first and foremost, it's like it's very expensive to be basically proactive in healthcare because, you know, someone's got to analyze the scans.

They got to look at these blood markers.

They got to do all these things.

Both on the like kind of like the preventative maintenance, preventive medicine stuff, as well as research.

And this is going to drive down the cost of all that stuff dramatically to the point where, you know, you don't have to be rich to get kind of life-extending care well ahead of some acute medical crisis.

I think there's a lot of, I think that's going to be the thing we're going to look back at and be like, wow, we're going to cure, you know, hopefully cure or greatly reduce the harm on a lot of diseases in a very short period of time.

But it's going to be kind of the wild west in the meantime, because also our medical regulations haven't really caught up with that, right?

We now have to learn how to handle it.

But I think that's probably one area you can look at and point to and be like, there's gonna be a lot of good done there.

I think, you know, for all the disruption that we're gonna see with self-driving cars, that's also gonna be a place we're gonna point to, right?

Like, you know, the number one cause of death of people, the number one use of, like, urban land; like, you think about housing affordability, think about what happens when you don't have to dedicate, you know, 40% of your city to parking.

Think about what happens when, you know, people aren't getting in car accidents left, right, and center.

So I think there's going to be, you know, on the other side of this crucible, there are a lot of things that we will look forward to.

And the same way you look at the same thing with like the Industrial Revolution and stuff like that, there were a lot of painful things in that transition.

A lot of terrible things happened.

Humanity was better for that transition in the end, right?

But it will be...

I don't think we have to go as far back as the Industrial Revolution either.

Even with the IT boom, when technology kicked in, people are like, oh my God, these are going to wipe out jobs.

Yeah, they did.

When tech rolled out, when we had the dot-com boom and everything took off on the internet, it wiped out a ton of jobs.

But the job that you have right now did not exist before that.

The jobs that I did, the careers I had.

So yes, it will wipe out a ton of shit.

It will also create a ton.

So I think there is that.

I think to your medical point, you know, there's a difference between our DNA and the rest of our data.

There's a difference in how we measure those things.

Some things don't change because even if you die of cancer, your DNA, that's your DNA.

But the other stuff we can analyze and say, hey, you know what?

We say that everyone should take these medicines.

However, based off your stuff, your individualized goodies, you should be taking this.

I was sitting with the CEO, one of the companies that does that.

We broke down.

He's like, Yeah, let's run your blood work.

And within a day, he's like, Okay, this is what you need to stop eating right now.

I was like, I'm sorry, what?

He's like, Yeah, you're, I'm like, Yeah, but that's supposed to be healthy.

He's like, Yeah, for everyone but you, don't eat that.

Uh, he regrettably did not say that I could have ice cream every day.

So I'm still mad at him, but that's for there's a little, I was like, Well, I can't have ice cream every day.

What the hell?

So, there is that.

So, we get it.

We understand, I think, you know, for every single level that we're at, be it employee, entrepreneur, founder, there is this optimism and there's also a little bit of fear.

So as we get through that, I think having the tools and the techniques right now, like, what are the things, what are the tools that you're using?

Other than obviously everyone needs to use your software.

I get it.

Please stop using it on my meetings, you bastards.

So everyone needs to use their software.

What are some of the tools that you use every day?

And how do you use them differently than everyone else?

I mean, you know, I think everyone thinks that in Silicon Valley, we have a whole different set of tools than everyone else.

We actually don't.

I think there's, you know,

I'm done.

We're all using, you know, we're using ChatGPT.

We're using things like Magic Patterns, another one I love; it's basically an AI for generating prototypes.

Like if you want to mock up an interface or something.

So like we build a lot of prototype tools.

You know, at the high end, I think the secret is, to actually build good products with AI, you end up using multiple models.

Like any feature in Fathom, whether it's generating meeting summaries or finding action items or, you know, answering questions based on transcripts, there's a pipeline.

And we're using three or four different models from different providers in that pipeline.

We use, you know, some from Gemini, some from Anthropic, we use some self-hosted ones.

So at the high end, when you're actually building really sophisticated stuff and trying to build, you know, the highest quality AI, right, and take it to market, it's a whole different game.
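Editor's note: to make the multi-model pipeline idea concrete, here is a minimal sketch of what routing the stages of one feature to different models could look like. The stage names, provider and model identifiers, and the llm helper are assumptions for illustration, not Fathom's actual pipeline.

```python
# Minimal sketch of routing each stage of a meeting-transcript feature to a
# different model. All identifiers below are hypothetical.
def llm(model: str, prompt: str) -> str:
    """Placeholder for whichever provider SDK serves the given model."""
    raise NotImplementedError

def process_meeting(transcript: str) -> dict[str, str]:
    # Cheap, fast model for cleanup; stronger models for the judgment-heavy steps.
    cleaned = llm("provider-a/fast-cheap-model", f"Clean up this transcript:\n\n{transcript}")
    return {
        "summary": llm("provider-b/strong-reasoning-model", f"Summarize this meeting:\n\n{cleaned}"),
        "action_items": llm("provider-c/structured-output-model", f"List the action items:\n\n{cleaned}"),
    }
```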

But for an individual, frankly, I don't think there's a lot of secret stuff out there.

There's actually, I think, so much word-of-mouth adoption of these tools, right?

It's why all these tools go from zero to 100 million so fast because they're so good that there aren't a lot of like secret tools that people are using, right?

It is a lot of, frankly, ChatGPT,

you know, make, you know, et cetera, et cetera.

Yeah, I think the other thing that's really important is when you pick a new tool, you have to understand that what you used to do, how you used to operate,

will also have to change.

And the simplest example I can give of this is: we gave very specific PowerPoint presentations that looked a very specific way, at a very specific class level.

That took a ton of time.

We used a tool called, and again, I don't do sponsorships or affiliates.

I refuse to do it.

So this is a death.

We used a tool called Gamma.

And my team got a hold of it.

And they're like, I was like, okay, this looks completely different.

They're like, yeah, but we created 300 slides in a week versus a month and a half.

And I was like, okay, I guess our slides now look different.

So having that

adaptability was vitally important.

What are some of the ones that you've used that you're like, hey, okay, yes, I used to do it like this.

It doesn't work anymore.

That's hilarious.

That was actually the example I was going to give, right?

Which is like,

oh, I want my slides to look like this.

Gamma's great at getting slides out.

Are they going to be exactly what I had before?

No.

No.

But that's the thing.

It's like, you know, it's like using ChatGPT and Google search.

Is it exactly what I got out of the search results?

No, it's actually better, but you have to be flexible and, like, rethink: what do I actually need out of this tool?

Right.

Yeah.

Gamma would be the same example I'd give, right?

Like, I love creating... honestly, I do kind of waste more time in there now generating fun AI images with it.

I do too.

I'm glad you brought it out because I didn't want to be the first shameful one to say that.

I spend way too long in there just messing with the images because it's fun.

I'm like, wee!

Yeah, I'm a, I'm an image and two words on the slide kind of guy.

And our branding, uh, actually, we just rebranded and we put astronauts in it.

The probable reason we put astronauts in it is because I had so much fun. In every deck we've had for the last nine months, I've got astronauts fencing on the moon, astronauts fighting monsters, astronauts doing math with their helmets on. Like, I love it, right?

Yeah, yeah. Fun is not to be discounted in the workplace. It's worth doing.

No, it's still got to be fun out there, and get that rocking and rolling. I'm glad that you stepped up and said that you too are a dork like me, so I appreciate that you step into that world for me. So when people are sitting in there and they're looking at this, one of the things that they're concerned with is, you know, if I go to Google and I type in, you know, what's the best food in my city, I'm going to get thousands of answers. With ChatGPT, I'm going to get one, right? People are a little afraid of that. Like, okay, we're now getting to the point where I don't have the option to think on my own. I'm now being told, and I've now had the data so synthesized down to this one thing. Is that something you're concerned with as well? Because if I go to the library and there's one book on history, I know I'm missing a lot.

Yeah, I mean, there's a big concern about, like, you know, we've already had this kind of bifurcation, I feel, of what reality or truth is in America to a certain degree, right?

Well, what do you mean?

We're not going to get into that, but we'll, um, but uh, yeah,

it is interesting.

Like,

for as much as it tends to get things right, there are certain corner cases where it's really, really bad.

Right.

You know, I think my girlfriend the other day was, like, looking up some place that would, you know, sew something for her.

Right.

Oh, I need like a, and it gave her three answers and all of them were completely made up.

Like,

and

I mean, that one's at least easy to spot because you can easily verify, like, oh, that's not a real place.

But it is a little scary because we are outsourcing judgment. I mean, the reason why we like it is because we're outsourcing judgment, right?

Because who wants to go through a thousand restaurant recommendations, right?

I just want two or three.

Help me pick three.

But yeah, we're outsourcing judgment to this AI.

And that's why I think, again, I'm grateful at least that there are reasonable competitors.

And it does seem to be that there isn't as much moat in building foundational models as we thought.

Now, there's a ton of moat in that, like, you know, from a consumer brand perspective, ChatGPT has 98% of the market.

But I would encourage people to, like, get a second one, you know, whether it's Gemini, whether it's Claude, like,

you know, when you're skeptical, ask, get a second source, Grok, you name it, right?

Like, I think all the smart people generally are diversifying. You know, I don't just rely on one LLM to answer the question, right?

For that very reason.

I also think, if you are kind of trapped in one ecosystem, it's by your own choice, because no one traps you at this point. And to your point with, you know, the girlfriend asking for a place for sewing, I'm like, yeah, okay, cool, now go check Yelp and compare your options.

You're going to get it.

So having that cross-reference is important.

It's one of the things that I've coded into mine, which is one of the things I love about GPT so much.

I'm like, okay, if you give me an answer like this, always do this after.

And outside of dashes, it seems to resolve.

But I think everyone I know will just celebrate so much when the damn dashes and emojis are no longer included.

Stop it.

No one writes like that.

That doesn't sound like a human.

What is wrong with you?

So if anyone, by the way, on a side note who's listening to this knows how to get rid of the dashes permanently, please send me a message.

I will pay you for it.

It drives me out of my mind.

So with that said, OpenAI actually doesn't even know how to get rid of the em dashes.

I think I read something that they're aware of this.

They're like, we're not sure how this got in there.

It feels like it's the AI's fingerprint sort of thing.

I don't know.

It really is.

So, and it's funny because I will sit there and I will tell it over and over and over and over and over.

And now I'm just like, I'm dashless because it's just like I can't teach it.

So when people are like, oh, AI is so intelligent, it can learn.

And it's, no, you can't even get rid of dashes right now.

So just breathe here, sweetie.
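Editor's note: since the models themselves resist being steered away from em dashes, the blunt workaround is to post-process the output rather than prompt against it. A minimal sketch follows; the comma substitution is a crude heuristic, not a grammar-aware fix.

```python
# Minimal sketch: strip em/en dashes from model output after generation,
# since prompting alone often fails to suppress them.
import re

def de_dash(text: str) -> str:
    text = re.sub(r"\s*\u2014\s*", ", ", text)  # em dash -> comma (rough heuristic)
    text = re.sub(r"\s*\u2013\s*", "-", text)   # en dash -> hyphen
    return text
```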

So for those of you who are sitting there, okay, we've got tools, we've got adaptability.

With those going through, let's talk about what's next for you.

Not in five years, but what's the immediate next 90 days for, again, you're kind of at the tip of the spear with what you're doing over at Fathom.

What is the next 90 days for you having conversation with your staff?

Because you have to lead differently because now we're in an AI age.

How do you lead differently?

How do you show up differently in that environment?

How do you build 90-day plans?

Because anything beyond that, you're, yeah, come on, we don't know.

I mean, you know,

we've been fortunate in some ways.

This kind of has come back to our strength.

Even from the beginning of this company, we have always been like, we only build 90-day plans.

I actually think that, I think in a lot of companies, planning is this, like, art of self-deception and, like, false prediction, right?

Where it's like, if you're in technology, even before AI, you really can't know exactly where you're going to be in a year.

And so I think it's important to have hypotheses about the future, right?

We believe the future looks like this and not this, right?

You know,

but then we kind of react, we're more reactionary on a local level.

You know, I, you know, I mentioned our goal earlier of wanting to get to 100 million in revenue and have less than 150 employees.

That's way easier to achieve when you start from 10 employees than when you start from 500 employees.

Right.

And we're also a fully remote business.

And so

we're kind of pushing the envelope on two dimensions of, like, how do you use AI to basically streamline communication in a non-in-person org that doesn't ever see each other in person more than once a year?

But I'll tell you that, you know, right now it's still, I think, really exciting times.

I mean, we, for our business, the thing we've been really excited about is not just writing notes for meetings.

That's never been our goal.

Our goal is what happens when we get all of your meetings, all of your team meetings, all of your company's meetings into one data repository.

Because it's a really big data set.

It's really hard to move, you know, historically never been captured, certainly not structured.

But if you get all that into one place, we're finding a place where the modern LLMs can actually do really interesting things.

Like, we did an example with a prototype the other day where we said, hey, you know, Fathom, tell us: what's the history of transcription engines at Fathom?

And it went back to every all-hands, every engineering meeting for four years, and it wrote a six-page article about everything we've ever done.

You think about that for knowledge management, right?

Yeah.
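Editor's note: as a rough illustration of the "ask a question across years of meetings" idea, one plausible shape is retrieve-then-summarize. The embed and llm helpers, and the similarity scoring, are hypothetical placeholders, not how Fathom implements this.

```python
# Minimal sketch of retrieve-then-summarize over a meeting archive. The
# embed/llm helpers and similarity scoring are hypothetical placeholders.
def embed(text: str) -> list[float]:
    raise NotImplementedError  # placeholder for an embedding model call

def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a generation model call

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def answer_from_meetings(question: str, transcripts: list[str], top_k: int = 5) -> str:
    q = embed(question)
    ranked = sorted(transcripts, key=lambda t: cosine(embed(t), q), reverse=True)
    context = "\n\n---\n\n".join(ranked[:top_k])
    return llm(f"Using only these meeting excerpts, answer: {question}\n\n{context}")
```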

Also, seeing where your loopholes are and where your vulnerabilities are.

Say, hey, you know, you've listened to four years of my conversation.

I don't remember what I had for dinner last night, let alone anything else.

So being able to sit there and analyze, okay, where are the holes in our things?

What have we missed that was mission critical?

That's something that, because again, I love picking on Fathom because it shows up and it annoys me all the time.

It says, I want permission.

I'm like, bugger off.

But the ability to do that and then query everything down the road, that data set is invaluable.

Right.

Once you get to the point where it's like, you know, everyone hates meetings, but we love having great conversations.

Right.

And I think we're moving towards a world where you can have meetings and just kind of speak things into existence.

We could talk about it.

And we get done with the meeting.

It's done.

The SOW is written.

The email is drafted.

The power, the Gamma PowerPoint is already queued up sort of thing.

Right.

And we get to a world where we get this really interesting dissemination of knowledge across the org in like a fun way.

One of the things we're experimenting with is like, you know, everyone hates sitting in all these meetings where you're like, I didn't need to hear most of this.

How do we start building everyone, like, a customized podcast that listens to every meeting adjacent to your function and gives you kind of a briefing on what's happening across the org today?

There's just so many fun things you can do now that you literally couldn't do even six months ago with the LLMs we had then.

So I still think, you know, I still wake up every day feeling pretty optimistic.

Yeah, I look outside my window and feel less optimistic, but, like, I feel like we'll get there.

You know, humans, humans always solve things at the absolute last possible minute, but we usually solve them.

So

Churchill said it really well.

He said, Americans always do the right thing after they've tried everything else.

Exactly.

So that's, that's kind of where we are on this.

And I'm like, oh, God, here we go.

Here we go.

All right.

Just survive and hold your breath long enough.

Go in that one.

How are you dealing with, because a lot of, and this is getting away kind of from the AI, you've created a very successful brand, a very successful company.

It's all remote.

A lot of founders, a lot of owners of companies have problems with that.

Be it, you know, how do I keep my team motivated?

How do I keep them honest?

How do we keep them unified?

How do we build this cohesive culture?

So how have you survived and thrived in that environment?

I think, you know,

one of the reasons why I have this goal around 100 million

with less than 150 employees is I've had a lot of very successful friends that go IPO, get to really big companies.

And all of them say, gosh, when I tell them we're like 80, 90 people, they're like, oh, I miss that.

That was so much fun.

And I always ask them, when did it stop being fun?

And they're like, well, you know, the answers vary, 100, 150, 200, but it's all in that range.

And I hypothesize from talking to them, like, there's some point at which you switch from a high trust environment to a low trust environment.

And, you know, you know,

I picked 150 for our goal because that's like the Dunbar number, which is like this theoretical limit of how many real friends you can have.

And so I kind of think once you get above that number, it's impossible for everyone to be friends in your org, and you're almost inherently going to be a low trust environment.

And so I think it's interesting.

I see all the same stuff where it's like, oh, I let my employees work from home and like, they're not really working that hard and da, da, da.

Oh, that's because you have a low trust environment.

And I don't exactly know what creates high trust versus low trust.

I mean, I think it's a cultural thing, right?

I think it's something you could like, you know, I think it's a lot about like maybe how we lead and how we communicate and how we motivate folks.

But I do know you should just be aware of what environment you have.

And you're right.

If you have a low trust environment with your employees, one, maybe you should get curious about like, how did that happen?

And two, yeah, you might then need to get people back in the office because if you can't trust that they're going to work, you know, put in the work, right?

Incentive structures might need to be evaluated.

But I think we've been very fortunate in that, like, we have an amazing team that loves the work they do.

They're each given enough autonomy and trust.

I think high trust environments happen because when we hire people,

I tell our team, tell our execs, you should trust them by default.

If you didn't want to trust them by default, you shouldn't have hired them, but you should trust them by default.

You should give them room to run.

It's kind of like the Gamma example.

You shouldn't be prescriptive that the deck needs to look exactly like this.

Is it 80% what you thought it was, but 100% what it needed to be?

Then great.

Yes.

Right.

And I think that's an important factor.

80% of what you thought it was, 100% of what you needed it to be.

And I think when hiring people, one of the best pieces of advice I ever heard was: would you trust this person to feed your children?

In other words, if you got in an accident and you couldn't provide for your family, would you trust that these people could do it for you?

And if you can't say yes to that, then you have failed in the hiring process.

So,

I guess my next question is, as you built this high trust environment, which takes time and it takes personalities and there's very specific things, how quick are you on getting rid of someone who does not fit into that environment?

Our goal is usually 90 days.

You usually know by 45 to 60 days.

And then, you know, just out of an abundance of caution, like, I think you can go as long as 90 days.

You really can't go any longer than that.

But that's our goal, right?

I mean, I think

it's generally pretty clear.

The nice thing is once you have a high trust organism, the organism will reject any organs that don't seem to fit in with that.

And they themselves will, as long as you've got a good way to have listening posts. That's what gets hard, I think, as we get bigger.

It's like, how do people trust they can tell me, hey, this new executive we brought in is not the right DNA, sort of thing.

But the organism knows if you can find a way to observe it.

It's interesting that you do 45 days.

I'm much faster on that.

Yeah, we, you know, we're very quick.

I mean, my grandmother said it really well.

When you're dating someone, you will know within three weeks.

And if you don't know, you know.

And she's just bulletproof with that.

And I miss her greatly.

She's no longer with us.

But when it comes to hiring someone, normally within the first 48 hours, and we don't pull the trigger that quickly, but within the first 48 hours, you've got enough of an icky feeling.

You've got enough of an, okay, I don't know if I want a second date.

This might not be fun.

Okay.

So I love that you have a big heart and you have high empathy.

So that must be tough for you and your people.

Well, what I'd say actually is that number used to be lower.

But then every time we looked at it, we said,

anytime we find out it's not a fit in the first week, that is a real indictment of our hiring process.

A thousand percent.

And so I think now, now we're generally getting to things like, okay, we think our hiring process is pretty good, which means no one should be failing inside of three weeks, four weeks, right?

We shouldn't be able to tell.

It shouldn't be anything that crazy.

But you can't test for everything in the hiring process, right?

That's where I think, okay, even with the best hiring process, those issues will show up a month in.

That's when, oh, they were all on their best behavior in the hiring process and we got unlucky with our references and stuff like that.

Right.

Yeah, we normally give people tests.

We're like, hey, need you to do this, need you to do that.

And we kind of go through that process, like, hey, do these things.

So we still have people actually test what they need to do.

And so that helps us out with what we're doing.

So as we go through this and as things are changing as an organization and it's for you, as you had a level of success that you never thought you were going to have doing something you never thought you were going to do, what's next?

What's the next big thing that you're like, hey, I really want to accomplish this?

You know, I think one of my superpowers as an entrepreneur is like, I have like these built-in blinders sort of thing, right?

Where

I get really, I get so passionate about what I'm working on.

I actually think like one of my superpowers is just getting passionate about things.

I would say,

I always like to hire passionate people, because passionate people get passionate about anything.

They get passionate about plumbing.

It goes back to our, like,

transition-your-career conversation. You know, I think if you told me, hey, Rich, go be a plumber, I would get so excited about fittings and the right amounts

and stuff like that.

And I think right now it's like, there's just so much that like, it's the most fun time to build.

It's the most

volatile time to build, but it's also the most fun time to build.

I do, on a personal level, get really passionate about,

what I see happening in public discourse and what I'll hesitate to call politics.

I will admit, I met with another entrepreneur yesterday who told me he was running for city council.

And I think he expected me to be disappointed, or kind of confused by that.

No, I think that's amazing.

I was like, that's amazing.

I was like, not enough people of high character and good judgment go into politics, I think, because they judge it to be EV negative.

And it is EV negative.

That's not why you do it, right?

You do it after you've gotten so much from the society that you feel like you should give back.

And

I think there's a lot of stuff that I would love to do in that sphere in the future.

Because

I think our country could use some help.

I think it could use some high judgment people that are not out for themselves.

A thousand percent.

It's interesting because it's a similar conversation I had over the weekend.

We're talking about, hey, we've all been very blessed.

We've all been very successful.

Maybe it's time to give back and offset and maybe course correct some of the things that have been going on, not just for this administration, but for many, many,

many,

many administrations.

We're going back double digits.

You know, it's like, oh my gosh, we have to pivot this and it's time to kind of have these people take over and do something different.

So other than you running for president in the next 27 minutes, if someone wants to track you down and they want to learn more about you and they want to connect, because I'm just super grateful that you shared this stuff, what's the best way?

How do they get a hold of you?

How do they get a hold of Fathom?

What's the best idea?

Yeah, check out Fathom, Fathom.ai.

It's free to use.

Please give it a shot.

And then you can find me on the only social media that I use, which is LinkedIn.

So find me on the stodgiest of the social medias, LinkedIn.

And if you reach out there, I'll see it.

Gotcha.

I really appreciate you coming on.

Thank you so very much.

Charles, this is awesome.

Thanks for having me.

Absolutely.

All right, guys.

That wraps up our episode with Richard.

I want to thank him for coming on and sharing some insights on where things are going and the unforgiving truth of what's next with AI, and how it has two very specific paths.

And it's in our ability to dictate where that goes.

All right, guys.

I'll see you in the next one.