The Impact of AI, from Business Models to Cybersecurity, with Palo Alto Networks CEO Nikesh Arora

58m
Between the future of search, the biggest threats in cybersecurity, and the jobs and platforms of tomorrow, Nikesh Arora sees one common thread connecting and transforming them all—AI. Sarah Guo and Elad Gil sit down with Nikesh Arora, CEO of cybersecurity giant Palo Alto Networks and former Chief Business Officer of Google, to talk about a wide array of topics from agentic AI to leadership. Nikesh dives into the future of search, the disruptive potential of AI agents for existing business models, and how AI has both compressed the timeline for cyberattacks as well as fundamentally shifted defense strategies in cybersecurity. Plus, Nikesh shares his leadership philosophy, and why he’s so optimistic about AI.
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @nikesharora | @PaloAltoNtwks
Chapters:
00:00 – Nikesh Arora Introduction
00:39 – Nikesh on the Future of Search
04:46 – Shifting to an Agentic Model of Search
08:12 – AI-as-a-Service
16:55 – State of Enterprise Adoption
20:15 – Gen AI and Cybersecurity
27:35 – New Problems in Cybersecurity in the AI Age
29:53 – Deepfakes, Spear Phishing, and Other Attacks
32:56 – Expanding Products at Palo Alto
35:49 – AI Agents and Human Replaceability
44:28 – Nikesh’s Thoughts on Growth at Scale
46:52 – Nikesh’s Leadership Tips
51:14 – Nikesh on Ambition
54:18 – Nikesh’s Thoughts on AI
58:21 – Conclusion



Transcript

Speaker 2 Hi listeners, welcome back to No Priors. Today we're here with Nikesh Arora, the CEO of Palo Alto Networks.

Speaker 2 He joined Palo Alto in 2018, when it was the next-gen firewall player, and has since grown it to six to seven times the size, as a leader in platform security.

Speaker 2 Previously, he was the SVP and CBO of Google during its massive growth phase from 2004 to 2014. Welcome Nikesh.
Nikesh, thanks so much for being with us.

Speaker 1 My pleasure.

Speaker 2 I don't know where to start because I want to talk about AI. I want to talk about security.
I want to talk about leadership.

Speaker 2 I do think given your history growing Google as chief business officer, like we have to ask you, what do you think is the future of search and how threatened is it?

Speaker 1 Nothing like a nice slow lowball to welcome me to your show.
Gotta work up to it a little bit. And this guy was at Google too at that point, I'll warrant you.

Speaker 2 But I talked to him too much.

Speaker 1 What does he think? I think we should defer the question to you as the expert. Oh, look at that.
He doesn't want to put his foot in his mouth.

Speaker 1 We do all the hard work.

Speaker 1 I think

Speaker 1 the idea when search came about, I still remember going out there and trying to sell search to people.

Speaker 1 And it was the, oh my God, you mean I can just go with the internet, type something and I can get the answer.

Speaker 1 And we spent two decades trying to get all the information out there on the internet so it was easily accessible to people. And I think you saw the benefits.

Speaker 1 You saw the benefits of democratization of information. Farmers in India could get stuff and people get information.
I think now we're in an age, people are saying, great,

Speaker 1 don't give me all this stuff to sift through myself.

Speaker 1 Try and make sense of all of it for me because it's too much. And that's what you're seeing in today's generative AI models.
So

Speaker 1 I sort of, in my own words, I call that democratization of intelligence. All of us will have the basic intelligence, which every other person next to us has because we can kind of go figure it out.

Speaker 1 I don't have to hire the same people to solve the same problem for me the 10,000th time and pay the money because it's already been solved 9,999 times and the outcomes on the internet somewhere.

Speaker 1 So I think to the extent that Google has sharpened its sort of

Speaker 1 skills on putting all that information together, being able to synthesize it, understand it, being able to interpret my intention as an end user and try and present me the most likely outcome.

Speaker 1 I think that should translate well to the notion of generative AI being able to summarize the same things in a much more enhanced, ordered way for them.

Speaker 1 So I think from that perspective, will they have the ability to transition the current search product into a future product, which is basically, you know, call it what you want to call it, you know, ask me anything.

Speaker 1 It's funny, like, you know, when you worked at Google 15 years ago, Larry had that vision.

Speaker 1 He used to talk about getting to a point where you answer my question, answer my intent, as opposed to answer what I type. So

Speaker 1 he had the foresight to talk about that. He used to talk about AI.
So I think from a product perspective,

Speaker 1 they are in a good position to be able to transition the product to what the end users need.

Speaker 1 And you've seen that with Gemini, you see that with ChatGPT, you see that with other models, which are getting to the same place. Let's not underestimate the distribution power.

Speaker 1 There are two or three companies in the world which have distribution in the billions, whether it's Facebook with all their properties, Apple with their properties, or Google with theirs.

Speaker 1 So they have the distribution. They have the product chops.
They have the AI chops.

Speaker 1 I think the bigger question is: how does the business model transform from what it has been, with 10 blue links and ads being presented against them?

Speaker 1 And what's the new monetization, sort of, that will come as a result?

Speaker 1 Likely to be agents, or something else, in terms of starting to take action? Yeah, we can come to that too. But because he asked a search question, and search is an advertising revenue question, yes, the question is: how does the advertising revenue morph into some version of a consumption or transaction metric? Because that's the intent. When I go and say, what are the best blue pants in the world, I'm not just doing it for academic interest. I'm actually going to transact. So yes, maybe an agent could do this. Or, you know, the fastest flight to get to Rome, so the agent could do that.

Speaker 1 And we'll talk about agents in a second. I think that's much more disruptive than generative AI.
And to that extent, I think how the transition, the business model is going to be interesting.

Speaker 1 I don't think anyone knows what the business model is. But all I can say is that having been there and admired what they do, they do spend time getting the product adopted first.

Speaker 1 And eventually, when there is tremendous amounts of distribution for the product, you find a model that emerges.

Speaker 1 I remember how for the longest of years, nobody quite knew how YouTube was going to make money. And everybody was looking at Netflix as the one who were making money in streaming and YouTube wasn't.

Speaker 1 I think YouTube is a big-ass business now compared to most of the streaming players in the world. So I think they'll figure out how that model transforms.

Speaker 1 I do think the Agentic challenge is a much bigger challenge than the generative AI challenge. What sort of challenge do you think that is, or how do you think that's going to substantiate?

Speaker 1 Look, the idea that

Speaker 1 if you step back, we spent 30 or 40 years building products where we focused on UI, right? Product managers were effectively glorified UI managers.

Speaker 1 So we're trying to make sure that us common folks can interact with data and some sort of engineering algorithm behind it because we're not smart enough to talk to the engineering algorithm ourselves.

Speaker 1 So you go to Expedia, it shows a bunch of boxes we're supposed to fill in as humans, and it gives the answer to us without having us write any search query, SQL, XQL, whatever you want, to go find the information.

Speaker 1 Now, I think generative AI has made that easier. We can talk to the UI in natural language to some degree.
It can generate outcomes on the fly.

Speaker 1 So I think to that extent, I think all of product developers are going to change with generative AI and this natural language capability.

Speaker 1 Now, if you take that a step further and say, I actually don't need to come interact with the UI, my agent can go do the task for me. Now step back and think of that.

Speaker 1 Like, you know, 50% of the applications in the world are some sort of transaction fulfillment applications.

Speaker 1 If there are five, I'm picking a number, I think it's more than 5 million, but if there are 5 million apps on the iPhone or the Android phone, half of them are trying to get you and I to interact with the engineering algorithm and give them data.

Speaker 1 If all those become agentic actions of an über-agent, then the question becomes: who sits on top? Yeah. Who's the Uber?

Speaker 1 And that's also a lot of Google's traditional revenue, at least on the advertising side, it's direct response ads. It's not the branding ads.
So it's part of that business model in some sense.

Speaker 1 Yes, it is.

Speaker 1 Most direct response is lead gen, right? In the marketing speak, it's lead gen, which eventually results in a transaction or fulfillment of information request.

Speaker 1 So at some level, it is a precursor to a transaction. People pay a lot more for the consummated transaction than for the lead.

Speaker 1 So maybe the business model transition is: stop giving me leads, give me consummated transactions through agents.

Speaker 1 Maybe I'll get paid more to buy you the airline ticket directly than have you be able to find an airline ticket provider, which is advertising versus transactions.

Speaker 1 I think the opportunity is there from a business model perspective, but I think before we go back to that stable world where these business models have transformed, we're going to go through a very disruptive phase when a lot of these apps will be rewritten.

Speaker 1 And some of the

Speaker 1 apps, we'll have to question: are they direct consumer apps, or are they APIs, perhaps MCP client-server interactions (we don't call them APIs anymore), which will actually consummate that transaction?

Speaker 1 Who do you think is most vulnerable? Wow, you guys don't ask, like, simple questions.

Speaker 1 You guys go for the jugular on every one of them.

Speaker 1 That's why you're such a good investor. Yeah,

Speaker 1 we're looking forward to your insights on this stuff. So, no, I know.
Look, I think the most vulnerable people are where there is poor loyalty to the UI.

Speaker 1 If it's just a sort of thin front end to, effectively, a transaction processing system in the back... Take your ticket to India (my wife's just coming back). Do you care who you get your ticket to India from? Which airline, which

Speaker 1 travel entity, or which travel UI, perhaps? You don't. You just want a ticket that allows you to get on a plane and get to the other side.
Do you care who books your reservation for a restaurant?

Speaker 1 So, at some level, where at the end you're effectively a transaction processing interface, and you build some degree of brand loyalty only through a whole bunch of sort of non-product experiences, I think those become vulnerable.

Speaker 2 There are, like, several more controversial ideas that OpenAI is trying to prove out. One now is the scalability of consumer subscription.
And another, I think, is... How does that work?

Speaker 2 Well, it's just like, can you, I think it's actually quite surprising how many people are paying subs for this intelligence today.

Speaker 1 Yes. Yes.

Speaker 2 I think the other is actually, and I want to talk about the B2B side, is that you should get paid for thinking harder and solving harder tasks in like a scalable way. This is what they want.

Speaker 1 They want to sell like work, right, to businesses.

Speaker 1 What do you mean by work to businesses?

Speaker 2 I think there's a view that the traditional way that you sell most enterprise software or products is like, it's a seat unit, or it's some sort of like volume unit like an appliance or something or a coverage.

Speaker 2 Yes.

Speaker 1 Throughput-based, yes.

Speaker 2 And here, throughput traffic, et cetera.

Speaker 2 Here, the view would be like, well, if I solve a really hard problem for you, I have a unit of work. It's essentially translating to a unit of compute or sort of charging for value.

Speaker 2 And so I think there's a strong belief in some labs that they should be able to charge for that. How do you react to either of those business model ideas?

Speaker 2 Because you're now, at Palo Alto, in the business of selling direct value versus ads.

Speaker 1 Yeah. Okay.
So are we pivoting from consumer to business now?

Speaker 1 Yeah, yeah, because we're pivoting away from the subscription question; we went off on the whole idea of whether people will pay for a subscription. Look, I think that is a bigger leap.

Speaker 1 The bigger leap is in the consumer world, we are much more tolerant of

Speaker 1 inaccurate answers sometimes, or not perfect answers. How many times do you go to a search, even today, where you're looking for something and you don't find the right answer?

Speaker 1 And you say, well, let me look again. Oh, I must have asked the question wrong.
Let me ask the question again.

Speaker 1 You probably do that in your prompts and in sort of chat gpt or gemini or pick your favorite you know llm but in the enterprise world there is not that tolerance for an inaccurate outcome especially if you get in the agentic world say oops sorry i made a mistake i meant you to turn off that server not this one oh you know i blew up a whole bunch of my enterprise because you made the wrong give me the wrong answer dear you know friend LLM.

Speaker 1 So

Speaker 1 I don't think we're there yet, to be fair.

Speaker 1 None of us are giving autonomy to any form of LLMs to create any agentic task or do any work for me. We're all using them with humans in the loop for suggestions.

Speaker 1 And we're still sort of using the use cases where we are okay with multiple answers, where we're actually having the humans in the loop. It's almost like a glorified assistant, or a better assistant, somebody who's sort of a knowledge worker summarizing a whole bunch of information that I may not be able to get

Speaker 1 from the outside. I don't think we're getting into precision tasks, and we're not getting into precision actions yet.

Speaker 1 So look, if you can generate precision tasks with accuracy, precision actions with accuracy, yeah, sure, maybe you can charge me for a unit of work.

Speaker 1 But I almost feel in the enterprise, we're going to go back to some version of a redefined, for lack of a better word, let's call it AI-as-a-service instead of SaaS.

Speaker 1 Perhaps AIS.

Speaker 1 Right. Doesn't sound as rich a term.
Yeah, yeah. SaaS wasn't a rich term either, but sure.
AIS sounds even worse.

Speaker 1 But anyway, it's like, well, imagine that, because you literally have to design the enterprise's workflow end to end and see how I can do that from an AI-first perspective, right?

Speaker 1 Perhaps like the likes of Cursor or the vibe-coding apps today. They're trying to at least look at a part of the developer's workflow and say, here's what you do.

Speaker 1 Here's a bunch of tools that can help you through that journey, dear human.

Speaker 1 And I'm going to see based on how you use it over time, I'm going to get smarter and smarter and be able to take over more and more of what you need to get done. So I think it's kind of like

Speaker 1 these are AI apps under training.

Speaker 1 When they grow up, they're going to take over more and more of our tasks and allow the repetitive tasks to go away, so you can apply yourself to new, unique problems.

Speaker 1 But I think that's a long time coming.

Speaker 1 If you look at a lot of the platform shifts that have happened in the past, so for example, with Microsoft OS, they eventually bundled on top the Office Suite, right?

Speaker 1 Those were applications that were running on top. They ended up rebuilding or buying them, bundling, and cross-selling.

Speaker 1 If you look at Google as a platform, it forward integrated into the biggest areas of vertical search. Yes.

Speaker 1 Travel and local and a variety of other areas.

Speaker 1 But they're doing that in Workspace too, right? You guys begin to see Gemini show up in Workspace.

Speaker 1 yeah, yeah, exactly. So actually, coding is a great example, right? OpenAI tried to buy Windsurf.
Anthropic has Claude. People are trying to forward integrate there.

Speaker 1 Claude, or Anthropic, also mentioned they want to forward integrate into financial tooling right now.

Speaker 1 Do you view this world as, basically, the platforms, the big foundation lab companies, being likely to try and move into the biggest business verticals directly over time? That's an interesting debate.

Speaker 1 So let's go back and see why.

Speaker 1 Look, the reason you're seeing developers as the first port of call is because they're the most likely to experiment and to be able to work with half-baked outcomes.

Speaker 1 Like, you give me 75% of the code that I need, I can parse through it and figure out the remaining 25%, because that's what I do for a living.

Speaker 1 It's much harder to do it in other professions where we're not fully sort of in tune with all the guts of what we're doing. So I think that's an interesting place to start.

Speaker 1 And over time, that workflow will go out into testing and other parts of the software developer lifecycle. So I think that's kind of interesting.

Speaker 1 I've had this conversation specifically with some of the people who are driving some of the larger LLM businesses.

Speaker 1 And, you know, I remember the very first phone call I made to Thomas, and we talked about small models and large models.

Speaker 1 And he gave me good advice, like, don't, don't, don't chase a cybersecurity model or don't chase a small model because these large models become so much smarter that the smaller models will not be able to be as smart as these things.

Speaker 1 So I do believe... He was a sage.
Yeah. Yeah.
Yeah. That was very good.

Speaker 1 And then, so we decided not to. I remember, in the very early days, one of our competitors announced that they're going to work on a cybersecurity LLM.

Speaker 1 Now, if I go back and look, I'm pretty sure that was a great announcement. I don't think anything's come out of it.
So we decided not to chase that because it didn't seem to make sense.

Speaker 1 That was pretty sage.

Speaker 1 I do believe that after a point in time, and I think you and I talked about this right before doing this, you were giving me great insight that the models are converging, their reasoning capabilities are converging.

Speaker 1 So if you believe all these models are going to be extremely smart, but somewhat similar in capability.

Speaker 1 The question is, and I always say this to people: look, in the enterprise world, you know, getting the smartest model in the world is like hiring the smartest PhD from the best university you can find in the world.

Speaker 1 For that PhD to be useful at Palo Alto, we still have to teach them our ways, right? Because they're not going to be useful the minute they walk in the door. They have to understand our context.

Speaker 1 They have to understand how we do things. They have to understand how to take our problem and solve the problem.
And whether you call that, you know, pharmaceuticals, you call that genetics.

Speaker 1 You have to make the model get smart about genetics, very smart model, but you have to give a lot more training data in a particular domain for it to become really smart in that domain.

Speaker 1 So I think the interesting opportunity will be: how can we take these models and apply them to domains where we have proprietary data? Now, the developer use case is a

Speaker 1 generic use case. There's enough code out there in the public domain that you can actually get smart, 90% smart in coding.

Speaker 1 There's not enough genetic data out there in the public domain, or pharmaceutical drug discovery data, or proprietary cybersecurity data out there in the public domain.

Speaker 1 So the question becomes, how do we take these models? How do we take that brain, apply that to a domain, make it really smart?

Speaker 1 I think the challenge right now is that if you're building a wrapper, effectively, as an AI-as-a-service company, and all your wrapper does is enhance the capabilities of a model and put some guardrails around it, then your biggest risk is that the model slowly expands into those capabilities and you're no longer in business.

Speaker 1 Now, the difference between those companies and others who might survive is: if you look at every SaaS company, it's actually the packaging of a workflow, the system of record,

Speaker 1 right?

Speaker 1 My HR system is a connected workflow. Every employee knows how to use that workflow.
And then, over time, it locks down a certain table and says, this is your system of record.

Speaker 1 This is what Vikesh gets paid. This is when he took holidays.
This is what his equity looks like. This is when he invests.
It's less about the app. It's more about that data and the system of record.

Speaker 1 Eventually, if these apps are to live in the long term, they have to marry the capabilities of AI with what effectively is the enterprise system of record, right?

Speaker 1 A model is not going to become my system of record. It'll still be some proprietary locked database somewhere, or some data tables, take your pick.

Speaker 1 And maybe the interaction mechanism is no longer a workflow, it's some sort of agent or some sort of AI interface, which allows that system of record to be maintained and created.

Speaker 1 But there still has to be some rules.

Speaker 1 So I think that's kind of where the opportunity is, as compared to just putting a wrapper around a generic process, saying, today, I'm going to help you analyze legal contracts. Well, guess what?

Speaker 1 You know, I ran my legal contracts through ChatGPT. It works just fine.

Speaker 2 Where are we actually, given your visibility into enterprises, on actual adoption and value and use cases?

Speaker 1 So I think the use cases where

Speaker 1 there are two current major use cases, right? One, let's call it generalized, or perhaps cross-enterprise consistent activities, right? Generalized. So do I have a legal team?

Speaker 1 Yes, I do have a legal team. Does every enterprise have a legal team? Yes.
Do they have any particular proprietary knowledge compared to

Speaker 1 Palo Alto's in particular? Unlikely. It's more, I need them for legal advice, not for Palo Alto advice.
So in that use case, yes. Could we use a, you know, Harvey equivalent or whatever those are? Sure.

Speaker 1 It enhances their productivity. They get their work done faster.
Could I possibly, in the future, use some sort of AI-based interpretive application which helps me process my accounts payable faster, codify them? Sure. So I could.

Speaker 1 So there's a whole bunch of repetitive, generic tasks across the enterprise which I'm pretty sure could be done by some version of an AI wrapper around an LLM, with some particular context of my data. Sure.

Speaker 1 So to that extent, I think we're all experimenting with those things. But my caution to my team is don't try and build them.
Somebody's going to build them for all of us.

Speaker 1 It'd be much cheaper to rent them by some metric, perhaps per unit of work or per seat, maybe an agentic seat. I don't know, but there'll be some mechanism that they'll charge us on.

Speaker 1 But we don't have to build it, because it's going to cost me a lot more to build my own, you know, accounts payable smart AI system compared to what I can buy off the shelf.

Speaker 1 Is that what your customers believe now? Like all the enterprises, the largest ones? I think many of them do because this is not an easy problem to solve. Yeah.

Speaker 1 Like, first of all, finding the skill set, finding people who understand this, you know, living in this world of constantly evolving models. Where, by the way, you know this better than me: you can't swap one model out, take the next version, and stick it in,

Speaker 1 and have it work just the same way. This is like getting a new PhD and training them all over again, saying, let me explain how we work here.

Speaker 1 So, from that perspective, I think most rational players in the enterprise space, as in customers, would want somebody who's the expert to build it and for us to have some version of adaptability or adaptation to it and make sure it's secure.

Speaker 1 Like, none of us... no enterprise customer wants their data to be floating in a multi-tenant environment, saying, oh my God, my data is training other people's models.

Speaker 1 Now, to the extent it's accounts payable, have a good time, right? You know, you can understand how we codify stuff, have a good time.

Speaker 1 But to the extent it's proprietary data, when I'm doing FDA trials, I don't want my FDA trial data training somebody else's model.

Speaker 1 But I think they will err on the side of caution and say, I want my instance to be secured.

Speaker 1 So I think we spend half our time before we look at any of these packaged AIS apps talking to them, understanding the security.

Speaker 1 I don't want my source code to be training somebody else's coding

Speaker 1 app. So to that extent, there's a whole bunch of conversation.
Is it ring-fenced? Is my data mine? Are you using it to train your model? Are you using it to train your system?

Speaker 1 And then we spend a lot of time testing it to make sure that they're not doing it. So a lot of time and effort is spent there.
Not every enterprise is as discerning.

Speaker 1 We have to be, because we're in the security business.

Speaker 1 But I think you'll see some version of stability and acceptance there, that people will take these generic, sort of, AI-as-a-service systems of record.

Speaker 1 Sorry, AI-as-a-service apps, which are helping humans get better.

Speaker 1 How do you think about that in the context of applications that you think make the most sense for cybersecurity? So if I look at founder activity, there's more and more activity around SOC.

Speaker 1 There's a lot of activity around pen testing.

Speaker 1 There's activity around a lot of areas that are very human intensive, in some cases, repetitive tasks, which make a lot of sense for this form of generative AI to take over.

Speaker 1 And then there are people incorporating AI into existing products, like Socket, sort of a Snyk-like competitor, or other aspects of code security.

Speaker 1 I'm just sort of curious from your vantage point, what do you think are the most interesting areas of cyber AI?

Speaker 1 So I think if you step back and think about cybersecurity, right? There's a world of cybersecurity which operates and says, this is the known bad. I found it.
Let me stop it. Sure.

Speaker 1 That's a good thing. I found a bad actor.
Let me stop it. I found malware.
Let me stop it.

Speaker 1 Now, to be able to stop

Speaker 1 bad things that are getting into your network, you have to be deployed at every sensor. So the first thing a cybersecurity company says is, look, I can't stop what I don't see.

Speaker 1 So I have to be present at every edge, every endpoint, every sensor of your organization. So five years ago, we made a conscious choice.

Speaker 1 Our strategy should be to get to be in as many sensor places or control points as we can.

Speaker 1 So we did that. You know, we have a SASE product, we have an endpoint product.
So that's good.

Speaker 1 I think the sensor business will have to stay, because if you're not there, you can't find anything. It doesn't matter, AI or no AI.
I got to be able to be there to find it.

Speaker 1 And then sensors are pretty good at stopping the known bad. If it's a known bad, I stop it.
Well, most cybersecurity breaches happen because of the unknown bad, because we stop all the known bads.

Speaker 1 And every time you build a new company, it's saying, well, let me go.

Speaker 1 So everything you're talking about is trying to find the unknown bad, or a vulnerability which has left the door open so bad guys can get in. That's a Socket or a Snyk-like competitor.

Speaker 1 That's what you're trying to do. Now, in cybersecurity, there are companies which are the sensor.
Now, the sensor allows you to do two things.

Speaker 1 One, stop the known bad, but also collect valuable data, and analyze that data to understand what the unknown bad may be. So we get benefit from both.

Speaker 1 We're at the sensor stopping the known bad, and we also collect a lot of data. Traditionally, in cybersecurity, people have been, I'll call it, feature-integrated from end to end.

Speaker 1 Oh, let me sit at the endpoint. I'm going to trap all the data around a particular topic.

Speaker 1 I'm going to take that data into the cloud, I'm going to analyze the data, and then tell you: oh my God, I found something suspicious, here are five suspicious things you should investigate. But the problem is, because I don't have context, when the data passed my sensor, it's gone.

Speaker 1 So take a simple example. I send you an email with a phishing link, where you click, and, you know, when you go to a website, I steal your credentials. Now, I'm an email security company.

Speaker 1 I stop the known bad. If I see a bad email, I know how to stop it. But I don't know this is a bad email.

Speaker 1 And then Elad clicks on it. Now you've gone from an email product to somewhere else in the company, gone on the internet, through a firewall. I have no idea what you did. So all I can say is: that link looks suspicious, he clicked on it, maybe you want to investigate. Now, no customer wants a list of 5,000 things they have to investigate. So people are busy building these agents, saying, well, let me build an agent to help you investigate this, investigate that. But I think the better answer is: if I had all the context of the enterprise, I can go mine that data to find what actually happened.

Speaker 1 So we're taking the different pack. We're saying, let's see if we can consolidate all the data in the enterprise.

Speaker 1 If we can, we can run a whole bunch of machine learning algorithms and do all these activities on top of that.
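The cross-correlation idea described here, joining one sensor's events against another's instead of alerting on each in isolation, can be sketched in a toy form. This is purely illustrative, not Palo Alto's actual pipeline: the event fields, the five-minute window, and the `correlate` function are all assumptions.

```python
from datetime import datetime, timedelta

# Toy event records from three separate sensors (all fields hypothetical).
email_events = [
    {"user": "elad", "url": "http://evil.example", "ts": datetime(2024, 1, 1, 9, 0)},
]
endpoint_events = [
    {"user": "elad", "url": "http://evil.example", "action": "clicked",
     "ts": datetime(2024, 1, 1, 9, 2)},
]
firewall_events = [
    {"user": "elad", "dest": "evil.example", "ts": datetime(2024, 1, 1, 9, 3)},
]

def correlate(window=timedelta(minutes=5)):
    """Link a suspicious email to a click and an outbound connection.

    With all three sensors' data in one place, a lone 'suspicious link'
    alert becomes a confirmed incident instead of one of 5,000 leads.
    """
    incidents = []
    for e in email_events:
        # Clicks on the same URL by the same user, shortly after delivery.
        clicks = [c for c in endpoint_events
                  if c["user"] == e["user"] and c["url"] == e["url"]
                  and timedelta(0) <= c["ts"] - e["ts"] <= window]
        # Outbound connections to the linked host in the same window.
        conns = [f for f in firewall_events
                 if f["user"] == e["user"] and f["dest"] in e["url"]
                 and timedelta(0) <= f["ts"] - e["ts"] <= window]
        if clicks and conns:
            incidents.append({"user": e["user"], "url": e["url"],
                              "evidence": len(clicks) + len(conns)})
    return incidents
```

Run against the toy data above, the email, the click, and the outbound connection collapse into a single incident; with only the email sensor's view, each event would have surfaced (or been missed) on its own.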

Speaker 1 The current startups you mentioned are like the startups that are trying to do what AI wrappers are trying to do on LLMs.

Speaker 1 And over time, we're going to get better and better, where we're going to squeeze out their capability over time, so that you wouldn't need them.

Speaker 1 Yeah, yeah. And there's a few different sets of them because there's also the ones that are just trying to automate human services that are associated with the security world today.

Speaker 1 And so, pen testing would be one example where you often hire external consultants to help with that.

Speaker 1 SOC may be another one where effectively you have this, you know, you're basically looking at the operating center across all the different events that are happening and trying to automate that.

Speaker 1 So, it's a little bit more of a change. The reason you're trying to automate the SOC from the outside in is because you're not running 10,000 machines at the ingestion point. Why should I first collect all the data and analyze it after?

Speaker 1 I should analyze the data and cross-correlate at ingestion to make sure I understand the bad things, put them up there, and then build agents to investigate the bad things.

Speaker 1 Yeah, yeah.

Speaker 2 Well, this is my question, though. The environment in a large enterprise today is still very fragmented, right?

Speaker 2 They're just doing this, you know, pass-the-hat trick that you're describing with all these different tools. Hopefully very dominated by yours.
Yes. But as Elad mentioned.

Speaker 1 Chosen by customers. Yeah.
Dominant is a bad word.

Speaker 1 Once you've worked at a large company, you learn "chosen."

Speaker 2 Gently chosen in a highly competitive environment. That's right.

Speaker 1 Yes. Of course.

Speaker 2 But,

Speaker 2 but, like, you know, I was talking to a friend who runs security at a large financial, and he's like, we have 100.

Speaker 1 So he wants some cybersecurity.

Speaker 2 He's definitely got some power if he wants more.

Speaker 2 But he's got 118 identity tools, right?

Speaker 1 Wow. I didn't think there were 118.
There's an identity crisis.

Speaker 1 There it is.

Speaker 1 There we go. Got to try.
So he bought an identity company to solve that.

Speaker 1 You're all right.

Speaker 2 Maybe the answer is just make it all CyberArk. But I think his view is, it's just untenable for me to consolidate all of this in the near term.

Speaker 2 And someone attaches something after, which does the human consolidation. Yes.
Like the SOAR automation thing, right?

Speaker 1 The interesting part is, we're the youngest industry in technology, because cybersecurity did not exist before connectivity came along.

Speaker 1 Nobody had to, like, you know, you were in a mainframe, you sat in the office, you connected the mainframe to a pipe into the backend, and you're off to the races.

Speaker 1 So nobody actually was going to come and intercept your traffic. It wasn't scary before the web.
That's right. So now the web and applications out there made it serious.
It's a 25-year-old industry.

Speaker 1 So every time a bad problem showed up, somebody built a point solution to solve that problem.

Speaker 1 So it is a fragmented architecture. It's a fragmented environment.
There's a lot of data being used and analyzed multiple times by multiple vendors because they were needed for that point in time.

Speaker 1 But like any industry, that capability commoditizes over time. Like there's no genius in building a firewall.
Every firewall within plus or minus 10% does the same thing, right?

Speaker 1 Some things do it better. The question is, what do you add onto it? What capabilities can you build beyond that?

Speaker 1 So if you believe that, sure, in essence, 80% of the capabilities are going to be capabilities that even most companies still don't have, and they can be delivered in one platform.

Speaker 1 On the other 20%, we can put add-ons, but over time, you know, we will eventually come and gobble those up because there'll be a new 20% though.

Speaker 1 Until one year ago, nobody was talking about AI security, right?

Speaker 1 Now there are probably more companies than you can count who all show up at Elad's house and say, please give me some money, I'm starting a cybersecurity company for AI.

Speaker 1 I do prompt injection. I do malware checking on models.
I do model data poisoning, agentic attacks. None of this existed.

Speaker 1 So of course, you'll find people building features to protect against those, because we were too busy still fixing the problems of the last five years, where the mass-market problems are.

Speaker 1 So I think over time, what you'll see is the platform approach is going to sort of eat into this feature approach that's happening.

Speaker 1 And there are companies who cobble code together out there as well. So 118 ID vendors sounds like a lot.
I didn't know there were 118 vendors, to be honest.

Speaker 1 He might have meant cybersecurity vendors, which I can believe, which is also a lot.

Speaker 2 What new problems from AI do you actually pay attention to? Which ones do you say, this is going to be a mass-market problem?

Speaker 1 Look, I think, if you back up, we'd believe our own rhetoric.

Speaker 1 If you believe your own rhetoric, then I, as a bad actor, should be able to unleash agents against the enterprise, against every aspect of it, and figure out where the breachable parts are, where the holes are, in a matter of minutes, or
less than an hour.

Speaker 1 And I should be able to point my attack towards that vector. I should be able to run simulations on how I should attack this thing.
And I should be able to get in and exfiltrate data. Now,

Speaker 1 when I started seven years ago, the average time to identify a target, get through it, and exfiltrate data was in the three to four day timeframe.

Speaker 1 The fastest we see

Speaker 1 right now is 23 minutes. So if the bad actor can get in an hour and exfiltrate data or shut down your endpoints with ransomware in under an hour,

Speaker 1 then by physics, your response time has to be less than an hour. The average response time is still in days.

Speaker 1 So from that perspective, the biggest threat that AI brings is that it continues to compress the timelines to be able to come, you know, either shut down your business, cause a compromise, cause ransomware, cause economic disruption.

Speaker 1 If that's what it is, I think the pressure just went up higher on our customers to get their

Speaker 1 infrastructure in order. So

Speaker 1 that's the risk and the opportunity.

Speaker 2 Yeah, I think Elad mentioned pen testing, which hasn't traditionally been like a very strategic part of the security landscape.

Speaker 1 Yeah, pen testing is just knocking
on every part of your defense. I think half the companies don't do pen testing because they're scared of what they'll find.

Speaker 2 Yeah, they're doing some minimum compliance level. But I think from a technology perspective, to your point, what pen testing is trying to do is attack the surface area.

Speaker 2 There are companies like RunSybil now that do this. They can do it continuously, in the 23 minutes you described.

Speaker 1 And I'm like, that's exactly what an attack is. We run 24 by 7 by 365 at Palo Alto.
We don't hire third-party people.

Speaker 1 So what you're talking about as a company, we run that as a default because that's our existence. We get compromised, we get breached, we have a problem.

Speaker 2 Yeah. You mentioned email.
I think it's like a well-known issue that a lot of the breaches, they happen because of social engineering, because of email, because, you know.

Speaker 1 Credential takeover. 89% of the attacks happen because of credential theft. Right.
Somebody becomes you or me.
Somebody becomes you or me.

Speaker 2 So people like us are not getting any smarter. And now you have models.
Come on.

Speaker 1 Ah, you try

Speaker 2 1% a day, right?

Speaker 1 Atomic capital.

Speaker 2 But

Speaker 2 now, how concerned are you about like deep fakes and generated spear phishing and voice attacks and all that stuff?

Speaker 1 So to the extent they enable the act of social engineering, yes, those are concerning, because I think most forms of two-factor authentication are going to be out the window.

Speaker 1 I still won't say which bank it is, but when I call them, they say, oh, can you please confirm your identity?

Speaker 1 And they ask me three arcane questions, which I'm pretty sure ChatGPT or Gemini will be able to answer in sub-seconds because they're only scouring the web to find public information about me and ask me questions.

Speaker 1 So, I think all those forms of authenticating who you are are getting easier and easier to compromise. So the question becomes, the problem you have to figure out is,

Speaker 1 you can solve it their way or our way, as in the way they're looking at it or the way we're looking at it.

Speaker 1 At the end of the day, every one of these social engineering attacks, credential takeovers eventually initiates some bad activity in the enterprise.

Speaker 1 And the bad activity in the enterprise often takes on the form of what I will call anomalous behavior.

Speaker 1 Right. Suddenly, Sarah decided to exfiltrate all the data in Elad's company, even though she used to do email with him every day.

Speaker 1 Today, suddenly she's logged in and she's downloading everything onto her laptop. This actually happened last week.
Thank you.

Speaker 1 Great deal flow. Is there nothing suspicious in that? Yeah.
That sounds pretty suspicious.

Speaker 1 So I can spend my time trying to make sure that nobody can take over Sarah's identity, or I can make sure that Sarah doesn't act anomalously, and if she does, then I throw in a block at that point in time.

Speaker 1 So I'm looking at life a different way. That's why we're buying this identity company. Because identity companies today say, oh, I checked you in at the door.
You're fine.

Speaker 1 And now you can roam anywhere in my entire enterprise and do whatever you want because you were checked in with the door.

Speaker 1 That doesn't work anymore. Now, today, compute and data and AI are going to allow us to analyze all the anomalous patterns.
And then I can say, that's pretty weird.

Speaker 1 She's never done this in the last seven years. Why is she doing this now? Does she have the rights to do it? And I can do it just in time, right?

Speaker 1 I can stop you from accessing this stuff. So you have to change the name of the game; you can't do it the same way. So the whole idea of us buying an identity company is to think about life saying: stop giving people persistent rights. Give them just-in-time rights. Give them rights that are analyzed for anomalous behavior. In fact, what I want to do is, I actually don't need you to give me a second-factor authentication. I can see the way you type.

Speaker 1 And if your identity starts typing differently, I'll block you.
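[Editor's note] The behavior-baseline gating Nikesh describes, allowing what a user routinely does and blocking what they have never done, can be sketched as a toy. This is an illustrative sketch only, not Palo Alto's implementation; the user name, action labels, and threshold are hypothetical:

```python
from collections import defaultdict

class AnomalyGate:
    """Toy just-in-time access gate: permit actions a user performs
    routinely, block anything outside their observed baseline."""

    def __init__(self, min_observations=3):
        # Per-user counts of each action seen during normal operation.
        self.history = defaultdict(lambda: defaultdict(int))
        self.min_observations = min_observations

    def observe(self, user, action):
        # Record routine behavior to build the per-user baseline.
        self.history[user][action] += 1

    def allow(self, user, action):
        # Just-in-time decision: only permit actions this user has
        # performed at least `min_observations` times before.
        return self.history[user][action] >= self.min_observations

gate = AnomalyGate()
for _ in range(5):
    gate.observe("sarah", "send_email")

print(gate.allow("sarah", "send_email"))     # routine behavior: allowed
print(gate.allow("sarah", "bulk_download"))  # never seen before: blocked
```

Real systems score deviation statistically rather than with a hard count, but the shape is the same: rights are checked against observed behavior at the moment of the request, not granted persistently at login.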

Speaker 2 So one of the things you've done incredibly well at Palo Alto over time is really starting with a core set of products and then expanding, and really providing both the platform and the add-ons that you mentioned.

Speaker 2 Was that something that you came into the company knowing that you wanted to do? Is that something that you ended up adopting over time? A little bit curious how you thought about that puzzle.

Speaker 1 Well, Elad, you and I have worked together before, and I remember you at Google, a smart young man, and that hasn't changed.

Speaker 1 I came to Palo Alto and I just analyzed the business problem, you know, the product. So, you know, I came with two things.
One, I understand business.

Speaker 1 Two, Larry told me many years ago that if a technology company loses sight of the product, it decimates over time. And that's been true across technology.
You can take your pick across the

Speaker 1 tons of companies which haven't made it. The business problem in enterprise is eventually, if you look at an enterprise company with less than a billion dollars in revenue,

Speaker 1 50 to 65% of the cost is cost of sales, marketing, and customer support,

Speaker 1 which leaves no room for margin. If you look at the largest enterprise companies, that number goes to 30%.

Speaker 1 So actually, it's all about taking that 70%, bringing it down to 30. Because R&D and G&A don't change a lot past a billion, to 10 billion or 100 billion. You still maintain 12 to 16% of R&D cost.

Speaker 1 And you still maintain, if you're efficient like some of the large players in the market, 4% G&A, or you're at 6% or 8%. So your maximum leverage comes from sales and marketing and customer support.
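[Editor's note] The leverage math he is walking through can be made concrete with a toy P&L. The 75% gross margin and the exact opex percentages below are illustrative assumptions within the ranges he quotes, not Palo Alto figures:

```python
def operating_margin(sales_pct, rnd_pct, gna_pct, gross_margin=0.75):
    """Toy software P&L: an assumed gross margin minus the three opex
    lines, with everything expressed as a fraction of revenue."""
    return gross_margin - sales_pct - rnd_pct - gna_pct

# Sub-$1B company: sales/marketing/support near 60% of revenue.
small = operating_margin(sales_pct=0.60, rnd_pct=0.16, gna_pct=0.08)

# At-scale company: the same lines compressed to roughly 30/12/4.
large = operating_margin(sales_pct=0.30, rnd_pct=0.12, gna_pct=0.04)

print(f"sub-$1B operating margin: {small:.0%}")
print(f"at-scale operating margin: {large:.0%}")
```

R&D and G&A barely move between the two cases; nearly the whole margin swing comes from the sales-and-support line, which is his point about earning trust once and expanding within the customer.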

Speaker 1 And you realize, well, what is that? That is the ability to...

Speaker 1 convince one customer that you're really good at what you do and be able to expand in that customer's environment with that trust that you can do more and more things for them.

Speaker 1 So that's the insight I came in with, saying, why is it that Sarah's friend has 118 vendors in their infrastructure? Because each of them gets vetted, POC'd.
This is security.

Speaker 1 It's not like buying some like random thing on the side. There's a big procurement process.

Speaker 1 And there's a testing process, right? Because you could be a bad security actor, and it could take my whole enterprise down.

Speaker 1 So if you go through that entire process of validation, verification and trust, why shouldn't I take that right that the customer has given me, earn their trust and expand my capabilities across the platform?

Speaker 1 So that's the insight we came with and realized, you know, we walk in, we just sell firewalls. Hey, you want a firewall? No, I'm sorry.
I'm good for the next five years. Oops.

Speaker 1 I had a sales guy who was focused on the account. That was his target.
He spent three months getting a meeting, going through the process. Now he doesn't have anything to sell them.

Speaker 1 So I'm like, hey, you don't want a firewall? You want this? You don't want this? You want this? So eventually, enterprise salespeople have to have enough
of an offering set that allows them to be able to present something to the customer. Good news in cybersecurity.

Speaker 1 There's always some transformation going on because some technology is coming to its end of life form and the company hasn't innovated and you're off to the next set of companies.

Speaker 1 So that's the insight we came with.

Speaker 2 Yeah, it's a great insight.

Speaker 2 And if you actually look at the margin structure for some of these things that you mentioned, sales, customer success, et cetera, or support, a lot of those things now have AI apps that are making people dramatically more efficient.

Speaker 2 Do you view the future of those sorts of functions as being very AI-enabled and AI-heavy? How do you think about that transformation, across companies like Sierra, Decagon, Rox, and others?

Speaker 1 Yeah. I'll answer the first half of the question.
The second half is your domain and Sarah's domain.

Speaker 1 I'm going to leave you to answer that question for him because I don't know the answer to the other companies.

Speaker 1 But I think, look, if you fundamentally look at it and look at organizational efficiency, perhaps that's where people are scared of AI-based efficiency.

Speaker 1 I seriously doubt that an AI agent will convince the CIO or CISO faster than my human that goes and hangs out with him and shows him the product.

Speaker 1 So I think my sales teams are very happy in their existence. They don't believe they're imminently threatened by AI.
So that's a good thing.

Speaker 1 Interestingly, on the product development side, I think people will be.

Speaker 2 Just to come back to that, sorry to interrupt you, but there's other forms of sales enablement, you know, a deck per customer that you customize, or sort of SDR work.

Speaker 1 Yeah, those are marginal.
That's process efficiency, which will come through. But you know, guess what?
That's process efficiency, which will come through it. But you know, guess what?

Speaker 1 Before we get there, I'm probably going to need 15 AI-savvy people to build a workflow, because an AI model is not just going to spit out a Palo Alto SASE proposal by itself.

Speaker 1 It's going to have to be trained.

Speaker 2 Do you think that's customer-segment specific, in terms of the leverage you can get? A lot of people think, for example, about the SDR-level seller, where you're sending a bunch of emails out, or you're sort of cold calling or doing things like that, where it's a lower-ACV customer.

Speaker 1 Yeah, but I think that's only one side of it, right? Like if you stick to SDR, we'll go back to the other problem in a second, but stick to SDR.

Speaker 1 At what point in time does my email start getting read by an agent which is told, block all SDRs, block all people trying to sell me shit?

Speaker 1 Because I'm overwhelmed by the number of people who think they should write to the CEO because they have some company with five people who are automating something.

Speaker 2 That's interesting. Does that actually decrease the effectiveness of those sales teams over time, then?

Speaker 2 In other words, if you effectively have agents screening things, does that create a block for certain types of sales leads?

Speaker 1 Well, I think the sales lead isn't the issue, right? The question is generally, either I have a need and I know it, in which case, hopefully, my blocking agent knows my needs and eventually says, ah, you know what?

Speaker 1 Actually, we've been looking for this thing and here's an email which satisfies our thing. So maybe that'll do it.
Or

Speaker 1 as we do in marketing, you never need a new watch and you never need a new car, but you just buy it because somebody put it in front of you. I don't think it happens that way in enterprise.

Speaker 1 So we can't generate demand for something we don't know we have a need for. But sometimes you can in cybersecurity.

Speaker 1 So look, I think those will be marginal efficiency outcomes, because how many SDRs do you have eventually? You do the selling. We do lead generation.
We're in the large enterprise business.

Speaker 1 We go through a large, rigorous testing process. So yeah, I can get a phone call.
But if my product doesn't show up, then it doesn't matter how good the lead was.

Speaker 2 It's interesting because I haven't thought of it as just like becoming, you know, if we have agents doing this sort of low-level communication first, it just could become a more efficient market for finding the problem match.

Speaker 1 Right. Yeah, maybe.
So I think if you go back to the organizational efficiency question, I was talking to my CFO this morning.

Speaker 1 I was like, when are you going to take a whole bunch of stuff that you're doing with human beings and replace it with some form of at least 50%-efficient agentic analytics or apps?

Speaker 1 So I think we're going to get there. I think the max replaceability will be in, let's call it, administrative areas.
I have 200 people doing documentation.

Speaker 1 Do I really need 200 people doing documentation? If models can do 90% of the work, I probably need AI-savvy people to write the guardrails around these models and templates and they can print them.

Speaker 1 So we'll find some efficiency. You know, pick your number: two, three, four hundred basis points of efficiency.

Speaker 1 The larger you are, the better off you are, because you're going to save more money. I think sales doesn't change as much.

Speaker 1 On product innovation, I think the better companies will innovate faster. I have technical debt.

Speaker 1 I'd love to get rid of a whole bunch of Salesforce code or a bunch of SAP code that I have and get it more efficient. I can't find good people to go do it because nobody wants to work on it, right?

Speaker 1 They're all working on AI. So can I get some version of agentic AI apps that are going to get rid of, you know, a few hundred of those people? Great. So I can get inefficient product development out of there. But I'm not going to let go of any good product person, because I'm going to get them to move faster, because that's my competitive edge. So I think we won't see as much attrition in the vibe coding outcomes or the product development outcomes. We'll just see faster innovation.

Speaker 2 So basically, the statements about AI displacing all of human labor, that's very overstated from your perspective?

Speaker 1 Relative to those two segments, I think. I think customer support will have to go through a revolution. And what I mean by that is, I always joke internally, I tell my people, customer support exists because we build bad products.

Speaker 1 If you have great products, why would you have to have customer support? It's complicated, it's hard to onboard, it's got too many dials, and that's why it takes me so much time to make it work, and then eventually it's not efficient. Yeah. So we have most recently moved customer support closer to our product teams.

Speaker 1 I told my product teams, the day you have a bug, or the day you have a customer issue with the product, fix that before you build a new feature.

Speaker 1 So I think in concept, from a North Star perspective, we should be able to take 80, 90% of customer support out in the next two to five years across the landscape.

Speaker 1 That's what every good company should aspire to do. And when you take it out, it means your product quality should get better.

Speaker 1 If I'm really using a good vibe coding agent, why shouldn't it write better code? If it's writing better code, why shouldn't it find flaws in my code much sooner? Why shouldn't it run simulations?

Speaker 1 So I'm expecting product quality gets better. I'm expecting the fixes come faster.

Speaker 1 I'm expecting that, you know, we can diagnose the customer's problem with data as opposed to humans calling and asking all those questions. So

Speaker 2 I think there's a lot of anxiety from engineering leaders that the product quality is not likely to get better. There's just going to be way more of it.

Speaker 1 More bad-quality product, or more product?

Speaker 2 Just more product.

Speaker 1 Right. That doesn't mean there has to be bad product.

Speaker 2 That's fair. Yeah.

Speaker 2 I think the combination of generated code that people are not fully understanding from an architectural point of view, fully reading and reviewing, and that volume overwhelming like the processes that we have today in software development is an anxiety.

Speaker 1 Let's back up for a second. It is the worst right now that it's ever going to be.
Exactly. It's only going to get better.
Exactly. It's only going to get better.

Speaker 2 You're the most optimistic security person I know.

Speaker 1 That's great.

Speaker 1 Am I restricted to being optimistic in security?

Speaker 2 Well, that's not like a super, it's not a super optimistic group. I'm just saying.

Speaker 1 Look, I think at the end of the day, being one of them, we have to do the best job we can, and eventually life takes over. So it's fine.
But I think

Speaker 1 from a quality perspective, anecdotally, I have seen examples which have huge promise.

Speaker 1 I've seen a vibe coding agent find a vulnerability in security code at Palo Alto, which we wouldn't have found unless it was out in the wild. Which is a good thing for us.

Speaker 1 I've seen it take 500 lines of code and come back with 75 lines of code, which would be much more efficient
in doing the task that the 500 lines of code are doing. I've seen it come back with explainability for code which was written 15 years ago.
We can't find the person who wrote it.

Speaker 1 So there's some amazing examples of what the art of the possible is. Now, if that's just what we get today, I think one, two, three years from now, that stuff's going to get better and better.

Speaker 1 So from that perspective, do I believe product quality gets better? I 100% believe that product quality gets better. I don't think there is any debate on that topic, right? Now, it doesn't.

Speaker 1 Look, there is no solution for stupidity in the world. So if you don't have good people reviewing this stuff, then yes, you can end up with bad outcomes.

Speaker 1 Not because AI is bad, but because you didn't set up the right guardrails, the right process to do it. So I think we're going to get good-quality outcomes.

Speaker 1 I think we're going to see better and better capability. I think the most vulnerable area in any enterprise is large
amounts of humans doing repetitive tasks,
which are either generic, which is the easiest to replace because everybody's doing them and they're not new, or specific and highly critical, perhaps.

Speaker 1 If I have 2,000 people in customer support, I should be able to optimize a lot of that and hopefully deploy those people in much more meaningful things.

Speaker 1 I mean, imagine. I have a child, you have children. Do you really want them to grow up and become customer support people? And so, every day, what do you do?

Speaker 1 You just pick up the phone and listen to somebody grumbling at the other end because something they bought is not working. That sounds like a really horrible job.
So I think those jobs should go away.

Speaker 1 What's wrong with that?

Speaker 2 And they don't have the power to fix it.

Speaker 1 Yes, that's worse. It's like, you know, getting punched in the face and not being able to punch back.

Speaker 1 And somebody's taught them, the customer is always right.

Speaker 2 Can we talk a little bit about leadership?

Speaker 2 You're a very unique leader.

Speaker 2 So this is a time of, like, at least a handful of companies growing very quickly, because they create and trade at billions of dollars.

Speaker 2 Yeah, yeah. Well, and like, you know, as an optimist myself, I'd say the AI wrappers are not just wrappers; they're creating a lot of value, and consumers and enterprises are buying very quickly. Yes.
Google from 2004 to 2014; Palo Alto, seven years in.

Speaker 2 You've added, like, the size of the original Palo Alto every single year since, in terms of enterprise value.

Speaker 1 That's wild.

Speaker 2 Like, what makes you a great leader in terms of growth at scale?

Speaker 2 What advice do you have for some of these people who are, you know, setting world records in terms of growth in the first few years?

Speaker 1 Look, if you step back, it's interesting.

Speaker 1 Every business that you've identified, or you've looked at, or we've talked about, has a much larger TAM than any of these companies is able to touch.

Speaker 1 The markets are growing. You have the opportunity to take a share and grow in that market.

Speaker 1 So I'm a huge fan of growth businesses. I hate the idea of going and restructuring something which is on a declining curve. It would sort of scare me. So it's good to find the right market. From a growth perspective, I always have this principle that nobody wakes up in the morning and goes to work to screw up. Nobody wakes up saying, oh, I want to do my worst job possible. No chance in hell. You know, you can find people, you can get a group of people together, and they can be innovative as hell and go put, you know, a rocket into space faster than NASA. These are all humans. They're all people out there. There's no difference between many of them and the people who work at Palo Alto or the people who work at Google or elsewhere. So what creates the difference between great companies and companies that are not as good? Because, I'd say, within reason, it's the people.

Speaker 1 You can find those people in every company. I think it boils down to understanding the market, setting the right North Star, getting enough buy-in, talking about the why, not just the what you need to get done, and getting people really excited and bought into it.

Speaker 1 And then

Speaker 1 making sure they have the resources to get their task done.

Speaker 1 If you do that, then my job is to set the strategy, set the North Star, put the right people in place, and then basically act as their shield and keep blocking bad things or friction from slowing them down.

Speaker 1 So if you can do all of those things in a way, you know, there is a high probability you can create good outcomes. Never guaranteed.

Speaker 2 Are there any unique structures or approaches or tactics that you use that go against the grain? We've talked to Jensen a few times, and he's pointed out, for example, that he has like 40 direct reports.

Speaker 2 He doesn't do one-on-ones.

Speaker 1 I actually read that. I actually tried that.
I actually expanded my staff meeting from eight to 25 after I read that.

Speaker 1 It's interesting. And it solves a different problem.
So at least in my case, I've discovered that I'm not always sure that these people communicate the why to their teams.

Speaker 1 It at least eliminates one level of confusion. Why does he want this? Oh, actually, I heard him directly.
This is what he wants to get done.

Speaker 1 Because sometimes, you know, you have this notion of you have to be player, you have to be coach. Sometimes you have to be directive.
Sometimes you have to be, you know, encouraging.

Speaker 1 It's like, we're going to climb that mountain. We are going to climb that mountain.
And, you know, if you just say, we're going to climb a mountain, people end up on different ones.

Speaker 1 So it's important for people to understand the communication parts. And I've discovered that communication actually is underrated in organizations.
And usually, the way I do my sort of, call it, 360-degree test is I meet 50 employees every two weeks and ask them questions.

Speaker 1 Then I discover, oh my God, these people are asking questions about things which I thought were abundantly clear: why we're doing this, what we're doing.

Speaker 1 And I discover eventually that by the time you get to the person four or five levels removed from you, they actually don't understand exactly why we're doing certain things.

Speaker 1 They have fundamental questions around what we're doing. And that causes them to do it differently or not do it.
So that becomes sort of an issue of communication.

Speaker 1 You asked me about, you know, what do we do? What have we done differently compared to other people? I think communication, talking to people, making sure they're all bought in.

Speaker 1 But I think from a business strategy perspective, we've taken a very different approach to M&A.

Speaker 1 We bought 27 companies so far. We're about to buy our largest one if we get approval to get it done.
And

Speaker 1 I don't call it M&A. I call it product development and research in a highly innovative market, where all of you guys are kind enough to support innovation. It's distributed R&D, exactly right.

Speaker 2 It's distributed R&D. We're in service for you.

Speaker 1 Well, you guys do reasonably well for that. So, you know, you can say thank you and I can say thank you.

Speaker 1 It's like, if you don't win, I don't win. It's fine.

Speaker 1 So we're happy to be in a situation where you get to a certain stage and we get it past that stage and take it to scale. Because

Speaker 1 I don't believe we have all the smarts in the world. There are lots of smart people out there who are trying to solve different problems.

Speaker 1 And I don't believe we can solve all the problems simultaneously. We take AI.

Speaker 1 You know, we started off saying, oh, my God, when AI gets deployed, enterprise is going to want to make sure that their AI instance is sequestered and controlled and managed because they don't want data leaking, they don't want, you know, external inputs.

Speaker 1 So we built, effectively, what we called an AI firewall. It's great.
Then we discovered this company. You know what?

Speaker 1 People are actually trying to figure out whether these models they have are hackable, have malware. And we weren't doing that. We were just protecting them once they were in there.

Speaker 1 So this company called protect.ai, which was actually assessing models. They were doing persistent red teaming against the AI models specifically, not just the enterprise.
Speaker 1 Because, you know, going back to your government red teaming, models morph. Their responses are non-predictable, non-deterministic. The same model could answer the same question one way today and answer it differently one week later, because it learned.

Speaker 1 Now, if you start getting non-predictable responses, you have to inspect all the responses to make sure none of them is malware. Right?

Speaker 1 So, from that perspective, we now do persistent red teaming of models; we do scanning of models. At the time we said, oh, shit, we haven't done it.

Speaker 1 We found this company and we said, thank you very much to the VC community for delivering that from an R&D perspective, and we made them part of our platform. So we do rely on, like you said, R&D as a service from the VC community. And that's been very helpful. But we always go for number one or number two.

Speaker 1 We never believe you can take number three or four and spit-shine it to make it look like number one or two, because one and two don't go away. They're number one or two for a reason.

Speaker 1 We, more often than not, make the leaders of the acquired companies the leaders of our business, because we believe they've acted faster than us in a much more resource-constrained environment and shown a tremendous amount of resourcefulness and hustle to deliver the outcomes and innovation.

Speaker 1 So, I think from that perspective, we probably have a higher hit rate than most other M&A that has happened in the enterprise space. So, I think we do that differently.

Speaker 2 Yeah.

Speaker 2 You, from the beginning at Palo Alto and continuing today, have been perhaps more ambitious than any other security company.

Speaker 1 Okay.

Speaker 2 I think that's right.

Speaker 2 How do you convince an organization to be more ambitious? Because my understanding of the cyber industry before was that you had endpoint businesses and firewall businesses, very domain-specific, right?

Speaker 1 I don't think you have to convince humans to be more ambitious. I think we are natively and naturally ambitious.
Like, you meet somebody and ask, can you do more?

Speaker 1 I've never heard someone say, I think I'm done.

Speaker 1 Like everybody says, I want more. I can do more.
Like we live in a consumptive society. We're all taught to aspire for more.
So I don't think it's hard to make people at Palo Alto feel that we

Speaker 1 have the right to play at a bigger table on a constant basis. And people actually like the idea of ambition and aspiration and winning.

Speaker 1 Like, you know, trust me, if our stock wasn't up six or seven times in the last seven years, a lot more people internally would have questions on our strategy than they have now.

Speaker 1 So, I think it's a self-fulfilling prophecy. It's a good thing across the board.
But I think more fundamentally, if you step back,

Speaker 1 our industry is not fully formed. It has 118 vendors, it's fragmented.

Speaker 1 You know, you take a look at the CRM industry, look at the ERP industry, look at the HR industry, these things operate on singular platforms, right?

Speaker 1 Nobody has two Salesforces deployed in an enterprise. Nobody has two Workdays deployed in an enterprise.
Nobody has two SAPs deployed in an enterprise. Why?

Speaker 1 Because you need end-to-end visibility, a singular workflow, a singular set of analytics to solve the problem. Our industry started off as: oh my God, we have a threat, block the threat.

Speaker 1 So we're playing whack-a-mole.

Speaker 1 So the point is, as I said, over time, as these requirements normalize and the capabilities of vendors, you know, converge, then how does it matter if you take from one versus the other?

Speaker 1 And what becomes more important?

Speaker 1 All these platform companies I talked about, it's not like they have unique features on a feature-by-feature basis compared to their competitors. Over time, those have normalized.

Speaker 1 But what they do have, they have an end-to-end visibility and capability that integrates the functionality. That's why they're there.

Speaker 1 So if cybersecurity has to survive in the long term as a mature industry, we also have to become sort of a singular enterprise platform. If you believe that, we're nowhere near there, right?

Speaker 1 We had four products when I came. We took it to 24. We had 44 Magic Quadrant top-right mentions.
We've turned that into three platforms.

Speaker 1 I say, you know, if you're going on the journey with us, it's going to take you two to three years to get a platform deployed. We're at three right now, because had we said one, oh my god, you can't go from 118 to one. It boggles the mind.

Speaker 1 So let's take them to three first, or one of the three. And hopefully we get them to the next one, and then to the third one.

Speaker 1 So I think the idea from our perspective is if we can become the platform of choice in the industry, that's a very big ambition, a very big North Star.

Speaker 1 But it's like, you don't get there if you don't start.

Speaker 2 Maybe looking forward: Palo Alto, cybersecurity, AI. Three questions.
What keeps you up at night? What do you think most about?

Speaker 1 I think most about AI.

Speaker 2 I'm glad it's something you're doing.

Speaker 1 I think more about AI from the vantage point that if our view of the world, of how this is going to evolve, is not within the guardrails of where it's actually going to be, we may end up taking Palo Alto in a different direction. Because remember, we exist

Speaker 1 to help you secure technological advancement in a certain direction before you're fully deployed. To give you an example: today our conversation with some of the big cloud providers was,

Speaker 1 how is everybody thinking about agentic?

Speaker 1 I'm supposed to secure agents. The problem is, I can't get one person to agree with the other person's definition of an agent.
I'm like, what's an agent?

Speaker 1 Well, are you going to use MCP protocols to deploy? Well, no, we just have connectors. What's a connector in an LLM? A connector is effectively an API call, a microservices call.

Speaker 1 Or are you using API calls? That's what they were called in the past. Why aren't you using MCP servers and clients? Well, we're going to get there, right?

Speaker 1 Well, how are you going to do the inspection of identity? We're going to register the identity somewhere else. Like, what's an agent? How are you going to run it? Is it going to be delegated?

Speaker 1 So, there are so many questions from an execution perspective, from how the industry evolves, that this kind of thing keeps me up at night. We talk about this every day.
We have a team of people getting together every day for two hours, and we read everything.

Speaker 1 And then we talk about it, saying, what do you think about this? What do you think about that?

Speaker 1 Because we don't have an expert, hopefully the collective wisdom of six or seven smart people, whom I bring together two or three times every week, is probably better.

Speaker 1 So we're constantly trying to paint a picture of how the world of AI is going to evolve. So we're building our opinion.
And based on that, we have to design a product.

Speaker 1 So that's extremely bleeding edge,

Speaker 1 right? And if we want to be the cybersecurity partner of choice, we have to be able to go with the bleeding edge capability and tell our customers, look,

Speaker 1 you have a problem. We're solving it faster than anybody else.
And we can help you deploy AI securely. So that's kind of what

Speaker 1 it's exciting, and a little bothersome, because, you know, you've got to keep understanding it. You don't want to be like, holy shit, they were just here, why aren't we doing that now?

Speaker 2 As you've been thinking deeply about AI, have you thought about it from a broader societal perspective, its impact on the world? Are there specific threads that you think about most, worry about most, or are optimistic about most?

Speaker 1 I'm going to stick with Sarah's characterization of me as one of the more optimistic people about these things. I'm going to expand that beyond cybersecurity.
Like, I think it's exciting technology.

Speaker 1 You know, you've been in Silicon Valley for a long time; you've been investing for so long.
The intensity, the excitement is palpable, right?

Speaker 1 Like, you can't turn around without seeing it. It's really fun again, right?

Speaker 1 Like, you can have dinners, you can debate all kinds of arcane topics, and I'm pretty sure you can find lots of people with different opinions.

Speaker 1 And there are so many different directions you can go in. You can talk about policy implications, you can talk about wrappers, you can talk about LLMs, you can talk about infrastructure.

Speaker 1 It's like a whole new technological wave, which has so many implications, and majorly disruptive ones. So, from that perspective, I think it's exciting.

Speaker 1 Does every technology come as a double-edged sword? Of course, it does.

Speaker 1 Every revolution in history has come as a double-edged sword. So, why not this one?

Speaker 1 You have to believe in the power of good. There are more good people in the world than bad people.
The good people will hopefully continue to make sure that the bad things get controlled.

Speaker 1 Now, are some bad things going to happen? Most likely.

Speaker 1 Are we going to find a way around it? Most likely, and hopefully.

Speaker 1 We powered through a pandemic. We'll come out all right.

Speaker 2 Awesome. Thank you, Nikesh.

Speaker 1 Yeah, thank you so much. Thank you for your time.