Copilot, Agent Mode, and the New World of Dev Tools with GitHub’s CEO Thomas Dohmke
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ThomasDohmke
Show Notes:
0:00 Introduction
0:37 GitHub Copilot’s capabilities
4:12 Will agents replace developers?
6:04 Copilot’s development cycle
8:34 Winning the developer market
10:40 Agent mode
13:25 Where GitHub is headed
16:45 Building for the new challenges of AI
21:50 Dev tools market formation
29:56 Copilot’s broader impact
32:17 How AI changes software pricing
39:16 Open source vs. proprietary APIs
48:01 Growing up in East Berlin
Press play and read along
Transcript
Speaker 1 Hi listeners and welcome back to No Priors. Today we're joined by Thomas Dohmke, the CEO of GitHub, a platform used by over 150 million developers worldwide to collaborate and build software.
Speaker 1 As CEO, Thomas has overseen the development of tools like GitHub Copilot.
Speaker 1 Before becoming CEO, he helped shape GitHub's product strategy, powered its global expansion, and previously worked at Microsoft.
Speaker 1 In this episode, we'll talk about the future of software development, the role of AI and coding, open source, and product plans for Copilot. Thomas, welcome to No Priors.
Speaker 1 Maybe we can start with the meat of it. What is happening with Copilot and the new releases at GitHub recently?
Speaker 2
You're heading straight into it. We're really excited about making Copilot more agentic.
A few days ago, we announced agent mode in Copilot and VS Code.
Speaker 2 So instead of just chatting with Copilot, getting responses and then copying and pasting the code into the editor, or using auto-completion, the original Copilot feature,
Speaker 2 you can now work with an agent and it helps you implement a feature. And when it needs to install a package, it shows you the terminal command and you can just say, okay, run this.
Speaker 2 You're still in charge, right? So that's the crucial part of
Speaker 2
these agents that we have available today: as the human developer, you still need to be in the loop.
But we also showed a teaser of what's about to come in 2025.
Speaker 2 We call this Project Padawan, because it's like a Jedi, a Padawan. You've got to have patience and you've got to, you know, learn how to use the Force.
Speaker 2 But we think, you know, in 2025, we get into a place where you can assign a GitHub issue, a well-defined GitHub issue, to Copilot, and then it starts creating a draft pull request and it outlines the plan and then it works through its plan.
Speaker 2 And you can, similar to how you observe a co-worker, you can see how it commits changes into the pull request and you can review this and
Speaker 2 provide feedback to Copilot. And so
Speaker 2 Copilot basically graduates from a pair programmer to a peer programmer that becomes a member of your team.
Speaker 3 The obstacles to that right now are some new model advancements. Is it just building out some other core technology? Is it just the UI? Like, what is keeping that from happening right now?
Speaker 2 Yeah, I think the first thing is the model, the full o3 model that's not available yet, which OpenAI showed as part of its shipments right before the holidays. We're going to see improved reasoning.
Speaker 2 And I think as the models get better at reasoning, we're going to get closer to 100% on SWE-bench, which is that benchmark built out of 12
Speaker 2 open source Python repos, where a team in Princeton identified 2,200 or so issue and pull request pairs that effectively all the models and agents are measured against.
Speaker 2 So that's number one, the model and the agent combination. I think the second piece is just figuring out what's the right user interface flow.
Speaker 2 If you think about the workflow of a developer, right, you have an issue that somebody else filed for you, you know, user
Speaker 2 product manager, or something that you filed yourself. Now, how do you know whether you should assign Copilot, the agent, to it,
Speaker 2 or whether you need to refine the issue to be more specific, right? It's crucial that the agent is predictable, that you know that this is a task that the agent can solve.
Speaker 2
If not, then you need to steer it. So steerability is the next thing.
You need to either extend the definition
Speaker 2 or the agent needs to come back to you and ask you additional questions. And then at the end of the process, you want to verify the outcome.
Speaker 2 And so, in our demo, that's where we're thinking the right flow here is actually that the agent works in a pull request, like similar to a human developer with lots of commits.
Speaker 2 And then you can roll back those commits or check them out in your VS Code. We saw that with some of the agents that are available:
Speaker 2 do I, as a developer, actually tolerate the agent? Like, is it actually saving my time or is it wasting my time?
Speaker 2 And the more often you see it wasting your time and just burning compute cycles, the less likely you're going to use it again.
Speaker 2 And so if it's predictable, steerable, you know, verifiable, and tolerable, if we get to a certain level on all four criteria, I think we're going to see wide adoption of agents.
Speaker 3 How far away do you think these agents are from being sort of the median programmer equivalent? And then how much longer do you think it takes to get to sort of superhuman?
Speaker 2 You know, I thought about this this morning, right?
Speaker 2 Like what if regardless of what agent you're thinking of, a travel agent or a coding agent, or maybe it's an agent that designs your house, the fundamental challenge is actually the same as you have as a human developer, right?
Speaker 2 Like you have this big idea in your head and you can sketch it on a whiteboard, but then you want to start coding and you have to take this big idea and break it down into small chunks of work.
Speaker 2 I think that's the part where we're far away from agents actually being good enough to take a very rough idea and break it down into small pieces without you, as the developer or the architect, or even when planning your travel, constantly getting questions back about what decisions you want to make: what database, what cloud.
Speaker 2 Like, imagine you give the agent a task saying, you know, build GitHub or build a mobile app or something. Like, it will just be not specific enough, right?
Speaker 2 So that's the systems thinking where I think the median developer will not be replaced by an agent.
Speaker 2 And the flip side of that is a lot of, you know, what developers do is just picking up issues and fixing bugs and finding where to fix the bugs, adding a feature that comes from a customer.
Speaker 2 And then you have to navigate the code base and figure out what files you have to modify. And I think there we are going to see dramatic progress over the year.
Speaker 2 We actually, you know, when we recorded the demo for the Padawan project,
Speaker 2 we actually had one of our product managers write an issue and have the agent create the pull request themselves, right?
Speaker 2 And so a PM that usually doesn't code and doesn't write code in the code base was able to use the agent to create a real pull request that was then reviewed by the developer and merged into the code base.
Speaker 2 So in some ways, we're already there. In other ways, we need to get to the point where you trust it enough that you're using it day in and day out.
Speaker 1 I'm sure you guys were doing a bunch of dogfooding before releasing agent mode and Padawan as well.
Speaker 1 Maybe if we just sort of zoom out from that from the eval phase, like, can you describe what the overall development cycle is for Copilot today?
Speaker 1 Like, how you do planning and make decisions about what to try and how you improve it?
Speaker 2 The industry now calls this AI engineering, which is, you know, we extended the full stack of back-end and front-end development with AI development.
Speaker 2 And so, how do we use a new version of a model, or a new model? As we now have the model picker in Copilot, we're constantly dealing with multiple models from multiple vendors.
Speaker 2 How do we integrate that into our stack? We have an applied science team that runs evaluations.
Speaker 2 We have a team that builds out these benchmarks
Speaker 2 that the applied science team uses to compare models with each other, but that the teams building new features, like code review agents or the SWE agents or agent mode, also use to validate their work as part of their test suite.
Speaker 2 So it's no longer just the data scientist and the engineer; those roles have
Speaker 2 more and more overlap and they're collaborating day in and day out.
Speaker 2 We do a lot of experimentation with A-B testing where we flight new versions or new fine-tuned versions of a model
Speaker 2 after the offline test, in an online test, first with the GitHub and Microsoft employees and then with sets of the population. And then overall, obviously we have a roadmap of features
Speaker 2 that we want to build in a long backlog, not just for Copilot, but all up for GitHub, right? Like GitHub is turning 18 this year.
Speaker 2
I think it's 18 years since the founders in late 2007 started working on it. And then it launched in early 2008.
And Microsoft turns 50 actually
Speaker 2 April 4th. And so we have a long backlog of customer feedback where we're using Copilot to build those features in an agent mode now
Speaker 2 to accelerate our feature delivery.
Speaker 2 But at the same time, the market is moving so fast. Whether we're meeting with OpenAI or with Anthropic or with Google, we learn about new model versions and then our roadmap changes from one day to another.
Speaker 2
I'm sure you guys are seeing that as well. The market is moving so fast.
We're literally sitting on an exponential curve of
Speaker 2 innovation that is hard to keep up with, and you can't really plan more than a month or two ahead of time.
Speaker 1 What do you think about competition being on that exponential curve? I think it is wild to think that
Speaker 1 SWE agents, as you describe them, didn't exist as an idea a year ago.
Speaker 1 We now have a market full of folks experimenting with these products.
Speaker 1 How do you think about winning? Like, GitHub is obviously a very dominant force overall, as is Copilot, but how do you think about winning the developer over, and what they care about, in that changing and competitive market?
Speaker 2 The way we think about winning is that we care deeply about developers.
Speaker 2 And that's always been the heart of GitHub: that we put developers first and that we are developers building products for developers.
Speaker 2 We have a saying at GitHub that we're building GitHub with GitHub, on GitHub, using GitHub.
Speaker 2 And so everything that we do in the company, including
Speaker 2 our legal terms and our HR policies and product management sales, sales enablement, all these functions are in GitHub issues and GitHub discussions and GitHub repos.
Speaker 2 So I think that's number one: that we deeply care about our own product and we're using it for everything day in, day out.
Speaker 2 You know, the first thing I do in the morning is open the GitHub app on my mobile phone, and then Slack, as a lot of our operations and company chat run through Slack.
Speaker 2
Number two is, you know, you mentioned competition. I mean, we've obviously never seen anything like this in the developer space.
It's the most exciting time, I think, for developer tools.
Speaker 2 You know, I've been a developer for over 30 years. It's amazing to see the innovation, you know,
Speaker 2 the news that is coming out every day. And I think that energy, you know, that is in the market is innovation-driven, both on the open source side and on the closed source side.
Speaker 2 Let's not forget that it's not one-sided. As much as there's innovation on proprietary models and software, there is an equal amount of innovation in open source and on GitHub.
Speaker 2 And that energy obviously gravitates into us. I'm a big Formula One fan.
Speaker 2 It's good when there's competition, because the races are so much more fun to watch when there are multiple teams that can win the championship. And that's how we feel about the competition.
Speaker 2 It gives us motivation every single day when we wake up to do better, to move faster and to ultimately win with the best product in the market.
Speaker 1 You have such rich data about how people are actually using Copilot. What is surprising you even from the last week or so since Agent Mode was released?
Speaker 2 The thing that always surprised us from the early days was how much code Copilot is writing.
Speaker 2 You've had some of the folks from Microsoft and GitHub on your podcast in the past. In the early days, you know, soon after we launched the Copilot preview, it already wrote like 25% of the code.
Speaker 2 And I remember that meeting where we looked at this in a product review and I said, that must be a mistake in the telemetry. Go back and validate that.
Speaker 2 It can't be true that it's writing 25% of the code, because it was just auto-completion. And
Speaker 2 as cool as that was, at the same time,
Speaker 2 it still made a lot of mistakes in the early days.
Speaker 2 But it quickly dawned on us that A, the number is true, and B, that's just the learned behavior of software developers, right? Like you're typing something, you're always reaching the point where
Speaker 2 you need to look something up.
Speaker 2 So you go to your browser and you find code on Stack Overflow or on Reddit or blogs or on GitHub, and then you copy and paste that, and you modify it afterwards anyway, right?
Speaker 2 Like, that's the inner loop: you write something, you try it out with
Speaker 2
the compiler and the debugger, and then you keep modifying until you make it work. And that number, you know, then quickly rose to around 50%, depending on the programming language.
If you look now
Speaker 2 with these agents, it's hard to measure that, because you can literally go into agent mode and say, I want to build a snake game in Python, and it writes all the code for you, right?
Speaker 2 Like it writes multiple files, so the denominator becomes zero, right? Like it's like infinite percentage because
Speaker 2
the only thing you wrote was a prompt. And the 15-minute demo from two years ago is a one-minute demo now.
And I think that's, you know,
Speaker 2 still surprising in many ways that we are already so far ahead on that curve.
Speaker 2 And then the opposite is also true, right?
Speaker 2 You can get it into a place where, you know, it just keeps rewriting the same file or deletes the whole file because it gets stuck somehow in the logic. And
Speaker 2 so it also grounds us in reality: we are not close to an agent just autonomously passing through all my GitHub issues and then fixing all my backlog for me.
Speaker 2 The only thing I'm really doing is just validating and becoming the code review human for the software development agent, right? Like, so we are in this,
Speaker 2 you know, we're swinging between the excitement of how much it can already do and the reality where it gets stuck in very simple scenarios, where it's like, you're trying to kind of like figure out the prompt of telling it,
Speaker 2 just do the one thing and then you just go into the file and change the whatever, the background color yourself.
Speaker 3 That makes sense. Outside of a lot of the agentic efforts that you all are doing, and obviously I think that's amongst the most interesting stuff that's happening right now.
Speaker 3 What are other big areas you want GitHub to evolve over the coming few quarters? Like, what are there other big thrusts or is it all kind of all in on AI and that should be the focus of the company?
Speaker 2 Oh, so far we only talked about the generic SWE agent where you can assign an issue and
Speaker 2 it generates a pull request. But if you actually look in the developer life, the day-to-day, you know,
Speaker 2 in most companies, that's maybe two, three hours of your day that you're actually writing code.
Speaker 2 And then you're spending an equal amount of time reviewing code from your coworkers. And while we don't believe that goes away, because from a pure security and trust perspective, you always want to have
Speaker 2 another human in the loop before you merge code into production.
Speaker 2 At the same time, we believe code review agents, and code review in general, are a big topic where AI can help you, especially when you work with a distributed team in different time zones, where you don't want to wait for the folks on the West Coast to wake up to get an initial loop of feedback.
Speaker 2 So I think code review is a big topic for us. And again, the AI part is one piece to that, but the user interface is equally important.
Speaker 2 Ideally, you get feedback and then you can work with the code review agent on that feedback loop, because you won't always get exactly the right feedback to just click accept, accept, accept.
Speaker 2 You have to have a user interface, a cloud environment where you can just open this. If you always have to
Speaker 2 clone the repo on your local machine and then install the dependencies and switch to a different branch, you're still doing way too much boilerplate work.
Speaker 2 So moving to a cloud environment where you can just
Speaker 2 try out the changes that came from code review and modify them to make them work, and have that fast outer loop. In that same realm are security vulnerabilities, where, A, we want your code scanning to not only find vulnerabilities but also fix them.
Speaker 2 An even simpler version of that is linter errors, like code formatting and those kinds of things. Hopefully all of that goes away and the AI just fixes it, instead of you going through 100 linter warnings telling you where to put the spaces in the parentheses.
Speaker 2 But also, if you look at any decent-size software project, it has outdated dependencies and lots of known software vulnerabilities, hopefully none high risk and a lot of them low risk, or where somebody decided it's not actually crucial to fix right now because the code is not reachable or there are other priorities.
Speaker 2 Having AI burn down that security backlog will make both the open source ecosystem and a lot of commercial software projects so much better, because it brings down that effort that every engineering manager swings back and forth on: the tech debt, the legacy code, you know, the security, accessibility, European regulation, whatever, versus the innovation backlog.
Speaker 2 And there isn't really a balance between the two. It's just: what is the most urgent issue, the biggest fire drill?
Speaker 2 Is it your sales team telling you if we don't get that one feature, we can't sell the product?
Speaker 2 Or is it the security team telling you you've got to fix that one issue, otherwise we're going to flag you up the management chain, right? So that, I think, is the AI side of things. But similarly, GitHub as a platform needs to evolve to support, or have, all the primitives for these agents and the AI to work in tandem with the human.
Speaker 1 Do you think there are problems that people are not addressing yet that emerge from this transition in how software development is done? Right, like, for example, you feel like we're somewhere between crossing the tipping point of the majority of code being generated this year to maybe, like, all of the code in some cases or some tasks.
Speaker 1 How does that change like testing or, you know, the way we should look at technical debt or any of that?
Speaker 2 I don't think all of the code will be written by AI. I think the way
Speaker 2 this will work is that we have two layers. We have the machine language layer, you know, which is Python or Ruby or Rust, right?
Speaker 2 Those are effectively abstractions of the chipset and the machine instruction set. And that's the last layer that's deterministic, right?
Speaker 2 Like, a programming language inherently does exactly what I want it to do. And then human language is inherently non-deterministic, right?
Speaker 2 We can all, the three of us can say the same sentence and mean a different thing.
Speaker 2 And so while we will use human language to describe a lot of the features and behaviors that we're going to build, we will still have the programming language layer below that that we are going back and forth as engineers to figure out
Speaker 2 is the code that was written by AI actually the correct one? Is it the one that aligns with my cost profile, if you will, as an example, right?
Speaker 2 Like at the end of the day, we're still all running businesses that have to have positive profit margins. I think we're going to, as engineers, have both of these layers.
Speaker 2 And we're heading into a world of more human language
Speaker 2
and less programming language. But at the same time, we are in a world where lots of financial services institutions still run COBOL code on mainframes.
And we are very far away from just taking that
Speaker 2 code that's 30, 40 years old and just running an agent that transforms that magically into a cloud application, right? Like, I think that's coming, but it's like self-driving cars
Speaker 2 are coming as well, but we don't know when that
Speaker 2 cutover point actually happens where you can have a car without a steering wheel and it drives you everywhere, you know,
Speaker 2 within the country you live in, right? Like
Speaker 2 it works for Waymo in San Francisco, and it doesn't work for Waymo all the way down from SFO to San Jose yet, right?
Speaker 2 And so the scope will increase, but we are far away from, I think, solving all the tech debt and all the legacy code that exists.
Speaker 2 And so we are still, I think, for like a decade or so at least, going to have software developers that work in lots of old-school, you know, PHP code and COBOL code and all that stuff.
Speaker 2 While at the extreme other end of the spectrum with web development and AI, you're going to be able, and we are already there.
Speaker 2 Like, you know, just look at a 10-year-old, give them, you know, a tool like Copilot or, you know, Replit, Bolt, you name it, and have them type a couple of prompts and have them explore how that works and how they can, similar to Stable Diffusion or Midjourney, render software themselves and iterate on that.
Speaker 1 You yourself lead a large team of software engineers. As you said,
Speaker 1 you have more human language and instruction versus machine language. Does it change what you look for or what you want to develop in your own team?
Speaker 2 Well, what we're looking at right now is, I think, this: how do you actually describe a problem specifically enough that an agent can pick it up, right?
Speaker 2 Like basically the planning and tracking side of software development, the issue, right? That's often the biggest challenge that you have as soon as you have a decent team size.
Speaker 2 A 10-person startup has no problem with that. And most 10-person startups don't have a product manager.
Speaker 2 The founder is the product manager and the rest is just building the stuff. And if you have a problem to solve, you have very short communication paths.
Speaker 2 If you have a thousand engineers, their biggest problem is what do you want to build? How do you build it? What did you actually mean when you wrote up this thing?
Speaker 2 And if you look into that space, there isn't much AI helping you yet.
Speaker 2 We have been in the early phases of that ourselves with Copilot Workspace, where we have a spec and brainstorming agent that basically looks at what you wrote in a GitHub issue, compares it with the code base, and describes the before and after to you in human language.
Speaker 2 And then you can, similar to a Notion Doc, just modify that and basically add stuff to the specification.
Speaker 2 So I think that's going to be a whole set of agentic behavior that we're going to bring into the product management space.
Speaker 2 Similar for designers, right? Like today,
Speaker 2 a lot of designs are hand-drawn in Figma.
Speaker 2 I think tomorrow you're going to, as a designer, type effectively the same specification as a product manager, and you have an AI to render the code for the wireframes and then apply grounding out of your design system to make it look like your product, right?
Speaker 2 And so those disciplines get closer to each other.
Speaker 2 And a product manager will be able to, if they're good at writing a specification, create the whole change set, and the designer will be able to take over part of the product management role, and the engineer gets closer to these other roles too; if, you know, they're good at describing the feature, they can kind of take over that part as well.
Speaker 2 So I think that's where a lot of innovation is going to happen in rethinking how the traditional disciplines in a software engineering team are evolving in the coming years as we have more and more of these agents available and they're actually good at what they do.
Speaker 3 As you think about these different agents and these different use cases, do you think it's going to be the same company or product that provides all three?
Speaker 3 Do you think it's going to be one interface? Is it going to be a different interface?
Speaker 3 I'm sort of curious how you think about the actual flow in terms of very different users in some sense, although with some overlapping either responsibilities or goals.
Speaker 3 And what are the set of tools that they interact with? And is it a singular tool? Is it many? Is it one company? Is it many? Where does it launch out of? Like, how do you think about all that stuff?
Speaker 2 One of our strongest beliefs at GitHub is developer choice. And, you know, imagine a GitHub as a platform where you had only JavaScript libraries available or only React available to you.
Speaker 2 And we would tell you that's the only open source library you need to build an application.
Speaker 2 There would be a set of users using React,
Speaker 2 using GitHub because they love React, and the rest would go somewhere else because some other platform would offer them all these other open source components.
Speaker 2
In AI, I think we're going to see the same thing. We're going to see a stack or universe of companies that offer different parts of the software development lifecycle.
And developers pick the one that
Speaker 2
they like the most, that they have experience with, and that they're convinced is the future. A lot of that is part of a belief system.
Programming languages in many ways are
Speaker 2 very similar.
Speaker 2 And then if you look at the discussions between developers, you get the feeling they're very different from each other. But at the end of the day, they're all compiling down to
Speaker 2 an instruction set that runs on your Apple M4 chip or your Intel CPU or AMD or Nvidia or whatever.
Speaker 2 So I think we are going to have a stack of different tools, and there's going to be companies that offer
Speaker 2 all the tools. Well, not all of them, because you're never going to get all of your developer tools from one vendor anyway.
Speaker 2
Think about GitHub. We have a big platform, but then you still have an editor and an operating system and a container solution and a cloud that don't come from GitHub.
Like
Speaker 2 HashiCorp's Terraform or Vault is an example, or Vercel and Next.js as another example, right?
Speaker 2 Go into any random company in the Bay Area, and they're all going to have a different stack of tools that they've combined because they believe that's the best stack for them at this point.
Speaker 2 So I think in this AI world, we're going to see the same thing. You're going to have choice of different agents.
Speaker 2 We're already there where you have choice of different models. And
Speaker 2 some believe the Claude model is better, others believe the OpenAI model is better.
Speaker 2 The reality is somewhere in the middle, and different scenarios are better with different models. And I think the same will be true
Speaker 2 in this agentic future that we're heading into.
Speaker 3 Is that true given the generalizability that we're seeing?
Speaker 3 In other words, if you were to remove X percent of the models and you just got stuck with one of the ones you mentioned, up to a point, you'd still be extremely happy given the relative capabilities we had four or five years ago.
Speaker 3 In other words, it's a little bit of like we have so many great options and some things are better than others. But fundamentally, any one of these things would be spectacular
Speaker 3 by any sort of baseline metric.
Speaker 2 It depends on what end state we're talking about, right? Like if the singularity is coming, then none of that matters.
Speaker 3 Five years from now, five years.
Speaker 2 You know, we started Copilot almost five years ago, June 2020.
Speaker 3 And that was what? GPT-3 at that point?
Speaker 2 GPT-3 was really the early experiments. And then we got this model that then eventually became Codex, which was this code-specific
Speaker 2 version of the model. And today, that
Speaker 2 no longer really exists, right? Like today, everybody sits on top of one of these more powerful base models.
Speaker 3 Yeah, yeah. And that's kind of my point is to some extent,
Speaker 3 the generalizability started to take over. And so I'm just a little bit curious how you think about generalizability versus specialization and a five-year time horizon for agents.
Speaker 2 I can see that happening at the model layer,
Speaker 2 but it's getting like predicting a little bit of, you know, when do we truly have self-driving cars?
Speaker 2 You know, I've had a Tesla for 10 years, with self-driving and Autopilot in one form or another, and it still cannot make the left turn into my neighborhood.
Speaker 2 I can see that future happening, but I don't know when that is, and when the models are basically just all
Speaker 2 about equal.
Speaker 2 But I think for software developers,
Speaker 2 the lowest level only matters until there's differentiation at the higher layer of the stack, right?
Speaker 2 Like, I think programming language or open source libraries are great examples for that, because if you zoom out enough, they're all the same.
Speaker 2 At the end of the day, whether you're building an app with Swift or Kotlin or React Native,
Speaker 2 what does it matter? And that's just the intricacies of software development and the belief system that we have.
Speaker 2 And so I think the differentiation is going to come from
Speaker 2 both where the developer gets
Speaker 2 the best experience and doing their day-to-day.
Speaker 2 Where can I start my morning, pick up something I want to work on, explore my creativity and get the job done with the least amount of frustration and the highest amount of ROI in terms of what can I ship.
Speaker 2 And software development, you know, over the last 30 years, or actually the last 50 years, if you go back all the way to the 1970s when microcomputers came and all of a sudden you no longer had to share a mainframe with others, has always been about how can I take all my grand ideas that are way bigger than what I can actually achieve as an individual,
Speaker 2
how can I, you know, get that done faster? I don't think we are at the top of that exponential curve.
I think there's still a lot to come.
Speaker 2 The other question you could ask is, when do I, as the CEO of GitHub, get to the point where my backlog is empty?
Speaker 2 And I just don't believe that that point is ever coming.
Speaker 3 Yeah, there's a super related, interesting question to what you're saying, which is, for how long are humans making decisions on what agents to use?
Speaker 3 Because if you look at it, there are certain roles, a lot of the ones that you mentioned, developers, designers, et cetera, that have traditionally tended to be a little bit trend-based.
Speaker 3 You know, it's almost memetic what certain developers will use sometimes. And obviously there's like the dramatically superior products, and there's sort of clear choices around certain tooling.
Speaker 3
And sometimes it just feels like it's kind of cool. And so people are using it.
Same with programming languages, right?
Speaker 3 So it's almost an interesting question when the human component of decision-making goes out the window.
Speaker 3 Are the decisions that are made radically different? Because you're getting rid of trendiness.
Speaker 3 You're not going to use Go, you're just going to use Python or whatever.
Speaker 2 If I look at my team, how often as a CEO do I have to check in with them to see if what they're building is actually what I thought
Speaker 2 I want them to build when I gave them the task, right? So number one, the human that takes over a task or a feature or an epic, whatever,
Speaker 2 still has a loop
Speaker 2 with other team members to kind of like ensure what they're building is actually the right thing.
Speaker 2 I don't see a world where we can be specific enough when we give the agent work
Speaker 2 that it can just do it all by itself, unless the unit size is very, very small. The other side of that question, I think, is when do we get to the point where all software is personal software?
Speaker 2 And in fact, I no longer install an app from the App Store. I just
Speaker 2 use a natural language interface to build all the apps myself. And so I have basically completely personal software on my personal computer,
Speaker 2 my smartphone, instead of off-the-shelf software that is the same for all of us, where the user interface effectively is completely personalized. And
Speaker 2 we have science fiction movies, or action movies like Iron Man, right, where Jarvis is completely personalized to Tony Stark. And so I think that future will happen in the next five years for sure.
Speaker 2 The question is just how good Jarvis is going to be. Can I just tell it spring break is coming up, same hotel, same family, you know, and it books me the trip?
Speaker 2 And the only question I have to confirm is: do I do the $5,000 trip?
Speaker 3 One other thing that's been striking about GitHub and Copilot and everything else is the actual business success of all of it, right?
Speaker 3 And I think it's been quite striking on the earnings calls more recently that have been done.
Speaker 3 What can you share in terms of business and financial metrics and the impact that Copilot and GitHub more generally are having for Microsoft?
Speaker 2 Not a lot beyond what's in the earnings call.
Speaker 2 I'm trying to remember. I think the last number we shared was
Speaker 2 a few quarters ago, 77,000 organizations using Copilot. And back then,
Speaker 2
the number of paid users was 1.8 million paid users. We haven't shared an updated number since.
I can't share the latest number.
Speaker 2 But I think what's really interesting from these earnings calls, if you look at the number of logos that Satya has called out, it's across the whole spectrum of industries. It's not just
Speaker 2 cool startups.
Speaker 2 It's not just financial services institution. It's really every industry that has adopted Copilot.
Speaker 2 And I don't think there has been a developer tool that has been adopted with such velocity across the whole spectrum of software development, in any company size and in any industry.
Speaker 2 You know, if you think about it, $20
Speaker 2 compared to the salary of an average software developer in the United States is like, what, 0.1%,
Speaker 2 if at all.
Speaker 2 And then we're talking about
Speaker 2 25, 28% productivity gains on the end-to-end, 55% or higher on the coding task. But as we said earlier, developers do more than just coding.
Speaker 2 That's an incredible ROI on the dollar spent. And I think that's what is driving this adoption curve.
Speaker 2 And then any company is now a software company, and they all have the same problem described earlier. They have long backlogs and way too much work.
Speaker 2 And every time, you know, one of the managers goes to their team and asks them how long it takes to implement a feature, it becomes the Jim Kirk and Scotty joke:
Speaker 2 how long does it take to repair the warp drive? And you get an estimate that's outrageously long.
Speaker 2 And then it becomes a negotiation where the captain sets the deadline instead of the engineer actually estimating what's possible.
Speaker 2 And I think that's where a lot of the business success of Copilot is coming from.
Speaker 2 All the people writing software are frustrated by how long it takes, not because they don't think their engineers are good, but just because of the complexity of building software.
Speaker 3 How much do you think this pricing changes? And I know it's just speculation at this point, when you're actually replacing people.
Speaker 3 And I know in a lot of industries, it could be legal, it could be accounting, it could be coding.
Speaker 3 People say, well, eventually this will shift to value-based pricing because eventually, instead of just paying 20 bucks a month to make a person more productive, you're actually replacing a person who costs 50 or 100 or 200,000 a year, whatever it is, depending on what their role is, probably just in different disciplines.
Speaker 3 So I'm just sort of curious how you think about, is this eventually a rent a programmer and it's priced like a programmer?
Speaker 3 Does it all get commoditized and eventually something that would normally cost $100,000, $200,000, $300,000 a year costs you $1,000 a year? Like, how do you think about where this market goes?
Speaker 2 I think it's going to be compute-based or some unit that's a derivative of compute
Speaker 2 as a metric. So it's going to be cheap.
Speaker 2 It's going to be cheap in the same way that your dishwasher in your kitchen is not priced as a derivative of what a person would cost you to do your dishes every single day.
Speaker 2 But I think the buyer persona is not going to be willing to pay for a machine, you know, whether it's a dishwasher or an agent, a price that's the equivalent of a human developer.
Speaker 2 And I think that's actually the correct mindset because I don't believe that the AI agent is actually replacing the developer.
Speaker 2 The creative part is still coming from the software developer, the systems thinking. Predicting the future always has that fun part to it:
Speaker 2 I'm coming back on the podcast in a year or two, and you're telling me how wrong I was about my predictions.
Speaker 2 But I think there's a lot of decisions that are made in software development that a human has to make.
Speaker 2 What database, you know, what cloud; a lot of that is a function of the business and how it operates. You know, which cloud you're using is not necessarily a question of how much the cloud costs.
Speaker 2 It's a question, it is a strategic decision of the CTO or the leadership, engineering leadership team.
Speaker 2 And more and more, we see companies using more than one cloud because they don't want to have a dependency on just one single supplier in the same way that
Speaker 2 any random car manufacturer has multiple suppliers for airbags because they don't want to be stuck with the factory line when airbags are not deliverable from that one supplier.
Speaker 2 And so I think the agents,
Speaker 2 the price points will certainly go up as these agents become more powerful.
Speaker 2 We see that with OpenAI, where the highest tier now costs $200 for deep research and the o1 Pro model. And
Speaker 2 people see the
Speaker 2 value in that. And
Speaker 2 I think two years ago, if we had predicted that,
Speaker 2 we wouldn't have believed it. You're willing to pay $200 a month for a chat agent? Because the flip side of that in software is often that people
Speaker 2 feel like a $5 subscription for a mobile app is a lot of money.
Speaker 2 And you can just see that when you look into the reviews of apps that move from a one-time payment to a subscription model of how many people
Speaker 2 don't like that model because they feel like software is something that you buy once, like a CD, and then you own it.
Speaker 2 Definitely, we're going to see price increases that will be based on the value that you're getting out of it. Because, you know, the other side of that is that
Speaker 2 human developers are expensive because there's limited supply.
Speaker 2 Agents will have infinite supply that will only be limited by the amount of compute capacity GPUs available in data centers.
Speaker 1 Speaking of that unlock of supply, like we've been talking about
Speaker 1
what is the pricing of the code generation. I think there's also a question of just like what happens to the value of software at all.
Like everybody's been talking about Jevons paradox for a while.
Speaker 1
I don't want to ask about that, but maybe something more specific. You're from East Germany.
You remember the Trabant car?
Speaker 2
I do. I had one.
Oh, well, my parents had one.
Speaker 1 Oh, okay. Right.
Speaker 1 So you can tell me what it was actually like, but good for you guys, because it was this okay car, but it was the default car that ended up having this, like, 10-year waiting list because of the supply constraint relative to the rest of the world.
Speaker 1
And then as soon as the wall came down, you know, the demand completely collapses. Yeah.
Right. Because you have access to the world of cars. The pricing at least did.
Speaker 1 I guess one question I'd have for you is, I'm generally such an optimist about, like, the demand for software being very elastic, but I think of that as volume and quality and variation.
Speaker 1 Are there types of software that you think collapse in value when AI takes away, like, some of the scarcity of engineering?
Speaker 2 You know, with the Trabant, the waiting list was actually, I think, 17 years in the late 80s.
Speaker 1 17, not 10. Yeah.
Speaker 2 That, by the way, still exists.
Speaker 2 Today it exists with supercars, right? Like, often you can buy a supercar, you know, the top-end Porsche, the 911 R or whatever.
Speaker 2 And then the resale price is higher than the new price, because you can't just go to a dealer and get one; at the dealer, you have to buy like 100 Porsches first before you get a slot for that exclusive top-of-the-line Porsche. Ferrari is the same thing.
Speaker 2 And so the Trabant, the one that my dad owned, he sold, I think in '84 or '85, to a neighbor at a higher price than we bought it for, because you could shortcut the 17-year wait to get a car.
Speaker 2 And often parents had a, quote-unquote, subscription, like they signed up their kids for a car when the kids were still young, so you could actually get one by the time you reached
Speaker 2 adulthood and could do a driver's license. And so
Speaker 2 I think we're going to see, coming to your software question, right? Like we're going to see it going both ways, right?
Speaker 2 Like if you think about Copilot, Copilot costs for businesses $20 per user per month. That's actually almost exactly the same price as you pay for GitHub Enterprise, which is $21 per user per month.
Speaker 2 And so for storing all your repositories, managing all your issues, your whole software development lifecycle was $21 per user per month. And many used to perceive that as
Speaker 2 a lot of money for DevOps.
Speaker 2 And then we came out with Copilot auto-completion, and that was $20 a month. So, all of a sudden, that sub-feature of the software development lifecycle, auto-completion, cost $20.
Speaker 2 And that goes back to Elad's question, right?
Speaker 2 Like, if there's the value where you get the ROI and you get 25% productivity increases, yeah, I mean, you're willing to pay more for something that, you know, probably five years ago, if I had told you auto-completion was going to be a standalone feature, run by AI, that costs
Speaker 2 more than the average selling price for all of GitHub, you would have said, well, that sounds unlikely. And I think we're also going to see deflation of software prices.
Speaker 2 And so I think it's a mix of both. Some things we won't pay for it anymore.
Speaker 2 Nobody pays for the operating system
Speaker 2 anymore. And then at the same time, you pay way more than ever for your Netflix subscription and for your Office subscription and all those kind of things.
Speaker 2 So I think both of these things will be true at the same time.
Speaker 2 And it's all about how much value do you get for your business paying for that solution, whether it's doing it yourself or using something that you manage yourself or install on your own server.
Speaker 1 GitHub is foundational infrastructure for open source. So I'm sure you have like general opinions about what's happening in the open source ecosystem.
Speaker 1 Today, you can use Claude and OpenAI and Gemini in Copilot, but not necessarily open source models right now.
Speaker 2 Correct. So in Copilot, we have Claude, Gemini, and then OpenAI.
Speaker 2 And OpenAI has different models; I was just processing it in my head, wait, there's more than three models, but it's the GPT-4o model, o1, and the o3-mini model.
Speaker 2 In GitHub Models, which is our model catalog, we have open source or open-weight models like Llama,
Speaker 2 as an example, and then all kinds of other models like Mistral, Cohere, Microsoft's Phi-4 model. And the model catalog, while it's a separate feature within GitHub,
Speaker 2 you can add models in Copilot, because Copilot has extensions and you can actually reach from Copilot into the model catalog.
Speaker 2 And so if you want to just quickly run inference against Phi-4, you can do that by using the add-models extension in Copilot.
Speaker 2 So that way we have more models than just the ones that are packaged into Copilot.
Speaker 1 I didn't know that.
Speaker 1 What do you think is the relevance of open source versus the proprietary model APIs for developers in the future?
Speaker 2 The biggest thing I think is that open source is going to drive innovation. And we saw that with DeepSeek earlier this year.
Speaker 2 Or actually, a couple of weeks ago, it's not that long ago, even.
Speaker 2 Long year, yeah.
Speaker 2 It feels like already half a year has been passed instead of just a month and a half.
Speaker 2 But I think open source is going to drive innovation.
Speaker 2 We saw that with image models like Stable Diffusion. And now there's the Flux model from a startup actually not too far from my home base in Germany, in the Black Forest, in Freiburg.
Speaker 2 Black Forest Labs is actually the company behind Flux. And so we're going to see innovation, I think, on open source models that drive the other vendors.
Speaker 2 And this back and forth between the open source ecosystem and the proprietary, closed source companies will, I think, accelerate the whole space.
Speaker 2 DeepSeek is the most prominent example right now where you can look into this. The paper is open.
Speaker 2 The models are open. Some of them are
Speaker 2 fully open source under the MIT license. Others are
Speaker 2 open weights. And so you can look at the weights, and the code to run it is open source, but the weights themselves are under a somewhat proprietary license and
Speaker 2 governed by Chinese law and whatnot. And I think that is going to drive innovation and it's going to open up that space and it democratizes access.
Speaker 2 Because if you just want to play with a model, you don't have to run
Speaker 2
inference against the commercial API. You can just try it out yourself on your local machine and play with this.
And if you think about kids and students and research, that opens up a huge space.
Speaker 2 And that's ultimately
Speaker 2 what has always been part of our DNA at GitHub. Was that a satisfying answer, Sarah?
Speaker 1 Yeah,
Speaker 1 I think the most satisfying answer is like somebody wins, right? But I think that's a very hard thing to predict right now.
Speaker 2 What has won?
Speaker 2 iPhone or Android,
Speaker 2 Windows or Linux or macOS for that matter?
Speaker 2 I think we like to think about these
Speaker 2 binary battles in the tech industry. And the reality is that's not actually how that works and certainly not in the developer space.
Speaker 2 React hasn't won.
Speaker 2 And there's always going to be the next thing. And before React, there was jQuery or whatever
Speaker 2
library you preferred. I think there's going to be a next programming language after Python and TypeScript and Rust.
And Rust in itself
Speaker 2 wasn't really a thing five years ago.
Speaker 2 And so there's going to be more languages that are probably closer to human language to be more specific about the natural language layer and AI and the programming language layer that converts down to the CPU or GPU.
Speaker 2
And so I think there's no winning. There's always just the, you're playing the infinite game.
It's like Minecraft.
Speaker 2 Software is like Minecraft, and there's no winning in Minecraft. You can win little battles, but they're isolated to a certain sub-challenge or whatever quest.
Speaker 2 But ultimately, we're building a bigger and bigger world of software. And there's always going to be the next big thing.
Speaker 1 That's a funny analogy. If I think about any individual developer, like there's something people have been saying to me, developers of a particular ilk, right?
Speaker 1 Really strong technical people who are more experienced, not all of them, but, like, more experienced systems developers, often people very attached to Rust.
Speaker 1 And they'll say basically, like, they're worried about the next generation of developers building the taste and understanding of architectural choices and the trade-offs and corner cases of how a particular implementation can fail given some shape of data, given their experiences of the actual implementation, right?
Speaker 1 And so they're worried, you know, obviously the right thing to do for anybody who wants to win that next level of Minecraft in 2025 is like use AI aggressively, learn to use it.
Speaker 1 But like, does that concern from this segment of, I'm sure you've heard it, does that concern like resonate with you at all?
Speaker 1 Like, can you foster the requisite depth of understanding of engineering at an abstract level when we're not writing the code? Or is it like a silly, silly concern?
Speaker 2 I wouldn't call it silly because obviously, you know, there's some truth to that, right? It's easy, you know, to cheat at a programming exercise or advent of code and those kind of things.
Speaker 2 As these AI models get better, these competitions of who's the best hacker or coder are going to have to move to a whole different level where you assume that the developer is using AI to solve the challenges because otherwise
Speaker 2 it's going to be way too easy.
Speaker 2 If you think about the next generation of developers, you know, maybe not 2025 but 2035: look, you mentioned me growing up in East Germany. Then the wall fell, and I bought a Commodore 64, but there was no internet.
Speaker 2 And so I bought books and magazines, and that was it, right? Like, there was no forum I could go to and ask questions. I went to a computer club
Speaker 2 every Wednesday or so until nobody there had anything to say anymore that I didn't know already, right?
Speaker 2 If you take that and compare it to today, the kids of today, and those that want to learn coding, have an infinite amount of knowledge available to them.
Speaker 2
And you know what? Also an infinite amount of patience. Because Copilot doesn't run out of patience; parents do.
I am one.
Speaker 2 And so it's incredibly democratizing to have AI available if you want to learn coding. Your parents don't have to have any technical background.
Speaker 2 All you really need is an internet connection on your mobile phone
Speaker 2 and one of these Copilots or ChatGPTs or whatever you prefer, and you can start asking coding questions, and you can ask about Boolean logic and about systems thinking, and you can go infinitely deep on any of those questions and
Speaker 2 traverse to other topics as you like, right? And so I think we're going to see a new generation of
Speaker 2 humans that grow up with the technology. And for them, it's just natural to leverage their personal assistant, their personal set of agents.
Speaker 2
I recently started calling it the orchestra of agents. And you're the conductor of that orchestra of agents.
And they know how to do that.
Speaker 2 And so they can achieve in the same amount of time so much more than we could
Speaker 2 in the last 30 years. And I think that's incredibly exciting because, again,
Speaker 2 find me a developer that doesn't have this big idea for that computer game or software system or feature that they always wanted to build and never has the time for.
Speaker 2 Like my engineers talk much more about being overcommitted and burned out and not having enough time for all the things I'm asking for, and the customer is asking for, and the security team is asking for.
Speaker 2 And so I think that's just where we're heading and how this is going to be super exciting, both actually in open source as well, right?
Speaker 2 Because open source sustainability is another big topic that we could probably spend another hour on, and in any kind of software that people want to build.
Speaker 1 I definitely agree with that
Speaker 1 excitement and optimism. I think about my three kids and
Speaker 1 like what they would be able to learn at what pace, um, with the you know, the AI resources that people will have. And I'm incredibly jealous.
Speaker 1 I'm like, I could be much better as an engineer, so much faster with, as you said, the infinite patience and understanding of today's models.
Speaker 1 By the way, I was very lucky, my parents are both engineers, right? But you know, it's a very human dynamic where I'd ask a question, and my dad would be like, It's logic, Sarah.
Speaker 2 I'm like, oh no.
Speaker 1 Can I ask you, you know, maybe a more personal question to close? Like East Berlin, you know, you have this unique experience of this really rapid technological change after reunification.
Speaker 1 Do you think that informs at all how you think about like the speed of the current AI transition and how like users and human beings will react to it?
Speaker 2 I always wanted to believe that a lot of my life has been, you know,
Speaker 2 defined by that one moment of change in 1989.
Speaker 2 And, you know, I remember the night when the wall fell, or when it was announced that the wall would be opened. It was a Thursday night, and then Friday was normal school.
Speaker 2
Saturday was still school as well, a half day in school. And I think I was one of four kids that showed up in my class, and then they sent us home.
And
Speaker 2 we actually crossed over to West Berlin. And I think the thing that is important for that generation of kids that lived through that change is that they can no longer return to their childhood.
Speaker 2 You know, home is gone. Like, you know, there isn't like that store in the corner that's the same as it was like 40 years ago.
Speaker 2 And the schools are gone, the system is gone, the traditions; all that dissolved into that new world. And so it's a bit like when you're moving from one country to another, which I then
Speaker 2 did 10 years ago as well, when Microsoft bought my company. Once you have done that step in your life,
Speaker 2
you gained a whole new perspective on things. And I think that was reunification in 1990.
And then
Speaker 2 through the steps of my life, including becoming the GitHub CEO through random decisions, or decisions that felt random at the time. This is how I got here, and this is how I look forward.
Speaker 2 And I'm optimistic about the future
Speaker 2 while recognizing my past and taking some of those experiences when I talk with you guys and
Speaker 2 reflect on what it was like in the
Speaker 2 90s to program on the Commodore 64 before and after the internet, right? Before and after open source, before and after the cloud, before and after mobile. And now we have before and after AI.
Speaker 2 And there's no looking back.
Speaker 2 The future will be that we have AI for almost everything we do in our lives.
Speaker 2 If we want to, you know, you can still always throw your cell phone into the corner and enjoy your day without the internet.
Speaker 1 This has been great, Thomas. Thanks so much for the conversation.
Speaker 2 Thank you so much for having me.
Speaker 3 It was good to connect, sir. I appreciate the time and everything else.
Speaker 1
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
Speaker 1 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.