Google's Gemini 3 Is Here: A Special Early Look
Press play and read along
Transcript
Speaker 1 AI companies have unique business models, each with distinct billing needs. Stripe is the go-to choice for AI leaders, from early-stage startups to scaled enterprises.
Speaker 1 With Stripe billing, you can support any business model and easily align your monetization strategy with customer value.
Speaker 1 Join the ranks of 78% of the Forbes AI50 and millions of businesses worldwide that trust Stripe to help them build more profitable, scalable businesses. Discover more at stripe.com.
Speaker 3 Casey, we have a special emergency podcast episode today about the launch of Gemini 3.
Speaker 2 Yes, Kevin, hotly awaited, much discussed among AI nerds here in Silicon Valley. We are finally about to get our hands on the genuine article.
Speaker 3 Yeah, so normally we wouldn't break our Friday publication schedule to publish a special episode just about our new model coming out from one of the big AI companies.
Speaker 3 They're releasing models all the time. But there are a couple reasons that we thought it was worth doing this this week to talk about this model, Gemini 3, in particular.
Speaker 3 The first is that we got some time with Demis Hassabis and Josh Woodward, two of the leading AI executives at Google. Demis, of course, is the CEO of Google DeepMind, which is their in-house AI lab.
Speaker 3 And Josh Woodward is the VP of the Gemini team and some other stuff there at Google. So we were excited to talk to them and ask them about this big new model release.
Speaker 3 But I think there are a couple of other reasons we were interested in doing this as well.
Speaker 2 Yeah, I mean, one big thing, Kevin, is just that maybe more than other model releases, this one seems to have the attention of Google's competitors.
Speaker 2 We're hearing a lot of whispers from folks who work at other AI labs that, hmm, it seems like Gemini 3 has managed to figure some things out in a way that may be bad for their businesses.
Speaker 3 And I think around the AI industry, there's sort of this feeling that Google, which kind of struggled in AI for a couple of years there, they had the launch of Bard and the first versions of Gemini, which had some issues.
Speaker 3 And I think they were seen as sort of catching up to the state of the art.
Speaker 3 And now i think the question is like is this kind of them taking their crown back um so we'll get into all that with demis and josh but let's just talk casey about what we know about gemini 3 they held a briefing early this week and told us a little bit about the the new model and what it can do so what did we learn about gemini 3 Yeah, well, so in terms of what it can do, which is always the most interesting to me, Google shared a few different things.
Speaker 2 One, in addition to saying all the things you would expect, like it's better at coding and it's better at vibe coding, it also is going to do some new things around generating interfaces for you when you ask it a question.
Speaker 2 So nowadays, you ask most chatbots a question, it'll spit back an answer in text.
Speaker 2 Maybe it shows you an image. According to the Google folks, Gemini 3 is just going to start building custom interfaces for you.
Speaker 2 So they showed an example where somebody wanted to learn about Vincent Van Gogh, the painter, and Gemini 3 just sort of like coded up an interactive tutorial that had all sorts of like images and interactive elements.
Speaker 2 They showed another example that involved building a mortgage calculator for buying a home over a million dollars, which is the lowest amount of money that anyone at Google can imagine spending on a home.
Speaker 2 So these are the kinds of things that you can expect to find in Gemini 3, Kevin.
Speaker 3 Yeah, so I would say the theme of the briefing and of the materials that Google shared ahead of the Gemini 3 launch was this is just kind of better than their last model, Gemini 2.5 Pro, in basically all respects.
Speaker 3 Some of the benchmarks that caught my attention, one was this benchmark test called Humanity's Last Exam, which is sort of a very hard, interdisciplinary exam that consists of a bunch of questions at basically a graduate student or PhD level.
Speaker 3
And their previous model, Gemini 2.5 Pro, got about a 21.6% on that test. And Gemini 3 Pro gets a 37.5% on that test.
That's basically the story of all of these benchmarks.
Speaker 3 They gave more than a dozen examples of various benchmarks where the new model just beats the old one handily.
Speaker 3 And, you know, to a lot of people, I think that may not matter.
Speaker 3 Most people who are using Google's AI products are probably not out there trying to solve like novel problems in physics, but their basic pitch for this is just like, this is a state-of-the-art model.
Speaker 3 Anything that you could do with ChatGPT or Claude or even the older versions of Gemini, you can do better with Gemini 3 Pro.
Speaker 2 They also talked about testing what they're calling the Gemini agent, which is going to be able to do one thing in particular that I've been waiting for somebody to do forever, which is look through your inbox, understand its contents, propose replies, kind of, you know, organize emails together, and really sort of help you get your inbox under control in a way that I personally have never been able to.
Speaker 2 So we basically only saw a few animated GIFs about that, but that will definitely be one of the first things that I try when I get my hands on Gemini 3.
Speaker 3 Yeah, and they are not, we should say, rolling this out to everyone right away.
Speaker 3 It's going to be available this week for users in the Gemini app and also in the AI mode, which is sort of the tab off to the side of the main Google search engine.
Speaker 3 It will also be available for developers in various products, but they are not sort of saying when this will come to things like the Gemini integrations in Google Docs or Gmail, these very popular things that, you know, are used by billions of people a day.
Speaker 3 But I thought it was interesting that they have brought this model to Google search, albeit in this AI mode that's not sort of the main search bar.
Speaker 3 That to me suggests that they feel like they can serve this model cheaply enough to make it potentially something that billions of people could use and that that would not melt their servers and incur billions of dollars of costs.
Speaker 2 Yeah, so far they say that the usage keeps going up for AI overviews and every quarter they continue to make more money. So it seems to be working out for them.
Speaker 2 Not working out for the rest of the web, but it's working out well for Google.
Speaker 3 Yeah, but I think that's like obviously Google's big advantage here over their competitors is that, you know, they have products that are used by billions of people a day and they can kind of shove Gemini 3 into those products over time and just get more and more usage and get more data and use that to improve their models.
Speaker 2 Yeah, which is why we always tell students when they ask us for advice, step one, build an illegal monopoly.
Speaker 3 And speaking of students, the other notable announcement that Google is making this week is that they are giving all U.S.
Speaker 3 college students a year of free access to a paid version of Gemini, which is, I think, a smart move.
Speaker 3 I feel a little gross about it, like essentially telling students, hey, why don't you use this to maybe do some of your homework, maybe help you with your exams? We'll give you the first hit for free.
Speaker 2 Yeah, you know, I was also struck during the briefing that we had this morning that I believe three different people used the phrase learn anything.
Speaker 2 This seems like it has become a very prominent plank of Google's messaging: they're presenting Gemini as a learning tool, which maybe is just sort of a euphemism for a do-your-homework tool.
Speaker 2 I don't know.
Speaker 3
Yes. Okay, so that is what we know about Gemini 3.
We will be doing our own testing and reviewing of Gemini 3 once it is fully out on Tuesday.
Speaker 3 But for now, we wanted to just give you the basics and also bring you our interview with Demis Hassabis and Josh Woodward of Google DeepMind.
Speaker 3 And before we get to that, we should obviously make our AI disclosures. I work for The New York Times Company, which is suing OpenAI and Microsoft over the training of large language models.
Speaker 2 And my boyfriend works at Anthropic.
Speaker 3 Demis and Josh, welcome to Hard Fork.
Speaker 4 Great to be here.
Speaker 5 Thank you.
Speaker 3
So two years ago, Sundar Pichai told us that Bard, rest in peace, was a souped-up Civic that was in a race with more powerful cars.
What kind of car is Gemini 3?
Speaker 5 That's a good one. Demis, do you want to take it?
Speaker 4 Well,
Speaker 4 I hope it's a bit faster than a Honda Civic.
Speaker 4 You know, I don't really think of it in terms of cars. Maybe it's one of those cool drag races.
Speaker 3
Yeah. So people are really excited about this model.
We have been hearing from folks that have been sort of early testing it.
Speaker 3 Obviously, you guys have shown off a lot of the benchmarks, very impressive.
Speaker 3 What can Gemini do on a concrete level that previous AI models couldn't?
Speaker 5
Well, I'll jump in. Maybe a couple of things that stand out.
One, we're starting to see this model really excel on reasoning and being able to think many steps at the same time.
Speaker 5
Sometimes models in the past would lose their train of thought, lose track. This one's way better at that.
The other thing you'll see tomorrow as well is all kinds of new generative interfaces.
Speaker 5 This is our best model yet at being able to create new types of interfaces. It gives people really a custom sort of design and sort of answer to their questions.
Speaker 5 And then maybe the third thing I would say is we've put a lot of investment in coding itself.
Speaker 5 And so with a lot of the coding examples, you'll see some new products coming out, like Google Antigravity, that will also kind of showcase that.
Speaker 2 There's been some discussion that for average users, the chat use case can feel solved, that sort of average users of products like Gemini almost can't even think of a question to ask it that will generate something that feels meaningfully different from what they were able to get in the last model.
Speaker 2 To what extent does that feel true to you in Gemini 3? And to what extent do you think average folks are really going to notice a difference?
Speaker 5 Yeah, one of the things, I guess, we're seeing in some of the testing, and Demis, feel free to chime in too, is I think these are really...
Speaker 5 For us, this is a model that's more concise, it's more expressive, it starts to present information in a way that's much easier to understand.
Speaker 5 And I think for most people, that's going to be a big immediate effect. And then I think what starts to get interesting is how these models start to interact with other types of information.
Speaker 5 So we talk a lot about how students are going to be able to learn with this model, or even how this model can connect to other types of data you might have in other Google products with your permission.
Speaker 5 These are the ways I think we're starting to show kind of it's going beyond just the standard text kind of QA back and forth.
Speaker 4 Yeah, I think I'd add to that, just like, you know, its general reliability on things is incredible, you know, you'll notice that when you use it.
Speaker 4
I think also we worked quite hard on the persona, as we call it internally, like the style of it. I think it's more succinct.
I think it's more to the point. It's helpful.
Speaker 4 I feel like it's got a better style about it. I find it more pleasant to brainstorm with and use.
Speaker 4 And then I think, you know, I think there are various things where there's almost a step change. I feel like it's crossed a sort of threshold of usefulness on things like vibe coding.
Speaker 4 I've been getting back into my games programming.
Speaker 4 I'm going to set myself some projects over Christmas on that because I feel like it's actually got to a point where it's incredibly useful and capable on front end and things like this that perhaps previous versions weren't so good at.
Speaker 3 Demis, the last time we had you on the show in May, you said that you think we're five to ten years away from AGI and that there might be a few significant breakthroughs needed between here and there.
Speaker 3 Has Gemini 3 and observing how good it is changed any of those timelines? Or does it incorporate any of those breakthroughs that you thought would be necessary?
Speaker 4 No,
Speaker 4 I think it's sort of dead on track
Speaker 4
if you see what I mean. We're really happy with this progress.
I think it's an absolutely amazing model
Speaker 4 and is right on track of what I was expecting and the trajectory we've been on actually for the last couple of years since the beginning of Gemini, which I think has been the fastest progress of anybody in the industry.
Speaker 4 And I think we're going to continue doing that trajectory. And we expect that to continue.
Speaker 4 But on top of that, I still think there'll be one or two more things that are required to really get the consistency across the board that you'd expect from a general intelligence.
Speaker 4 And also improvements still on reasoning, on memory, and perhaps things like world model ideas that you also know we're working on with SIMA and Genie.
Speaker 4
They will build on top of Gemini, but extend it in various ways. And I think some of those ideas are going to be required as well to fully solve physical intelligence and things like that.
So
Speaker 4
both are true. I'm really happy with the progress of Gemini 3.
I think people are going to be pretty, pretty pleasantly surprised,
Speaker 4 but it's on track of what we were expecting the progress to be. And I think that means still five to 10 years with one or two more perhaps breakthroughs required.
Speaker 2 You mentioned Gemini 3's style. There's been a lot of discussion recently about AI companions, the relationships people are developing with them.
Speaker 2 How do you think about Gemini 3's personality and what kind of relationship do you want users to have with it?
Speaker 5 I would say in the app itself,
Speaker 5 we see it on the team a lot as almost like a tool or it's something you're using to kind of work through and kind of cut through your day.
Speaker 5 And so whether it's kind of if it's helping on different types of questions you have or helping you create things, that's really where we see it really kind of excelling and kind of the direction we want to see it.
Speaker 5 I think if you zoom out, if you look at Gemini or some of our other projects like Notebook LM or Flow, we're really kind of trying to think through how does AI really be this superpower, kind of super tool in your toolbox that you can use, whether it's for writing or researching or creating films or whatnot.
Speaker 5 And so that's really more where we're focused.
Speaker 5 I think over time, we're really interested on the team to be able to track things like how many tasks did we help you complete in your day?
Speaker 5 That's the new type of metric that I think we get excited about, and it's sort of the way the original Google search worked.
Speaker 5 You would come to it, you would sort of try to get an answer or get sent to a page and sort of move on from there.
Speaker 3 Well, that all sounds very good and responsible, but I'm wondering about all the viral engagement you're leaving on the table by not making this thing an erotic companion.
Speaker 3 Big oversight.
Speaker 5 No comment.
Speaker 3 Some of your competitors have been very nervous in the days and weeks leading up to Gemini 3. I think they've started hearing the same rumblings that we have about this model being quite good.
Speaker 3 And maybe the narrative shifting from sort of Google playing catch up in AI to now sort of being on top of the race or at least in a leadership position there.
Speaker 3 Do you feel like Google is ahead in the AI race right now?
Speaker 4 Look, as you guys know very well, it's a ferocious, you know, competitive environment, probably the most competitive there's ever been.
Speaker 4 So one can never, you know... really the only important thing is your rate of progress from where you are. And that's what we're focusing on, and we're very happy about that.
Speaker 4 I mean, I don't really see it as a sort of like, you know, we were back in the lead or something like that. We've always pioneered the research part of this.
Speaker 4 I think it's like getting into our groove in making sure that's reflected downstream in all of our products. And I think we're really getting into our stride there.
Speaker 4 I think you saw that actually last IO, I would say.
Speaker 4 And we're getting better and better at that, like with GDM being sort of the engine room of Google. And
Speaker 4 of course, there's a Gemini app, there's Notebook LM, these AI-first products, but there's also powering up all these amazing existing Google products, whether that's Maps, YouTube, Android, you know, search, of course, with AI-first
Speaker 4 features and actually in some cases, reimagining things from an AI-first perspective with, you know, often Gemini under the hood. And that's going amazingly well.
Speaker 4 And I think we're only midway through that evolution, but it's very exciting to see how
Speaker 4
much value and excitement our users are getting when they see each of those new features. And, you know, for example, workspace and Gmail and so on.
There's almost endless possibilities there.
Speaker 4 So we're really excited about that, as well as all of these AI-first products that we're also imagining and prototyping.
Speaker 2 We had a historian on the show last week who was using an unreleased Google model in AI Studio, and it had sort of blown his mind with how it was able to transcribe these very old documents and reason correctly about, you know, what the measurements of sugar were in this sort of 1800s fur trade in Canada.
Speaker 2 Do you think you can tell us once and for all, was this man using Gemini 3?
Speaker 5 Not sure about that one.
Speaker 3 Okay.
Speaker 5 I will say the model is, though, quite amazing at making these connections. And I don't know if the historian was using kind of photos of old documents or diaries or whatnot.
Speaker 2 That's what he was doing.
Speaker 4 Yeah, that most certainly was.
Speaker 5 Okay, it's very good at this.
Speaker 5 And, you know, for someone like me who has pretty poor handwriting, you can give it a page of notes and it'll kind of take that and run with it, no problem, no sweat.
Speaker 3 You mentioned on this call that you're going to be integrating this into search in the AI mode that sort of is a side tab on the main Google search engine.
Speaker 3 Does that mean that you found a way to serve this model more efficiently and cheaply than previous models?
Speaker 4 I think we're always on the cutting edge.
Speaker 4 I feel like the thing we do really well, apart from the overall performance of our models and getting better and better at that, is the efficiency of our models and the distillation techniques and many, many other techniques that we sort of created and pioneered that we're now putting to use.
Speaker 4 Obviously, it's necessary for us because we have extreme use cases of things like AI overviews and others that we have to serve billions of users.
Speaker 4 And then, of course, some of our Cloud enterprise customers really appreciate that efficiency, cost efficiency too.
Speaker 4 So, we've always tried to be on this Pareto frontier of cost to performance.
Speaker 4 And wherever you want to be on that frontier, if you value performance most or if you value cost the most, there'll be one of the models in the model family for you.
Speaker 4 So, of course, we're only announcing Pro today, but we are
Speaker 4
also working on the other models in the family for the 3.0 era. So, you'll see a lot more about that pretty soon.
Yeah.
Speaker 2 It seems like every time we see the release of a new Frontier model, we get to revisit the discussion about scaling laws. And are we beginning to see diminishing returns?
Speaker 2 And I can predict a few Twitter accounts that will probably have something to say about this over the next few days.
Speaker 2 So, I thought I would just sort of ask you, before we have that discourse, how are you guys thinking about that in relation to Gemini 3?
Speaker 4 Yeah, we're very happy with the progress Gemini 3 represents over 2.5. So I would say,
Speaker 4 sort of actually referencing to what we discussed earlier, that the progress is basically what we were expecting and on track, and we're really pleased with it.
Speaker 4
But that's not to say that there isn't some kind of diminishing returns.
People, when they hear diminishing returns, they think, is it zero or exponential, right?
Speaker 4 But there's also in between. So there can be diminishing returns; it's not going to exponentially double with every era, but it's still well worth doing, right? An extremely good return on that investment. So I think we're in that era.
Speaker 4 And then, you know, as I said, my suspicion is, although we'll see, is that still one or two more breakthroughs are required, research breakthroughs are required to get all the way to AGI.
Speaker 4 But in the meantime, you're obviously going to need as scaled-up as possible versions of these multimodal foundation models that we're building today and still seeing great progress on.
Speaker 3 Right.
Speaker 3 Which of the many benchmarks that you showed off today do you feel like is going to matter most to the average user?
Speaker 5 Oh, that's a good question. I think most people don't look at the benchmarks as closely as we do, but the benchmarks are always a proxy, right?
Speaker 5 So you look at something like cracking the 1500 Elo on LMArena,
Speaker 5
that's great. But what really matters is kind of the user satisfaction in the products too.
And I think what's been encouraging to us is these are still moving in the same direction.
Speaker 5 They're good proxies for each other. And so ultimately, I think we'll put out all the benchmarks and we're very proud of them and they represent amazing progress.
Speaker 5 But you also have to be able to translate that into product experiences that matter. And so we try to do both with every one of these releases.
Speaker 3 Any new dangerous capabilities or safety concerns that come with the increased power of the model?
Speaker 4 Well, we've taken quite a long time on this model, because it's frontier and,
Speaker 4 you know, has some new capabilities and it's very capable, as you can see from the benchmarks. And
Speaker 4
as Josh said, we make sure not to over-index internally on those benchmarks. They're just a proxy for overall performance.
And that's why we care about them across the board.
Speaker 4 And then ultimately, how our users experience them. But we spend a lot of time
Speaker 4 on testing, safety testing, all the different dimensions with the safety institutes and also external testers that we work with as well, as well as, of course, doing a ton of internal testing.
Speaker 4 So I would say this is our most thoroughly tested model so far.
Speaker 2 Do you want to mention any of those sort of new capabilities that popped up, whether or not it was for a safety thing?
Speaker 2 Was there something in there where you thought, okay, yeah, we definitely need to make sure we're sending this to a bunch of external researchers?
Speaker 4 Yeah, well, look, it's just making sure we've worked really hard on things like tool call usage and function calling and these kinds of things.
Speaker 4 Obviously, they're super important for coding capabilities, and developers want that, and so on. And it's very important in general for reasoning, but it also makes them more capable
Speaker 4 for riskier things too, like cyber.
Speaker 4 So, we have to be, you know, sort of doubly cautious as we improve those dimensions for all the good use cases, and we're continually checking on all those kinds of measures so that
Speaker 4 they can't be misused.
Speaker 3 Are we in an AI bubble?
Speaker 3 I think
Speaker 4 it's too binary a question, I would say. I think, I mean, my view on this is just strictly my own opinion, is that there are some parts of the AI industry that are probably in a bubble.
Speaker 4 You know, if you look at like seed investment rounds being multi-$10 billion rounds with basically nothing, it seems,
Speaker 4 I mean, there's talented teams, but it seems like
Speaker 4 that might be the first signs of some kind of bubble.
Speaker 4 On the other hand, you know, I think there's a lot of amazing work and value, at least from our perspective, that we see: not only are there all the new product areas, so Gemini app, Notebook LM, but thinking more forward, robotics, gaming.
Speaker 4 I mean, there's incredible uses of not just Gemini, but some of our other models, like Genie. You can imagine, with my old games background,
Speaker 4
you know, I'm itching to think about what could be done there. And drug discovery, which we're doing with Isomorphic Labs, and Waymo.
And so there's all these new greenfield areas.
Speaker 4 They're going to take a while to mature into massive multi-hundred billion dollar businesses.
Speaker 4 But I think that there's actually potential for half a dozen to a dozen there that I think Alphabet will be involved with, which I'm really excited about.
Speaker 4 But also immediate returns, we got, of course, the engine room, you know, this is the engine room part of Google, where we're pushing this into all of these incredible, you know, multi-billion user products that people use every day.
Speaker 4
And there's just almost... we have so many ideas. It's just about execution.
Like, how would you reorganize workspace around that? Android, YouTube, there's just so much potential there.
Speaker 4 And I think a lot of that will also bring in
Speaker 4 near-term revenue and direct returns while we're also investing in the future, not to speak of, you know, cloud revenue and TPUs and all of that,
Speaker 4
which I think is also going to be huge. So I feel really good about where we are as Alphabet, whether or not there's a bubble or not.
I think our job is to be winning in both cases, right?
Speaker 4 If there's no bubble and things carry on, then we're going to take advantage of that opportunity.
Speaker 4 But if there is some sort of bubble and there's a retrenchment, I think we'll also be best placed to take advantage of that scenario as well.
Speaker 2 All right, let's imagine it's Thanksgiving coming up in the Bay Area, and one of our listeners, you know, changes the subject from politics, which is upsetting everyone, to AI, to give people something to be excited about.
Speaker 2 And someone says, hey, I heard Gemini 3 just came out. Like, what can it actually do?
Speaker 2 What's the example that you would have our listeners show their friends, whether it's on their phone or their laptop, to say, get a load of this, and save Thanksgiving?
Speaker 5 Yeah, I don't know if it'll save Thanksgiving, but it could probably provide some laughs. You know, our image models in Gemini are still the best in the world.
Speaker 5 So what I would say is, grab your phone, it can be, you know, iPhone, Android, doesn't matter, pull it out. You can take a selfie, put yourself in it, and edit it.
Speaker 5
People are still doing that at huge amounts and it's great fun. And then I think you can then show off any kind of other capabilities in the new Gemini 3 alongside it.
But this is what we're seeing.
Speaker 5 People kind of coming for a lot of these interesting use cases and then starting to try other parts of the app, too.
Speaker 3 You heard it here. Nano Banana will save Thanksgiving dinner.
Speaker 3
Gentlemen, thank you. It's great to talk.
And thanks for making the time. Appreciate it.
Thanks for having me. Thank you all.
Thanks, guys.
Speaker 1 1.3%.
Speaker 1 It's a small number, but in the right context, it's a powerful one. Stripe processed just over $1.4 trillion last year.
Speaker 1 that figure works out to about 1.3 percent of global gdp and powering that figure are millions of businesses finding new ways to grow on stripe like salesforce open ai and pepsi learn how to build the next era of your growth at stripe.com slash enterprise picture this you land the perfect name for your startup only to find peter from delaware owns the dot com your options pay up or settle for a domain that looks looks like a Wi-Fi password.
Speaker 6 But thanks to .tech domains, there's another solution. With .tech, you get the domain name you want that instantly says you're building tech.
Speaker 6
Tech companies worldwide use .tech domains, like CES.tech and 1x.tech. Don't settle.
Visit a trusted platform like GoDaddy and get your .tech domain today.
Speaker 6 The University of Michigan was made for moments like this.
Speaker 6 When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.
Speaker 6 From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible. Wherever we go, progress follows.
Speaker 6 For answers, for action, for all of us, look to Michigan. See more solutions at umich.edu slash look.
Speaker 3
Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant.
Today's show is engineered by Chris Wood. Original music by Diane Wong, Rowan Nemisto, and Dan Powell.
Speaker 3 Video production by Sawyer Roque, Pat Gunther, and Chris Schott. You can watch this full episode on YouTube at youtube.com slash hardfork.
Speaker 3 Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us, as always, at hardfork at nytimes.com.
Speaker 7 JP Morgan Payments helps you drive efficiency with automated payments and intelligent algorithms across 200 countries and territories. That's automation-driven finance.
Speaker 7 That's JPMorgan Payments.
Speaker 8
JP Morgan, Internal Data 2024. Copyright 2025, JPMorgan Chase & Co. All Rights Reserved. JPMorgan Chase Bank, N.A., Member FDIC.
Deposits held in non-U.S. branches are not FDIC insured.
Speaker 8
Non-deposit products are not FDIC insured. This is not a legal commitment for credit or services.
Availability varies. Eligibility determined by J.P. Morgan Chase.
Speaker 8 Visit jpmorgan.com slash payments disclosure for details.