Erik Bernhardsson on Creating Tools That Make AI Feel Effortless
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Bernhardsson
Show Notes:
0:00 Introduction
0:22 Erik's early interest in ML infra
1:22 Founding Modal Labs
4:17 State of GPU use today and what’s to come
7:14 Modal's end-to-end vision
9:00 Differentiating amongst competition
10:20 Cloud vs on-premise
12:35 Popular AI models
13:20 Gaps in AI infrastructure
14:55 Insights on vector databases
16:48 Training models vs off-the-shelf models
17:47 AI’s impact on coding and physics
22:14 AI's impact on music
Press play and read along
Transcript
Speaker 1 Today I'm chatting with Erik Bernhardsson, founder and CEO of Modal. Modal developed a serverless cloud platform tailored for AI, machine learning, and data applications.
Speaker 1 And before that, Erik worked at Better.com and Spotify, where he led Spotify's machine learning efforts and built the recommender system. Well, Erik, thanks so much for joining me today on No Priors.
Speaker 2 Yeah, thanks. It's great to be here.
Speaker 1 So if I remember correctly, you worked at Spotify and helped build out their ML team and recommender system, and then you were also at Better.com.
Speaker 1 What inspired you to start Modal, and what problem were you hoping to solve?
Speaker 2 Yeah, I started at Spotify a long time ago, 2008, and spent seven years there. And yeah, I built a music recommendation system.
Speaker 2
And back then, there was like nothing really in terms of data infrastructure. Hadoop was like the most modern thing.
And so I spent a lot of time building a lot of infrastructure.
Speaker 2 In particular, I built a workflow scheduler called Luigi that basically no one uses today.
Speaker 2 I built a vector database called Annoy that, you know, for a brief period people used, but no one really uses today. So I spent a lot of time building a lot of that stuff.
Speaker 2 And then later at Better, I was the CTO and thinking a lot about developer productivity and stuff. And then during the pandemic, I took some time off and started hacking on stuff.
Speaker 2 And I realized I always wanted to build basically better infrastructure for these types of things, like data AI, machine learning. So pretty quickly I realized this is what I wanted to do.
Speaker 2 And that's sort of the genesis of Modal.
Speaker 1 That's cool. How did that approach evolve or what are the main areas that the company focuses on today?
Speaker 2 So I started looking into,
Speaker 2 first of all, just like, what are the challenges with data AI, machine learning infrastructure?
Speaker 2 And started thinking about from like a developer productivity point of view, what's a tool I want to have? And I realized
Speaker 2 a big sort of challenge is like working with the cloud is arguably kind of annoying.
Speaker 2 And like, as much as I love the cloud for the power that it gives me, and I've used the cloud since way back, 2009 or so, it's actually pretty frustrating to work with.
Speaker 2 And so in my head, I had this idea: what if you make cloud development feel almost as good as local development, right? Like, with fast feedback loops.
Speaker 2 And so I started thinking about like, how do we build that and realized pretty quickly, like, well, actually, we can't really use Docker and Kubernetes. We're going to have to throw that out and
Speaker 2 probably going to have to build our own file system, which we did pretty early and build our own scheduler and build our own container runtime.
Speaker 2 And so that was basically the first two years of Modal: just laying all that foundational infrastructure layer in place.
Speaker 1 Yeah. And then in terms of the things that you offer today for your customers, what are the main services or products?
Speaker 2 Yeah.
Speaker 2 So we're infrastructure as a service, which means like on one side, we run a very big compute pool, like thousands of GPUs and CPUs, and we make it very easy to get, you know, if you need 100 GPUs, we can typically get you that within seconds.
Speaker 2 So sort of one big multi-tenant pool, which means like capacity planning
Speaker 2
is something we kind of take on, you know, something we solve for customers. They don't really need to think about reservations.
We always provide a lot of on-demand GPUs.
Speaker 2 On the other side, there's a Python SDK that makes it very easy to build applications.
Speaker 2 And so the idea is like you write code, basically like functions in Python, and then we take those functions, turn them into serverless functions in the cloud.
Speaker 2 We handle all the containerization and all the infrastructure stuff. So you don't have to think about all the sort of Kubernetes and Docker and stuff.
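To make that concrete, here is a minimal sketch of what this looks like with Modal's Python SDK. It is a rough illustration rather than something from the show: the app name and function body are placeholders, and exact signatures may differ across SDK versions.

```python
import modal

app = modal.App("hello-gpu")

# The container image is declared in code; Modal builds and caches it.
image = modal.Image.debian_slim().pip_install("torch")

@app.function(gpu="A10G", image=image)
def square(x: int) -> int:
    # This body runs inside a container in Modal's cloud, on a GPU worker.
    return x * x

@app.local_entrypoint()
def main():
    # Called locally with `modal run hello_gpu.py`; executed remotely
    # as a serverless function.
    print(square.remote(7))
```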
Speaker 2 And the real killer app, as it turns out, like we started this company pre-Gen AI, but as it turns out, the main thing that really started driving all the traction was when Stable Diffusion came out.
Speaker 2 And a bunch of people came to us and like, hey, actually, this looks kind of cool. Like you have GPU access.
Speaker 2 It's very easy to use; you don't have to think about spinning up machines and provisioning them. So that was like our first killer app: just doing gen AI in a serverless way, with a focus on diffusion models. Now we actually have a lot more modalities. A lot of usage is still text-to-image, but we also see a lot of audio and music. So one example of a customer I think is super cool, building really amazing stuff, is Suno,
Speaker 2 which does
Speaker 2 AI-generated music. So they run all their inference on Modal, at very large scale.
Speaker 2 There's a lot of customers like that sort of dealing with like, you know, building cool Gen AI
Speaker 2 models. In particular, I would say in the modalities of like audio, video, image, and music, stuff like that.
Speaker 1 It's cool. And I think Suno's using all like Transformer Backbone now for stuff, right? Versus a diffusion model-based thing.
Speaker 2 I think it's a combination of both. I'm not sure, though.
Speaker 1
Yeah, yeah, I think they talk about it publicly. That's the only reason I mention it.
But you wrote this post in October, I think it was called "The Future of AI Needs More Flexible GPU Capacity."
Speaker 1 And in general, what I've heard in the industry is that a lot of the ways people use GPUs are reasonably wasteful.
Speaker 1 And so I'm a little bit curious about your view on flexibility around GPU use, how much is actually used versus wasted, how much optimization is left, even just with existing types of GPUs that people are using today.
Speaker 2 Yeah, I mean, GPUs are expensive, right? And I think, sort of as a paradox, that means, you know, for a lot of the cloud capacity,
Speaker 2 the only way to get it is to sign long-term commitments.
Speaker 2 Which I think for a lot of startups is really not the right model for how things should be.
Speaker 2 Like, I think the amazing thing about the cloud was always to me that you have on-demand access to whatever many CPUs you need.
Speaker 2 But for GPUs, the main way to get access has been over the last few years, due to the scarcity, has been to sign long-term contracts.
Speaker 2 And I think fundamentally, that's just not how startups should do it.
Speaker 2 And I kind of get it. It's been sort of supply-demand issues.
Speaker 2 Just looking at the CPU market,
Speaker 2 you have instant access to thousands of CPUs if you need it.
Speaker 2 My vision has always been it should be the same thing for GPUs.
Speaker 2 And that means, you know, especially as we shift more to inference. I think for training, it's been sort of less of an issue, because you can sort of just make use of the training resources you need.
Speaker 2 But for inference especially, you don't even know how much you need, right? Like in advance, it's very volatile.
Speaker 2 And it's a big challenge that we solve for a lot of customers is we're fully usage-based. So when you run things on modal, we charge you only for the time the container is actually running.
Speaker 2 And that's a massive hassle for customers traditionally: doing the capacity planning and thinking about how many GPUs they need.
Speaker 2 And then having the issue of: either you over-provision and you're paying for a lot of idle capacity, or you under-provision, and then, you know,
Speaker 2 when you run into a capacity shortage, you have degradation in service. Whereas with Modal, we can handle these very bursty, very unpredictable workloads really well,
Speaker 2 because we basically take all these user workloads and run them on a big pool of thousands of GPUs across many different customers.
Speaker 1 Yeah. One of the things that always struck me about training is to your point, you kind of spin up a giant cluster, you run a huge supercomputer, right?
Speaker 1
And then you run it for months in some cases, and then your output is a file. And that's literally what you've generated.
You know, it's kind of insane if you think about it.
Speaker 1 And that file, in some sense, is a representation of the entire internet or some corpus of human knowledge or whatever.
Speaker 1 And then to your point, with inference, you need a bit more flexibility in terms of spinning things up and down, or alternatively, if you're doing shorter training runs or certain aspects of post-training, you may need more flexible capacity to deal with.
Speaker 2
Totally. And that's something we're really interested in right now.
Like traditionally, most of Modal has always been inference. Like, that's been our main use case.
Speaker 2 But we're really interested also in training.
Speaker 2 So, in particular, we'll probably focus more on these shorter, like, very bursty, sort of experimental training runs, not the very big training runs, because I think that's a very different market. So that's, like, a very interesting thing we're going to have to figure out.
Speaker 1 How do you think about meeting people's end-to-end needs? So, I know that there's a lot of other things that people do.
Speaker 1 There's, you know, a lot of people are using RAG
Speaker 1 to basically augment what they're doing. Or,
Speaker 1 you know, there's a variety of different things that people are now doing at time of inference in terms of using compute to
Speaker 1 take different approaches there. You know,
Speaker 1 I'm a little bit curious how you think about the end-to-end stack of things that could be provided as infrastructure and where Modal focuses or wants to focus.
Speaker 2 Yeah, totally. I mean, our goal has always been to build a platform and cover like the end-to-end use case.
Speaker 2 It just turned out that inference was, we were well positioned to focus on that as our first killer app.
Speaker 2 But my end goal has always been to make engineers more productive and focus on what I think of as the high-code side of ML.
Speaker 2 Like I think we're like our target audience tends to be more like sort of the traditional like ML engineers, like people building their own models. But there's many different aspects of that.
Speaker 2 There's like the data pre-processing, then there's the training, and then there's the inference, and there's actually probably like even more things, right?
Speaker 2 Like, you know, having feedback loops where you get the data and like, you know, online ranking models and all these things. And so my goal for Modal has always been to cover all of that stuff.
Speaker 2 And so it's interesting. You see a lot of customers now, we don't have a training product, but a lot of customers use Modal for batch pre-processing.
Speaker 2
So they use Modal to, you know, maybe they're training a video model. So maybe they have like petabytes of video.
So then they use Modal actually, maybe with GPUs even to like do feature extraction.
Speaker 2 And then they train it elsewhere. And then they come back to Modal for the inference.
Speaker 2 So for us to do training makes a lot of sense. And in general, I think
Speaker 2 it makes a lot of sense to sort of build a platform where you can handle the entire sort of machine learning lifecycle end-to-end and many other things related to that.
Speaker 2 Also the data pipelines and nightly batch jobs and all these things.
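A hedged sketch of that hand-off pattern, preprocessing on GPUs and writing features to shared storage for a training job that runs elsewhere. The model, paths, and volume name here are hypothetical, and signatures approximate the SDK:

```python
import modal

app = modal.App("video-preprocess")
image = modal.Image.debian_slim().pip_install("torch", "torchvision")

# Shared storage, so extracted features can be handed off to a
# training job running somewhere else.
features = modal.Volume.from_name("video-features", create_if_missing=True)

@app.function(gpu="A10G", image=image, volumes={"/features": features})
def extract(clip_name: str) -> None:
    import torch  # imported inside the container, where torch is installed

    # Hypothetical feature extraction: in practice you would decode the
    # clip and run a vision backbone; here we persist a placeholder tensor.
    emb = torch.zeros(512)
    torch.save(emb, f"/features/{clip_name}.pt")
    features.commit()  # flush writes so other jobs can see them
```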
Speaker 1 Yeah. I mean, what you describe is a pretty broad platform-based approach.
Speaker 1 I think there's a handful of companies who are sort of in your general space or market. How do you feel that Modal differentiates from them?
Speaker 2 I think first of all, we're cloud native. Like we're just like cloud maximalists.
Speaker 2 Like we went all in and said like, basically, we're going to build a multi-tenant platform that runs everyone's compute.
Speaker 2 And the benefits of that are tremendous, because we can do capacity management much better.
Speaker 2 And that's one of the ways we can offer, like, instantaneous access to hundreds of GPUs if you need it. You can do these very bursty things and we just give you lots of GPUs, right? I think the other benefit, or the other sort of differentiation, is being very general purpose. We focus on, as I mentioned, like, high code: we run custom code in our containers, in our infrastructure, which is a harder problem.
Speaker 2
Like containerization and running user code in a safe way is a hard problem. And then dealing with container cold start.
And like I mentioned, we have to build our own scheduler.
Speaker 2 We had to build our own container runtime and our own file system to boot containers very quickly.
Speaker 2 And so, unlike many other vendors, who are only focused on, say, inference or maybe only LLMs,
Speaker 2 Our approach has always been to build a very general purpose platform.
Speaker 2 And, you know, in the long run, I hope that manifestation will become more clear, because I think there are many other products we can build on top of this, now that the compute layer is becoming more and more mature.
Speaker 1 When I talk to large enterprises about how they're thinking about adoption of AI,
Speaker 1
many of them already have their data on Azure or GCP or AWS. They're running their applications on it.
They've bought credits in the marketplace. They want to spend them there.
Speaker 1 They've already gone through security reviews.
Speaker 1 They've kind of done a lot and they worry about things like latency or pings out to other third-party services versus just running on their own existing cloud provider or their hyperscaler that they work with, or set of hyperscalers.
Speaker 1 Many of them actually work across multiple. How do you think about that in the context of Modal, in terms of your own compute versus hyperscalers versus the ability to run anywhere?
Speaker 2 Yeah, totally. And of course, there's also sort of security compliance aspect of this.
Speaker 2 I think
Speaker 2 it is a challenge.
Speaker 2 I look back at when the cloud came, back in like 2008, 2009. And my first reaction was like, how the hell,
Speaker 2 why would anyone put their compute in someone else's computer and run that? And I think, you know, to me, that was just insane. Like, why would anyone do that?
Speaker 2 But over the next couple of years, I realized, actually, it kind of makes a lot of sense.
Speaker 2 And I think now, even among enterprise companies, there's a sort of recognition that, yeah, actually, our compute is probably safer with the big hyperscalers. And in a similar vein, I remember talking to Snowflake back in, say, 2012 or something like that. And they had a sort of similar approach, where they basically said: we're going to run databases in the cloud, and it's not going to be in your environment, you know, or maybe in your environment, but we're infrastructure as a service. And I thought that was nuts. And then, obviously, Snowflake now is a very large, you know, publicly traded company. I think they showed that infrastructure as a service makes a lot of sense. And so I think there is a little bit of resistance to adopting this multi-tenant model.
Speaker 2 But I think when you look at like security and adoption of cloud, I think we have a lot of tailwinds blowing in our direction. I think security is moving away from sort of a network layer into
Speaker 2 an application layer.
Speaker 2
I think bandwidth costs are coming down. I think there's a lot of tricks you can do to minimize bandwidth transfer costs.
You can store data in Cloudflare R2, for instance, which has zero egress fees.
Speaker 2 It's something that, realistically, I think we're going to have to push on a lot.
Speaker 2 But I think there's so many benefits of this multi-tenant model in terms of capacity management that to me, it is very clearly like a big part of the future of AI is like running a big pool of compute and slicing it very dynamically.
Speaker 1 You mentioned earlier that one of the things that really caused early adoption of Modal was Stable Diffusion and sort of these open source models around image generation.
Speaker 1 Are there any open source projects or models that you're seeing be
Speaker 1 very popular in recent days or in the last couple of months that have really started taking off?
Speaker 2 That's a good question. I think if anything, it's actually been a little bit of a shift towards more like proprietary models.
Speaker 2 But among open source models, I guess Flux most recently has been
Speaker 2
a model that's getting a lot of attention. I'm personally very interested in audio.
I think audio is, like, very underexplored.
Speaker 2 I think there's a lot of opportunity for open source models in that space. But I don't think we've seen anything really cool yet.
Speaker 1 What else do you think is missing in the world today in terms of AI infrastructure or infrastructure as a service?
Speaker 2 So I'm very biased, but I think Modal is
Speaker 2 basically a way for engineers to take code and run it.
Speaker 2 And look, I'm very bullish on like, you know, code and like people wanting to write code and building stuff themselves.
Speaker 2 I think outside of sort of the LLM space, which is like a very kind of a different world in my opinion, I think there's always going to be a lot of applications where people want to train their own models.
Speaker 2 They want to run their own models or at least like run other models, but have like very custom workflows.
Speaker 2
And I just don't think there's been a great way to do that. It's, like, pretty painful to do that.
And so I think that's pretty exciting.
Speaker 2
I think on the storage side, there's some other really exciting stuff. Like we haven't really touched storage at Modal.
We focus very much on compute.
Speaker 2 So I'm personally very interested in vector databases, like, how's that going to evolve? I don't think anyone really knows.
Speaker 2 I'm pretty interested in, you know, more efficient storage around training data. And I guess another thing I'm very fascinated by right now is
Speaker 2 training workloads.
Speaker 2 In order to train large models efficiently, you've had to really spend a lot of money and time setting up the networking.
Speaker 2 So one of the things I'm really excited about is what if you don't, you know, what if we can make training less bandwidth hungry?
Speaker 2 Because I think that would actually change a lot of the infrastructure around training,
Speaker 2 where you can now like kind of tie together a lot of GPUs in different data centers
Speaker 2 and
Speaker 2 not have to, you know, have this like very large data centers with like, you know, InfiniBand and stuff.
Speaker 2 So that's like another sort of infrastructure thing I'm looking forward to seeing more development on.
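One concrete direction for making training less bandwidth-hungry is local-SGD-style synchronization, where workers train independently and only average parameters occasionally instead of all-reducing gradients every step. A toy sketch of the idea, illustrative only and not something Modal ships, assuming `torch.distributed` is already initialized:

```python
import torch
import torch.distributed as dist

def train_step(model, optimizer, loss_fn, batch, step, sync_every=64):
    """Toy local-SGD step: each worker trains on its own shard and only
    averages parameters across workers every `sync_every` steps, cutting
    cross-datacenter traffic by roughly that factor versus per-step
    gradient all-reduce."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch["x"]), batch["y"])
    loss.backward()
    optimizer.step()
    if step % sync_every == 0:
        world = dist.get_world_size()
        for p in model.parameters():
            dist.all_reduce(p.data)  # sum parameters across workers
            p.data /= world          # then average
    return loss.item()
```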
Speaker 1 How important are they? So there's sometimes been a little bit of debate around vector DBs, and you mentioned that you actually built one when you were at Spotify.
Speaker 1 I think Spotify today hit $100 billion in market cap. I think it's one of the first European technology companies to get there, which is pretty cool.
Speaker 1 So a lot of folks I know may use one of the existing vector DBs, or in some cases, are just using Postgres
Speaker 1 with
Speaker 1 pg Vector, right? How do you think about the need for vector databases as sort of standalone pieces of
Speaker 1 just, you know, adopting Postgres versus doing something else?
Speaker 2 Yeah, I feel like everyone's debating that. I don't know necessarily.
Speaker 2 Like, I think there's a lot of, there's a case to be made that, you know, you can just stick everything into a relational database and you're fine.
Speaker 2 To me, the bigger question is, in the long run, you know, if you think about it: what's an AI-native data storage solution?
Speaker 2 Like, I don't even know if it necessarily has the same form factors and the same interface as a database. So that's actually a bigger question that I'm more excited about.
Speaker 2 It's like, I think people look at vector databases and
Speaker 2 whether it's relation or not, they sort of shoehorn it into this sort of old school model of you put data, you get data back. But I don't know.
Speaker 2 I think there's a lot of room to sort of rethink that in the age of AI and have very different
Speaker 2 interaction models with that data. I know that sounds a little fluffy.
Speaker 1 Yeah, it's super interesting. Could you say more on that?
Speaker 2 I mean, one thing I think a lot about is: maybe the database itself could be the embedding engine, right?
Speaker 2 Like instead of like you put a vector in and you search by that vector, I think there's a lot of, you know,
Speaker 2 the more native, like AI native storage solution would be you put text in, you put, you know, video in, you put image in, and then you can search by that.
Speaker 2 Like, to me, that would be like a more sort of native, AI-native sort of storage solution.
Speaker 2 So that's, like, one line of thought that I've had: maybe we're just so early to this that I think it's going to take five, ten years for it to really shake out.
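To sketch that interaction model, here is an entirely hypothetical interface, not an existing library, where raw media goes in and the store does the embedding internally; `embed_fn` stands in for any multimodal encoder:

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class MediaStore:
    """Hypothetical 'database as embedding engine': you put raw text,
    images, or video in, the store embeds them internally, and you also
    search by raw media rather than by a precomputed vector."""

    def __init__(self, embed_fn):
        self._embed = embed_fn  # e.g. a CLIP-style text/image encoder
        self._rows = []         # list of (vector, original payload)

    def put(self, payload):
        self._rows.append((self._embed(payload), payload))

    def search(self, query, k=5):
        qv = self._embed(query)
        ranked = sorted(self._rows, key=lambda row: -cosine(qv, row[0]))
        return [payload for _, payload in ranked[:k]]
```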
Speaker 1 Yeah, that's really cool.
I guess one other thing that you mentioned was more people seem to be training their own models, at least in a lot of the areas that Modal works with.
Speaker 1 Do you think there's any heuristic that people should follow in terms of when to train their own model versus use something off the shelf?
Speaker 2 I think eventually, for any company where model quality really matters, unless you train your own model, in the end I feel like it's going to be hard to defend the claim that, you know, you have a better solution.
Speaker 2 Cause like otherwise, like, what's your moat? Like if you don't have your own model, like you need to find some moat somewhere else in the stack. And that might be possible to find.
Speaker 2 It might be somewhere else for a lot of companies. But I think at least if you have your own model and that model clearly is better than anyone else, then that sort of inherently is a moat in itself.
Speaker 2 I think it's more clear outside of the LLM space, when people are building audio, video, and image models.
Speaker 2 I think if that is your core focus, like it's very clear to me, like you kind of have to train your own models in that case.
Speaker 1 Yeah.
Speaker 1 If I remember correctly, you're an IOI gold medalist.
Speaker 2 Yeah, that's right.
Speaker 1 Obviously, you think a lot about code and coding.
And how do you think that changes with AI over time? Or do you have any contrarian predictions on what happens there?
Speaker 2 I don't know if this is contrarian, but like, I actually think that
Speaker 2 this is just like one out of many improvements in developer productivity. And, you know, you look back at like, you know,
Speaker 2 whatever, like compilers were originally, you know, a tool that made developers more productive. And then higher-level programming languages and databases and cloud and all these things.
Speaker 2 And so, like, I actually don't know if AI is, you know, different from any of those changes, in hindsight. And so,
Speaker 2 and by the way, every time that's happened, you know, it turns out there's so much latent demand for software that the number of software engineers actually goes up.
Speaker 2 So, like, I feel like you look back at, you know, the last forty years of software development: every decade, engineers get like ten times more productive, due to better frameworks or better, you know, tooling or whatever.
Speaker 2
And it turns out actually that just unlocks more latent demand for software engineers. So I'm very bullish on software engineers.
I think it would take a lot to sort of destroy that demand.
Speaker 2 I think people look at a lot of like AI as like a kind of fixed sum thing, but in my opinion, it's like, no, it's just going to unlock more latent demand for more things.
Speaker 2 So I'm very bullish on software engineering.
Speaker 1 And then I guess the other field that you touched a long time ago was I think you won a Swedish physics competition in high school.
Speaker 1 And I'm curious if you followed any of the physics-based AI models or some of the simulation related. Like, that's an area that strikes me as very interesting.
Speaker 1 And the ways you think about the data for it are different. And yeah.
Speaker 2 I did win the Swedish High School Physics Competition. I was a total mathlete nerd when I was,
Speaker 2 you know, in my teens.
Speaker 1
Okay. Yeah.
I think it's a really fascinating area right now. Like it's one of those areas that seems
Speaker 1 like there's some real reinvention needed and not as many people working on it.
Speaker 1 So it's one of the areas I'm kind of excited about just in terms of there's lots and lots of different applications that you could start to come up with relative to it.
Speaker 2 Yeah. I think, I mean, like physics, in my opinion, it's like,
Speaker 2
you know, you look back at the golden era of physics, like the '20s and '30s and '40s. I kind of feel like the field hasn't really evolved much since then.
So I don't know.
Speaker 2 Maybe I would love for you to be right that there's like a resurgence of new physics-based models.
Speaker 1
Yeah, I don't know if it would necessarily help in the short run with basic research. I think it just helps with simulation.
It kind of feels like physics as a field
Speaker 1
really kind of doubled down on sort of the Ed Witten path of physics and maybe got a little bit lost there or something. I'm not sure.
It's kind of a big question.
Speaker 2 Are you talking about like more like material, like doing more like compute-based?
Speaker 1 It's kind of like ANSYS or other companies where, you know, you simulate an airplane wing, you simulate load-bearing in a...
Speaker 2 Oh, I see. So like HPC.
Speaker 2 It's always existed, right? Like, especially in like, you know,
Speaker 2 oil and gas energy and stuff like that.
Speaker 1 But it's a lot of kind of small, bespoke, kind of fine-tuned or hand-tuned models for specific things versus
Speaker 2 Meteorology is, like, something I actually think deep learning should change, right? Like, it sort of makes a lot of sense.
Speaker 2 Like you're, you know, deep learning should be very good at like, you know, predicting, you know, turbulence and things like that.
Speaker 2 Like, because turbulence is actually very hard to solve the traditional physics models, right? And so, deep learning should, in theory, I kind of feel like makes a lot of sense.
Speaker 1 Yeah, I think there's been a couple papers on that out of NVIDIA, and then I think Google has a team that's worked on it. And so, there's a couple different sort of
Speaker 1 weather simulation teams that have started to publish some pretty interesting stuff, it seems, you know. Yeah,
Speaker 2 I mean, I would also point to like an adjacent area, like biotech, I think, is like been, you know,
Speaker 2 computational methods have been enormously successful, right? Like, if you look at
Speaker 2 protein folding in particular, but also other things like sequence alignment and things like that. And that's actually a field where we start to see a a lot more usage at Modal as well.
Speaker 2 I feel like there's like a kind of a resurgence of computational biology. It's really exciting for me.
Speaker 1 That's really neat. Yeah.
Speaker 1 Are there specific use cases that you see people engage with most across your customer base relative to the sciences?
Speaker 2 There's a lot of, I'm not a bio person. So this is kind of superficial,
Speaker 2 just kind of looking at our customers. But like one thing I've seen a lot is actually medical imaging.
Speaker 2 Because my understanding is like with modern methods, you can do like very automated, like you know
Speaker 2 get like millions of you know experiments and do you know automated electron microscope imaging of that so we've actually seen a quite a lot of customers like use modal for like then processing and doing computer vision on those mod on those images uh which is kind of cool it's really cool is there um any area that you're most excited about from a human impact perspective for some of these models you know with my background spotify like i i think zuno is like to me very exciting thing uh i i think it's still like very early sort of ai generated music you can still hear that it's like not right.
Speaker 2 It's sort of uncanny valley a little bit.
Speaker 2 But with Suno, every generation of their model is getting better and better.
Speaker 2 And first of all, music in itself tends to be, like, always one of the first areas where you see real impact of new technologies, whether Spotify or, like, iTunes or piracy, or gramophones going back.
Speaker 2 So I always think music is, like, an exciting area for that, in that sense. Like, it always shows the opportunity of new technologies.
Speaker 2 And I also think like Suno is like fundamentally something you couldn't have done before Gen AI. So that to me is like really exciting.
Speaker 2 It's, like, sort of really pushing the frontier, enabling a completely new product. There's no way Suno could have existed five years ago.
Speaker 1
That's cool. Well, I think we covered a lot today.
Thanks so much for joining me.
Speaker 2 Yeah, thanks a lot. It's great.
Speaker 3
Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
Speaker 3 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.