Public Markets, Image Gen, and Specialized Models, with Sarah and Elad
Show Notes:
0:00 Improvements in image gen
4:42 Public markets
8:08 Effects of tariffs on tech
9:42 Today’s large model market
11:34 Opportunities in specialized models
16:30 Research advances in model approaches
21:10 What expertise will matter?
24:30 Anthropic’s Model Context Protocol
26:30 Consumer applications
Press play and read along
Transcript
Speaker 2 Hey listeners, welcome back to No Priors. Today you've just got me and Elad again.
Speaker 1 It's a favorite type of episode. Sarah, habibi, how you doing?
Speaker 2
I'm great. I'm so excited.
Everything is adorable cartoons that are also like slightly nostalgic and sensitive. And tell me about how you react to Studio Ghibli and also just better image generation.
Speaker 1 I mean, I'm a long-standing anime fan, so I think converting the world into everything anime or manga is a very positive step for humanity.
Speaker 1 So I view this as something I've been waiting for for a while.
Speaker 1 I feel like every year or two, there's sort of this moment in the ImageGen world where people have a "wow, that's amazing" moment again.
Speaker 1 And the first version of that was — I think maybe the GAN wave was the first wave. There was a GAN artwork in 2018 or 2019 that went to Sotheby's for auction,
Speaker 1 which was one of the first sort of AI-generated artworks, back when people were using these adversarial-network-based approaches to generating artwork.
Speaker 1 And it was kind of these kludgy tool chains, but even then people were like, whoa, look at what AI can do right now. And it was super bad, you know, in comparison to what you can do today.
Speaker 1 And then there was kind of the Midjourney
Speaker 1 and early Stable Diffusion wave, where those models came out and people were like, oh my gosh, this thing is amazing — but everybody has seven fingers in the images — but oh my God, it's amazing.
Speaker 1 And look at all the things we can do with it and it's going to transform society, et cetera, et cetera. I feel like we've periodically had these and I feel like this is the latest version of that.
Speaker 1 And part of it is we're just on this amazing curve of quality and fidelity in this artwork and the ability to do, I mean, even back in the GAN world, there was like style transfers and, you know, do this in the style of Van Gogh and et cetera.
Speaker 1 But the degree to which it does it so well now and so cohesively and in so many styles and with so much aesthetic beauty and oversight is really striking.
Speaker 1 And I think we're just hitting another one of those moments where people are like, wow, this can really do it for forms of animation and other things.
Speaker 1 And all this is obviously in the context of ChatGPT and OpenAI, and sort of the 4o model incorporating a lot of this stuff directly in. So I think it's fantastic.
Speaker 1 We're going to see another thing like this in another year, I think. And then I think there'll be the very commercial versions of this, which are already sort of happening.
Speaker 1 But look, we can use it for graphic design completely seamlessly versus it kind of works. And we can use it for all these different use cases.
Speaker 1 And so I feel like we're doing the horizontal version of it. And soon we'll have the vertical versions all come out.
Speaker 1 And obviously there's companies like Recraft and others working on the vertical versions directly. But I just view this as a super interesting evolution of the technology.
Speaker 1 So I think it's super exciting. What do you think?
Speaker 2 I think it is funny how much — at least our little niche of the technology ecosystem, but, you know, anime and manga is pretty popular —
Speaker 2 the world reacts to it. They want more cute, they want more beauty. I think it's really exciting.
Speaker 2 One of the interesting things this exposes is that users, people overall, aren't very good at projecting where we are in terms of quality and controllability and how much more room we have, right?
Speaker 2 I think like going from, you know, eight bits of grayscale to images that might be perceived as photos of real people was a huge jump — to your point of people being shocked at some point, you know, two generations of image generation ago.
Speaker 2 And then, you know, I think one of the things that MidJourney did was really have an aesthetic point of view and like take a bunch of user feedback into account in terms of what was preferred.
Speaker 2 I actually feel like a lot of people thought of image generation, like end users, not researchers, as, you know, a little bit more of a solved problem.
Speaker 2 And I think just this is another data point of how much more like we're going to get and that people want, never mind in video and everything. Yeah.
Speaker 1 Also text and logos and there's just, there's just a lot that's coming that people haven't done or sort of these truly integrative things where you can start truly clicking into the images and modifying pieces.
Speaker 1 And there's, there's apps that are doing that, or there's things like Korea that sort of do these real-time modifications as you're working on things. But I do think there's so much room still.
Speaker 1 We're very early, but it's still so striking. So it's a very exciting area.
Speaker 2 And I think ease of controllability is also going to give people a lot more creative power.
Speaker 2 Like one of the things that HeyGen has demonstrated, and is coming out with in product very recently, is the ability to use natural language to describe emotion and voice, right?
Speaker 2 So you can like whisper ASMR and just say, I want the whole video with this person in this way, with, you know, three words of text description.
Speaker 2 I think that kind of controllability is going to be really powerful.
Speaker 1 You can incorporate it into augmented devices, and then I would just be working through an ASMR world. That's all I would — I would just live in that.
Speaker 2 Is that the ideal? No.
Speaker 1 Maybe the manga part, but the rest, not so much.
Speaker 2 Are you freaked out about the macro?
Speaker 1 You mean the NASDAQ or what?
Speaker 2 The markets? Yeah, the markets.
Speaker 1 Tariffs, inflation, which part of it?
Speaker 2 You know, consumer confidence is at a multi-year low. The NASDAQ's down 8%.
Speaker 2 Tariffs on Chinese imports and on autos. I think there are investors and companies in the market talking about how stressed they are about that.
Speaker 1 Yeah, I'm not very stressed about it.
Speaker 1 I feel like there's a degree of uncertainty in the world right now for sure.
Speaker 1 But from the perspective of people building technology companies, barring something truly existential happening, it's kind of business as usual.
Speaker 1 And I've been through a few of these cycles now, where markets are way up and everybody's freaking out in one direction, and markets are way down and everybody's freaking out in the other.
Speaker 1 And, you know, the main place where it impacts the venture world or the startup world sometimes is if it soaks money out of the venture capital ecosystem and therefore valuations come down or there's less funding for the marginal startup or things like that.
Speaker 1 But other than that, these sorts of cycles tend to really wash away unless you're a super late stage company that's about to go public and there's some issue with your valuation in terms of expectations versus where you'd want to go out or something like that.
Speaker 1 But for day-to-day
Speaker 1 technology startups, particularly ones that are not doing hardware, which would be impacted by the tariffs, right?
Speaker 1 People who are just writing software — it should really be of minimal actual day-to-day impact, especially if your startup's working; you'll be able to get customers to pay you, or find funding, or whatever it may be.
Speaker 1 I've been through a few of these, and every time it's been a bit of a shrug. I actually remember I went to the "R.I.P. Good Times" presentation that Sequoia did in 2008.
Speaker 1 Back in 2008 there was the great financial crisis, and I was running a startup at the time — I was CEO of this small company — and Sequoia did this big all-hands where they pulled together all their founders,
Speaker 1 and they had people come in and tell war stories from when the dot-com bubble collapsed, and how it's time to batten down the hatches and do layoffs, and the world will never be the same again, and everything's over.
Speaker 1 And they were doing this as a service to the startup community, right? They were trying to help their founders kind of figure this stuff out.
Speaker 1
And I remember talking to one of Sequoia's partners during it. I'm like, we're like a six-person startup, like who cares? And he's like, yeah, you're right.
You shouldn't worry about this at all.
Speaker 1 And that's as all these financial institutions were collapsing around us. And so this strikes me as like very small in comparison to that.
Speaker 1
And I think back then that didn't have that much of a real impact on tech. You know, maybe Google did its first layoff ever, but other than that, tech just kept coming along.
And if anything,
Speaker 1 the biggest tech companies in the world are now 10, you know, 20 times bigger than they were back then.
Speaker 1 So I think this is an even more minor blip that, from a long-term tech perspective — like, who cares? But again, barring some unexpected path that splits off of this. I don't know.
Speaker 1 What do you think?
Speaker 2 It has like almost no impact on me, right? I think especially at the early end of the market, I'm like, well, for the really high-quality opportunities, there's plenty of capital for them.
Speaker 2 I keep discovering that the capital markets are much deeper, and we should talk about this, are much deeper than I thought for very expensive, for example, foundation model plays.
Speaker 2 I still expect like capital availability and a lot of inflow there. I think it's probably a little different for investors who have
Speaker 2 more public equities exposure, right? I bet pre-IPO crossover investors are getting more cautious, right?
Speaker 2 You have those sort of much more long-term issues of liquidity having been starved for several years now. But I think the return of M&A, and several companies ready to go public,
Speaker 2 will help that somewhat.
Speaker 1 The place where the tariffs kind of matter that I think are interesting is for very specific industries where to some extent it's useful for America or the West to protect themselves.
Speaker 1 So I think automotive would be a good example, where some of the Chinese car companies seem to be getting so good that if I were Europe, for example — given the industrial base is so automotive-dependent — I would probably be pushing for tariffs on Chinese imports of cars, right?
Speaker 1 Because the internal car industry may not be as competitive. And so I do think there are some areas where the tariffs may be useful.
Speaker 1 There'll be some areas where they're probably being used as like a negotiation tool, and then some areas where, you know, they may be either net beneficial or net harmful in terms of actual costs passed on and things like that.
Speaker 1 But I think there may be a few areas where we should make sure that we actually have some in place. And then there may be some areas where it's going to be net negative or destructive.
Speaker 1 And then there may be some areas where it's just good for negotiating broader policy or relationships with certain external parties.
Speaker 1 People are kind of using a catch-all for all of them versus, you know, looking item by item.
Speaker 2 Yeah, I agree with that.
Speaker 2 And I think the productive version of tariffs is as part of a broader industrial policy that is more supportive of the industries that we care about.
Speaker 2 And like, that's going to be a big investment, right?
Speaker 2 If we want to make key components for defense or automotive in the United States, like we are quite behind in many domains in terms of getting competitive from a skill and cost perspective.
Speaker 2 And some of those things are worth investing in on both the positive and the protection side.
Speaker 1 Yeah, I guess you mentioned depth of funding for models as part of all this. What do you think is happening in the foundation model world?
Speaker 2 You and I were just talking about these Artificial Analysis charts showing convergence — a kind of monotonically more competitive market for capabilities, and amazing improvement over the last 18, 24 months.
Speaker 2 But you just had the most recent Gemini release from Google. Like they're clearly still in the game.
Speaker 2 I don't know who was doubting that given they have infra, they have researchers, not just researchers, but very smart people at the helm. They're competing here as well.
Speaker 2 I think one of the more interesting things is that you have convergence not just on capability, but also in the product surface areas.
Speaker 2 Most people have search, they have a research product, they have reasoning in the models. I think like a lot of it is going to end up with consumer surplus and distribution being the question.
Speaker 1 There's actually a really great website called artificialanalysis.ai that shows different benchmarks that they've run against these various models for reasoning or for different aspects of how you test a model for knowledge base or for other forms of performance, speed of tokens per time unit, et cetera, et cetera, et cetera.
Speaker 1 So I think that's really worth taking a look at. And you see that for certain areas, there is really strong convergence.
Speaker 1 And then there's almost like a cluster of models that seem reasonably within ballpark.
Speaker 1 And again, certain things spike dramatically in one form or another around coding or around reasoning or other things. And then you have sort of a longer tail of other models.
Speaker 1 And so at least for the core language model world, which those benchmarks are for, there definitely seems to be some forms of convergence happening. And then there's outliers, right?
Speaker 1 Like Grok, or x.ai, coming out of nowhere with a roughly SOTA model in like nine months, was super impressive. Or some of the things DeepSeek or others have been doing.
Speaker 1 And then
Speaker 1 they don't really have benchmarks for ImageGen, although those obviously exist on a variety of sites and other places.
Speaker 1
But then there's a whole other suite of models that I think are discussed a lot less. And part of that is just the economic value.
Part of it's what's in the market today.
Speaker 1 But that's things like physics, it's materials, it's robotics, it's certain types of science, right?
Speaker 1 It may be things that are more specialized in terms of post-training, like health-related data on top of some of these core models. And so I do think that there's a lot of
Speaker 1 other types of models that people spend a lot less time on, some of which are becoming quite interesting.
Speaker 1 Probably the place that gets the most attention outside of the foundation model world, or the core LLM world, I should say, the language models, is probably actually biology, right?
Speaker 1 I feel like there's a new biology model every week. But there's all these other fields and disciplines where I actually think there's some very big opportunities.
Speaker 1 And opportunities obviously are both societal in terms of impact, but also In some cases, I actually think there's very big markets behind them.
Speaker 1 And I think often the interest level of people working in the industry to build models is divorced from the economic value of these models, right? And sometimes that's rightfully so.
Speaker 1 You know, there may be really interesting scientific applications that aren't very commercially applicable.
Speaker 1 And sometimes it's really misaligned where you're like, why are all these things getting funded when there's these wide open spaces for certain types of models that just nobody's working on?
Speaker 1 And so at least I've been looking a lot at what are these alternative models that are interesting from a market perspective that maybe are getting a little bit ignored right now.
Speaker 1 And then I guess there's the other question — and I'd like to hear your thoughts on this — of how many things get subsumed into these core LLMs versus being their own standalone thing?
Speaker 1 Like, do you think it's all one ring to rule them all? Or do you think it's going to be a fragmented landscape? And where do you think that fragmentation happens?
Speaker 2 It's somewhat of like too binary a distinction to say like it's a model company versus not a model company.
Speaker 2 Actually, even many of the companies that you and I in the industry would consider to be model research companies — they are starting with some base of pre-training on
Speaker 2 existing knowledge and reasoning that is more and more readily available. In the case of robotics, you start with video pre-training.
In the case of other domains,
Speaker 2 if you're going to start separately focusing on code, and we can talk about whether or not that's a good idea, you want both language and code in terms of being able to interact with the model.
Speaker 2 I 100% believe that there are big opportunities in some of these domains, but one of the biggest distinctions to me is
Speaker 2 what does like the data collection engine for this look like.
Speaker 2 So if you are thinking about physics, chemistry, biology, robotics, like and you know, maybe even some more near-term commercial applications,
Speaker 2 the data you would want, the understanding for the model to learn from, it often doesn't exist yet.
Speaker 2 And so I think a theory of many of these companies that is interesting is our job is to go collect or generate it efficiently and use that to train the model.
Speaker 2 And in that case, I think the question of like, does it need to be, you know, will it be in this single model to rule them all?
Speaker 2 There's a question of, well, is it reasonable to expect one of the existing large labs to go do that data generation?
Speaker 2 Like if you have to set up a physical lab with robotics to do experimentation on new chemicals, that feels more far afield than code generation in RL environments, for example.
Speaker 1 Anytime you go into the physical world, it's always harder to generate data.
Speaker 1 And that's one of the reasons that the language models where you just effectively collect the wisdom of the internet digitally are the first places where we've really seen this scale of sort of breakthrough happen in recent times.
Speaker 1 And coding is a great example where you not only have a lot of the data resident either online or digitally, but also
Speaker 1 you have very clear utility functions, or things that you can test against, in terms of code and its performance, et cetera. Is it doing what you think it's going to do?
Speaker 1
So, those are always going to be the easiest areas. It's kind of funny.
This is an odd pet peeve of mine, but it always annoys me when
Speaker 1 people who do really well as founders in traditional software and tech
Speaker 1
start telling everybody else to go and do the hard stuff in biology and materials and physics. And, oh, you need to go be hardcore.
And you're like, well, you made all your money in fucking software.
Speaker 1 What are you talking about?
Speaker 1 And so, I feel like there's been a long history of that, right? Like I remember interviews with Bill Gates from 20 years ago. It was like, if I was to start today, I'd go into biology.
Speaker 1 So I feel like sometimes there's the model versions of this.
Speaker 2
You're so funny. I feel like you're, you're the opposite.
You're like, I actually have a PhD in biology.
Speaker 1 That's why I know. That's my new reality.
Speaker 2 I think the other distinction I would draw is like, is it some like orthogonal, like totally different technical thesis?
Speaker 2 Do I think there's like a research advance that is just very different, architecturally quite different? I'll like describe categories of companies that could be relevant here.
Speaker 2 We had Karan and Albert from Cartesia on the podcast. I think state space models are an interesting direction — highly efficient for certain types of data that are compressible, right?
Speaker 2 If you look, there are several plays on formalism — translating problems into Lean and taking that as a path to increasing reasoning capability for math and code.
Speaker 2 I think
Speaker 2 there are a number of companies that are trying to train models that are better at taking
Speaker 2 actions in software and on the web. This is clearly also right in line of the
Speaker 2 large foundation model labs, but I think they're at least trying to work on a question that doesn't feel fully answered in terms of
Speaker 2 consistent, generalizable RL environments for agents. And so there are spaces where I think there is a theory of why the company should exist, if true, versus just being like straight in line of
Speaker 2 the OpenAI Anthropic X, like Steamroller, of course, and Google Steamroller. What did I miss? What else do you draw as a distinction or like where do you think there is opportunity?
Speaker 1 To your point, on state space models, there may be advantages in terms of the speed and size of some of those models on a relative basis for very specialized tasks.
Speaker 1 And so usually I think of it as a two by two matrix where you have like one axis, which is sort of speed, performance, cost, because those are roughly the same thing for many of these models, is inference time effectively.
Speaker 1 And then there's
Speaker 1 reasoning, fidelity, whatever you want to call it.
Speaker 1 And depending on where you are in those different quadrants, you have one quadrant, which is like, it's slow and it's expensive and it's not very smart. And obviously nobody wants to use those models.
Speaker 1 There's the, it's very slow and expensive, but it's very smart and very capable.
Speaker 1 And that's where you're like, I'm going to upload a 100-document Supreme Court brief and it'll give me this amazing analysis I can use to argue a case or whatever, right? So high value.
Speaker 1 And
Speaker 1 it'll take a while to process and do it. And then there's the super fast, super performant tends to just be these very specialized niche models for specific applications.
Speaker 1
And I think some of the space state models tend to work very well for that, some of the SSMs. for very specific application areas.
And then there's the last quadrant.
Speaker 1 And, you know, based on which of those quadrants you're in, I think it really determines the type of things that you can build.
Speaker 1 And some of the, you know, the really fast, high-performance tend to be more vertical focused or tend to be more focused on very specific types of tasks.
Speaker 1 And the really slow, you know, expensive ones that are actually very performant, you could imagine verticalized versions, but it seems like the backbone for a lot of those are actually these very generalizable models where a big chunk of what you're getting is the reasoning and broader linguistic capabilities that you then apply to a domain.
Speaker 1 And then, of course, there's stuff that people build on top of it in terms of orchestration layers and specialized bespoke things that route things at different models differentially relative to your use case.
Speaker 1 And it seems like everything that's quote-unquote agentic right now is basically doing that, you know, across customer success and code.
Speaker 1 And you go through every domain that has like a specialized approach. And they always have this sort of orchestration layer built on top.
Speaker 1 So, you know, I think it's super exciting to watch all this stuff. And I do think some of the applications and some of the less just purely linguistic domains may be interesting in the short run.
Speaker 2 I think going back to the question of like, is the macro stressing you out?
Speaker 2 There's like such a virtuous cycle in technology happening right now. This is actually quite dominated by the fact that M&A is alive again, and so we're going to have outcomes.
Speaker 2 But like to your point, there's exploding surface area of stuff that
Speaker 2
these models can attack. You have research progress, people making different technical bets.
You mentioned DeepSeek. I think model development, and the continued increase in
Speaker 2 more aggressive use of reasoning and test-time compute, is quite expensive, and training continues to get more expensive.
Speaker 2 So I think the fact that there are now people trying to solve data and scale and latency problems, like that'll help everybody too.
Speaker 1 Do you know if it's true that the DeepSeek researchers are not allowed to leave China?
Speaker 2 I do not know if that is true. I think you, in any country, should want to hang on to your best talent, but perhaps not restrict people's movement.
Speaker 2 I think we should be trying to attract great talent here.
Speaker 1 We should keep all the AI researchers in the Mission District and just not let them leave.
Speaker 2 Somewhere between the Mission and Dogpatch.
Speaker 2 Yeah. Like actually we could just draw a line between our offices.
Speaker 1 They all have to go to Atlas Cafe every day.
Speaker 2 Let's talk through the like talent categories.
Speaker 2 Actually, for anybody who is not thinking about their kids 10 years from now, but just thinking about the next two, three years: what type of expertise is valued, where you should stay between my office and Elad's in the Mission and the Dogpatch?
Speaker 2
Okay. Like you have researchers, you have infrastructure, scaling, and efficiency.
We welcome all of you.
Speaker 2 Hardware-software co-design, right? Like design, you know, the next-generation TPU or whatever.
Speaker 1 There's a special visa for you to move into that region.
Speaker 2 Yes, we're here to sponsor you.
Speaker 1 Visa program.
Speaker 2 Yes. If you are ready to design chips to better handle sparsity or massive MoE models or something, like
Speaker 2 I've got a visa campaign for you. Kind of what you said, right? Like anybody who has got deep domain and user understanding combined with the basic product engineering —
Speaker 2 it's not basic, but the product engineering sense — for this orchestration / applied-ML area: evals for agents, setting up RL environments. Like, still a very nascent area of gather context, plan, make a bunch of model calls, parallelize, verify, retry — this orchestration that I already described.
Speaker 2
All of that, we've got a visa program for you. We're thinking about naming it.
We'll hire somebody to run it. It'll be great.
Speaker 1 We'll call it Gillingo.
Speaker 2 We're going to work on the marketing.
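[Editor's note: the orchestration loop Sarah describes — gather context, plan, fan out model calls in parallel, verify, retry — can be sketched roughly as below. This is a hypothetical stub, not any company's actual stack: `call_model`, `plan`, and `verify` stand in for a real LLM API, a planner, and an eval step.]

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call (hypothetical)."""
    return f"answer({prompt})"

def gather_context(task: str) -> str:
    # In practice: fetch the docs, logs, and tool outputs relevant to the task.
    return f"context for {task}"

def plan(task: str, context: str) -> list[str]:
    # In practice: ask a model to decompose the task into subtasks.
    return [f"{task}: step {i}" for i in range(3)]

def verify(result: str) -> bool:
    # In practice: run evals, unit tests, or a critic model on the output.
    return result.startswith("answer(")

def run_agent(task: str, max_retries: int = 2) -> list[str]:
    context = gather_context(task)
    steps = plan(task, context)
    results = []
    with ThreadPoolExecutor() as pool:
        # Fan the subtask calls out in parallel, then verify and retry each.
        for step, result in zip(steps, pool.map(call_model, steps)):
            attempts = 0
            while not verify(result) and attempts < max_retries:
                result = call_model(step)  # retry a failed step
                attempts += 1
            results.append(result)
    return results
```

Real systems layer routing (cheap fast models for easy steps, expensive ones for hard steps) on top of the same loop.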
Speaker 1
I feel we're in the business as usual phase of AI. I think the stack is reasonably well defined.
And obviously it'll change and there'll be new things. But I feel like
Speaker 1 if anything, the last couple of months have been very clarifying in terms of the consolidation of the things that are short-term crucial.
Speaker 1 And there's the model layer of that and all the various accoutrements around agentic stuff and reasoning and et cetera.
Speaker 1 And obviously that will only accelerate and get dramatically better and it's on its own scaling curve. And then on the infrastructure layer, I think that's solidified a bit.
Speaker 1 I remember when RAG was a big deal as a new thing. You know, all these things, I feel like, are kind of falling into place — evals and how do you do them?
Speaker 1 And I think things are solidifying there with companies like Braintrust and others.
Speaker 1 And then I feel like on the application-layer side, I think I've bought into a notion we've been discussing for a year or two now, around AI really starting to impact different services-related industries and vertical applications and different use cases.
Speaker 1 And then I'm starting to finally see some inkling of consumer stuff again. And I think it's nascent and early, but at least people are trying.
Speaker 1 I feel like there were two or three years where nobody was really trying to do anything consumer, although one could argue that Perplexity and ChatGPT and Midjourney and all these sort of prosumer-y things were early consumer forays, right?
Speaker 1
And so maybe ChatGPT is the world's biggest new AI consumer product. I mean, Google was really the original one in some sense.
It feels like a period of brief consolidation.
Speaker 1 And in a handful of verticals, I think we're starting to see some of the winners emerge.
Speaker 1 And so I think it's an interesting clarifying time. And of course,
Speaker 1 the thing I say about AI is that the more I learn, the less I know, right? It's the only industry where I feel like the more I learn about the market, the more confused I am.
Speaker 1 I feel like there's this brief moment of clarity and then I'm guessing in a year, all bets are off and
Speaker 1
all sorts of things will scramble again. But at least for now, it feels to me like a few things have kind of fallen into place.
at least temporarily.
Speaker 2 This actually feels like a very comfortable time to invest for me, because to your point, it feels more like a, I don't know, maybe it's like inning three instead of inning one, where there's a little bit of stability in the ecosystem.
Speaker 2 There's a real goodness around standardization, some standardization of integration with different like MCP, I think is going to accelerate a bunch of development for people.
Speaker 2 Like I'm meeting companies where they set up a data source that is useful to the enterprise in some way that
Speaker 2 these models can interact with well. And they're like, oh, MCP server.
Speaker 1 Do you want to quickly explain to people Model Context Protocol and MCP and what that is and how it works.
Speaker 2 I'm going to fudge this, but I will try to describe it. So
Speaker 2 this is an attempt by Anthropic, came from
Speaker 2 Ben Mann's group and labs.
Speaker 2 It's called Model Context Protocol, which is an attempt to spec out a standard interface for connecting like model capabilities to systems where you already have useful data.
Speaker 2 That could be like documents, it could be logging, it could be business tools, it could be like the IDE, whatever. And
Speaker 2 Sam from OpenAI said like they're going to support it as well. And I think this is not a complete solution.
Speaker 2 It has gotten a lot of popularity with developers over a very brief period of time, but it's just how you expose your data to the model.
Speaker 1
And it's an open standard, so it's not proprietary. Anybody can use it.
And it's like a two-way connection between data sources and AI-powered tools.
Speaker 2 And big companies have done it. Yeah.
Speaker 2 I think there's still a bunch of work for developers to do in terms of like describing their tools and how to use them very specifically and cleanly, but it does make it much easier.
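[Editor's note: the actual Model Context Protocol is a JSON-RPC 2.0 spec with methods like `tools/list` and `tools/call`, and official SDKs exist; the toy below only illustrates the shape of the idea — describe your tools, let the model host call them — and is not the real wire format or SDK. The `search_docs` tool and its data are invented for the example.]

```python
import json

# Toy registry of tools this "server" exposes to a model host.
TOOLS = {
    "search_docs": {
        "description": "Search internal documents for a query string.",
        "handler": lambda args: [d for d in ["mcp intro", "billing faq"]
                                 if args["query"] in d],
    }
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style request, loosely mirroring MCP's
    tools/list and tools/call methods (simplified, not spec-compliant)."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        # The host discovers what this server can do.
        result = [{"name": n, "description": t["description"]}
                  for n, t in TOOLS.items()]
    elif req["method"] == "tools/call":
        # The host invokes a named tool with arguments chosen by the model.
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": result})
```

The point of the standard is exactly this two-step contract: a host first lists tools to learn what exists, then calls them — so any data source wrapped this way becomes usable by any compliant model client.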
Speaker 2 And I think it will accelerate agent development a lot. But going back to this idea of like, what does it mean for the ecosystem? I think the fact that you have like
Speaker 2 like you're accelerating the ways for models to interact with existing ecosystems, we expect agents to get better.
Speaker 2 You have a bunch of choices around model availability. As you said, there's this like clear pathway about how to automate certain types of work that is orchestration of these capabilities.
Speaker 2 And I think that's going to be super fertile.
Speaker 2 I do think it's very unclear what types of winning consumer experiences are possible here.
Speaker 2 There aren't consumer agents that don't look just like search or research, you know, in the large model products that are really working yet that I've seen. But I expect to see them this year.
Speaker 2 I'm excited about it.
Speaker 1 Yeah, I think it's cool stuff coming.
Speaker 2 When everything destabilizes, Elad and I will be back on No Priors. We'll talk to you all then.
Speaker 1 It's going to get unstable again, but I think it's a moment of calm. And calm is all relative, right? There's enormous innovation, huge changes coming, big technology waves, new things every week.
Speaker 1 But at least there's a little bit more of a view of okay, who are going to be some of the main players in some of these areas? And, you know, how do all these things fit together?
Speaker 1 So I think we should enjoy the calm while it lasts for the next week or whatever it is, the next few hours before the next thing drops.
Speaker 2 All right, signing off, y'all.
Speaker 1 Good to see you.
Speaker 2
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
Speaker 2 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.