AI's Future: Open Source or Closed Control?  | Dr. Travis Oliphant DSH #1333


April 17, 2025 30m
AI's Future: Open Source or Closed Control? 🤖 Join Sean Kelly on the Digital Social Hour as he sits down with Dr. Travis Oliphant, a trailblazing AI expert, to tackle one of the most pressing questions of our time. Is open source the key to AI's potential, or will closed control dominate its future? 🌐 This episode is packed with valuable insights on AI's rapid evolution, the role of big tech, and how open source could revolutionize industries, education, and even YOUR daily life. From the power of personalized learning to the ethics of AI governance, we're covering it all. 💡 Discover how AI is reshaping industries like healthcare, gaming, and education, and why owning your own AI might be the game-changer you didn't know you needed. Plus, hear fascinating stories about quantum computing, the rise of AI in chess and poker, and what open source really means for innovation. ♟️🎮 Don't miss out on this engaging and eye-opening conversation! Watch now and subscribe for more insider secrets. 📺 Hit that subscribe button and stay tuned for more thought-provoking episodes on the Digital Social Hour with Sean Kelly! 🚀

CHAPTERS:
00:00 - Intro
00:26 - Travis' Concerns with AI
03:17 - Closed Source vs Open Source AI
08:36 - Most Advanced AI Model
10:58 - Education and AI
12:02 - Benefits of Open Source
15:42 - Full Body MRI Technology
23:45 - Quantum Computing Insights
24:59 - Is AI Overhyped?
26:04 - Open Source AI Discussion
26:47 - Closing Remarks

APPLY TO BE ON THE PODCAST: https://www.digitalsocialhour.com/application
BUSINESS INQUIRIES/SPONSORS: jenna@digitalsocialhour.com

GUEST: Dr. Travis Oliphant
https://x.com/teoliphant
https://www.linkedin.com/in/teoliphant/

LISTEN ON:
Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-social-hour/id1676846015
Spotify: https://open.spotify.com/show/5Jn7LXarRlI8Hc0GtTn759

Sean Kelly Instagram: https://www.instagram.com/seanmikekelly/

The views and opinions expressed by guests on Digital Social Hour are solely those of the individuals appearing on the podcast and do not necessarily reflect the views or opinions of the host, Sean Kelly, or the Digital Social Hour team. While we encourage open and honest conversations, Sean Kelly is not legally responsible for any statements, claims, or opinions made by guests during the show. Listeners are encouraged to form their own opinions and consult professionals for advice where appropriate. Content on this podcast is for entertainment and informational purposes only and should not be considered legal, medical, financial, or professional advice.

#ainews #generativeai #openai #aitrends #airesearch

Listen and Follow Along

Full Transcript

to help improve our job prospects. You can learn to adjust really well, right? Really well.
What if you can learn to trade really well? What if you can learn to...? You probably can. I totally agree.
So that's exciting to me. But to do that, what we need are millions of professionals, millions of people, tens of millions of people, hundreds of millions of people, billions of people all using AI for their purposes.
That's amazing.

Okay, guys, got Travis here today.

We're going to talk AI with one of the pioneers in the space.

Thanks for hopping on today.

Absolutely. Great to be here, Sean.

Yeah, the space is evolving so fast.

Does it concern you at all?

Yeah, it concerns me for a number of reasons,

but probably not the same reasons other people think.

I think there's a lot of things happening quickly and a lot of people trying to make sense of it quickly, even though there's not a lot of understanding of how it actually works. And so there's a lot of uncertainty that can lead to confusion.
So that probably concerns me more than anything: that uncertainty leading to rapid action and not thoughtful action. Yeah.
What are the biggest concerns and red flags you're seeing right now? So, overreaction by governments is one that concerns me. You know, people trying to pass laws and make regulations where they don't really understand what the implications of those are. So you end up with rules and patterns that don't really fit what emerges. Yeah. So that concerns me. I think the other thing that concerns me is a lot of closed source companies just trying to own the space. A whole lot of, kind of, like a land grab, you know? Oh, here's this AI space. Let's grab all the attention.
Whereas I'm a really big proponent of people learning from AI and making it part of their toolbox. Yeah.
You know, ultimately letting us become better agents for ourselves by having AI as a tool that we all can use. So there's kind of this land grab going on where a lot of information flow is happening to a few companies.
So that concerns me too. I want to see AI knowledge diffuse and disperse and have lots of people use it effectively.
But you know, there's a lot of money sort of advertising, promoting. You know, it's amazing how quickly people can be informed by narratives, right? We're sort of driven by narratives. We seek out narratives and worldviews and ways to think, and without critical thinking, without background, you can easily be persuaded by something that just isn't true. Especially with social media these days. Yeah, exactly, exactly. And so AI could be used to actually amplify that capability, because people are good at it, but what if you had AI be even better at it, right? In one sense, that's why social media has been challenging: even stupidly, not with any intent at all, but just trying to get eyeballs, AI algorithms have already been feeding people information that they want to hear. And so it reinforces confirmation bias and the cognitive dissonance that happens all the time.
So you're just basically being reinforced in what you want to hear. And it's creating polarization in our society.
Right. People are creating enemies out of those who could be friends.
And that's one of the things I think, I don't want AI to amplify that. I want to see how can we use AI to understand each other better and actually maybe show a little more empathy to each other and understand how, hey, you know, we're not that different.
We have our differences. That could be beautiful.
But let's not emphasize them because that can lead to conflict. Yeah.
You said closed source earlier. Could you explain what that is and which companies are closed source? Absolutely.
So closed source, it's actually been the norm for a long, long time. Well, if you go way, way back to when software first came around, the software came with the hardware.
That's because people basically competed on: here's new hardware, here's a new machine to run your business. And the software was all open source.
They didn't have the term then. It was just, you could use the software.
Then in the 80s and early 90s, people would say, oh, wait, these are valuable. Microsoft was actually one that said that, and Apple, and those companies emerged.
They went, hey, there's value in the software. We can't give it away, so we can't show the code.
Because if you show the code, then people can potentially take it, derive from it, build from it. So they would close the code, and then applications would be built from that closed code.
Now, a lot of software still gets built closed source, and that's fine. It's not like that's some kind of moral evil, to close the source.
But it does create challenges for innovation. Open source is a movement that started around the time Linux came around.
You know, Linux, you've heard of it. It's an operating system that is essentially why we have cloud computing today.
Wow. It's this massive operating system that now runs all the servers.
Yeah. You know, it's a pretty impressive kind of movement.
And it's the reason AWS exists. It's the reason GCP exists.
Absolutely. It's hugely impactful.
So open source has been an extremely impactful social movement, and that's probably the way to describe it. I started participating in open source in the late 90s, when I was a graduate student. I'm kind of a geek at heart. I'm a science geek who loves physics and loves math and loves to kind of make things, and I needed software to do it. And so I got wrapped into this open source movement because I liked how, when I did the work, I could share it with others.
And that's essentially a lot of us, you know, millions of people have been pulled into this open source ecosystem, sharing their code with each other. So it's kind of this interesting world that's emerged over the past 30 years where people share code.
There's places that that code can be seen and people can build from it. There's lots of movements around that code.
So open source is just this phenomenon of sharing your code so others can use it. Closed source is, you've got to license the code to use it.
But with open source, there are lots and lots of perils. We could have long conversations about what open source means, how it derives value, how you make money from it. In fact, my story and what I'm doing now really starts there.
I loved open source. I love the engagement that it created.
I love the fact that I could share. People could comment.
People could work with me. And I'd build a community.
Love that. That's cool.
Just, you know, because all of us need community and tribe. In fact, I think that's a critical thing to understand about human behaviors.
You want to have your, your tribe. You want to have your community.
Open source gave a place for people to have community. Yeah.
So is ChatGPT open source? No. So is that the whole dilemma with them and Elon? Yes.
That's part of it. That's part of it.
I mean, some part of it is just egos, right? But a big part of it is the fact that Elon gave them money to build open source AI. Got it.
The reason they started was that Elon was concerned about Google having all the knowledge of AI. So some of the same concerns that I'm expressing, Elon expressed years ago, where he was saying, look, we need to make sure that AI, as it emerges, isn't just controlled by a few hands.
We have to have lots of people aware of how to use this. And he was worried that Google was actually collecting all the AI experts, and with their DeepMind, they were advancing very, very rapidly.
So OpenAI was basically an initial tranche of, hey, let's go give some money, create a foundation, and have open source AI. But then, you know, things changed.
There were some different opinions and I don't know Sam well enough and I don't know quite what drove those decisions.

I can understand there are probably some good reasons, and reasons that I wouldn't agree with. But so they pushed for a kind of closed AI, and then, you know, came the release of ChatGPT, which had this phenomenal explosion in the world, of people going, oh, these models that scientists have been working on for decades can do interesting things, like predict words reliably, predict phrases that sound realistic.
And then going beyond that, from just words to music, and to audio, and to video, and to images. Now they could go on.
Yes, exactly. We can actually produce a podcast.
And it sounds decent too. It does.
Yeah, it's scary. It is.
No, there's a company I've been consulting with

called Zyphra.

They're out in Palo Alto

and they have a mechanism

to produce speech from text

that produces realistic voice models

that sound just like somebody.

You clone yourself.

Yeah, and that model is cool to me

because I'm an audio learner.

Like I love podcasts and audiobooks.

So when you can do that,

you can learn really fast.

Absolutely. So I'm more of a visual guy, but I love audiobooks and I love podcasts too.
Yeah. So I understand.
I like to listen at 2x speed. Same.
Sometimes 2x. Sometimes 2x.
I know the recent ones can go up to 3x. And I'm going, some people I can listen to at 3x speed.
Same. But not Ben Shapiro.
Yeah, he talks too fast. Even 2x with Ben is tough.
It's true. But you're sitting there going, wait, can I process all this information that quickly? Yeah, but sometimes when it's a seven-hour Rogan episode, I'll do like 2x for sure.
Yes, right. And that's, oh, that was only three hours.
Yeah, he's had some long ones lately, man. So true.
So there are AI companies coming out of China now. Who do you think has the most advanced one? We're filming this in March 2025.
Yeah, well, right now, China seems to have some really cool advanced models. DeepSeek showed that it's very advanced.
But with Gemini, Google's actually showing some advanced models. Anthropic is showing advanced models.
Actually, some of the open source models are also getting to where they're comparable. So fortunately, it's no longer just a matter of who has the best models.
You really have to start asking for what purpose. Got it.
Right. It's no longer there's one best model.
It's okay. What are you trying to do with this? What's your goal? Do you want to summarize text? Do you want to clone a voice? Do you want to run a podcast? I think that's the future I'm excited about is we're getting away from this race towards the God AI.
Right. Now that's still there.
And there's still a lot of messaging about that. I'm definitely in the camp of, we're not going to incrementally get to the artificial general intelligence or the human-like intelligence.
What we have is definitely a clear intelligence system that may be a part of how the human mind works, but it's not the complete thing. And so that's cool.
But really, any value coming out of that is coming from a system that's produced. So you take the model, you take some other hardware, you take some computing capability, and you stitch it together into a system.
Right now, as we're speaking, this Manus is all the rage. Right.
It just came out, along with a few others, this past week, and everybody's kind of

going, whoa, this is amazing.

Cause I can run my business.

I can do my research report.

I can run a stock report.

I can file my taxes.

That's what they think.

I mean, it's making games.

Grok has a great model too, actually.

Grok 3 was just released, and it's beating a lot of the other models in a lot of measurements.

Wow.

So Grok is also really a fantastic base model, and they have a deep search, and they have kind of additional modules around the model that they're starting to release as well, that people are going to experiment with. But honestly, Sean, it's really early. So it's easy to kind of have these F1 race concepts,

but it's not really the mental model that works

because everyone has to kind of ask the question,

what am I trying to use this for?

And what for me is going to be a valuable tool?

And that's going to be the most productive question.

Like for me, like on the side, I'm a chess player.

AI has revolutionized the chess space.

It's caused players to become a lot better.

For example, I played Andrew Tate in chess yesterday. You did.
And I beat him. Really? Because, think about this.
He played chess his whole childhood, but there was no AI or computers back then. So to get better at chess was really hard.
Yeah. Now when I play on the chess.com app on my phone, AI analyzes every single game and I can see where I messed up, so I can get better way quicker.
So I love that. I think that's a fantastic use case of AI.

I think it's an important one too.

It's about helping humans get better.

Like I'm a big advocate

for natural intelligence.

Like we have not optimized

how humans learn.

Right.

In fact, I think our education system,

at least in the United States,

is really, really bad.

Really bad.

Terrible.

And a lot of it is systemic.

And schools are banning AI.

And that's completely a mistake. Yeah.
Because AI needs to be used to help with exactly this, to make personalized education more possible. It can help you take an interest you have, and in that moment of interest, amplify your capability and your ability to iterate and learn. Powerful. So good. Actually, there's a guy, Gerald Chan. He might be a little annoyed that I talk about him on this podcast, but he's an investor. He's somebody that invested in Anaconda. But he gave a talk at Berkeley just a few weeks ago about the role AI can have in improving education. It was actually quite inspiring. I need to watch that. Yeah, I don't think the video's out there, but I can send you the paper. For anybody interested, I know he's willing to let the paper be spread. It is a phenomenal discussion of something I think is a critical question.
Because one of the things people are worried about appropriately is how will AI disrupt my work? You know, people are worried about unemployment. They're worried about jobs.
What if AI takes away my job? I'm not a fear-based person. I think that kind of commentary is useful, but it can be paralyzing.
It's usually better to turn it into, okay, what do I need to use AI for to help? And I think we can use AI to help improve our job prospects. You can learn to adjust really well, right? Really well. What if you can learn to trade really well? What if you can learn to...? I totally agree.
So that's exciting to me. But to do that, what we need are millions of professionals, millions of people, tens of millions of people, hundreds of millions of people, billions of people all using AI for their purposes. So we need to convert AI from being a thing somebody else does to us into a tool that we all use to better ourselves and improve our lives.
That's what I'm about. That's what this open source AI foundation that I recently started working with and joining is all about.
It's recognizing, like I just said, that Linux as an open source operating system gave rise to the cloud. That same phenomenon of using open source AI will give rise to a future we can't even expect, if we keep people in charge of it.
And a lot of people in charge of it, not just one or two people, not just a few thousands of people, but millions and billions of people having access to similar tools. Yeah.
Just level the playing field and help people engage with each other. Now, a lot of people go, wait, that's going to change everything. It could. And so, I'm not all for rapid disruption. How do we do this in a measured way, where people are accountable and people have ways to work together, and people do it in their communities and do it in their families and do it in their tribes and do it in their virtual groups? That's how we're already organized as humans, in all these little different governance groups.
AI can help us each organize better, help us relate to each other better. And it can bring about this incredible world.
I believe. So that's what I'm about.
That's what I love to try to promote. Open source is how we got here.
I've been involved in open source for a long, long time. I started as a scientist.
Wow. Really, it was during my master's degree, using satellite images to measure backscatter off the earth.
Holy crap. Yeah.
It's intense. It was intense, but it's also very, you know, it's math.
I mean, I know math is not for everybody. Yeah.
But I love math and I love learning as much math as I could. And to me, math is just a tool.
It's a tool that lets you get insight from data. And we did that with the satellite data. With backscatter, you basically have electromagnetic radiation.
So you like beam a radar down to the earth, you measure what comes back and then you try to infer what that means about the ice field, about the wind speed and direction over the ocean, about the plant vegetation. So that was my first experience with large-scale data processing.
But I went to the medical area to try to do the same with images, with MRI, with ultrasound. And that industry could progress faster.
It's a little more regulated, and so progress is slower.
That's another topic we could go into, but probably on a different day. But go ahead, yeah.
Have you heard of Prenuvo? I have not. Full body MRI.
They use AI to analyze the MRI. The problem is it's expensive.
So most people can't afford this. But yeah, I got it.
They used AI to analyze my results. I learned a lot about my body.
Really? And that's where I hope the future of medicine goes. And that's the same with my dentist.
So there's holistic dentists now. Yeah.
They'll take photos of your teeth and throw them into AI. It was finding my cavities and it was finding my gum infections.
Sean, I love that. Yeah.
That's actually why I went to get a PhD, to make imaging better. Actually, because I think that's possible.
And if you've looked into why things are so expensive, there are some reasons for it, but these things can be made less expensive.
We could have MRI technology at least as pervasive as dental imaging, right? At least pervasive enough that your local doctor could have one. I hope we get there. Because I think theirs was pricey too. Yeah, it was like $2,500, which is a lot for a full body MRI, you know. It is. And some of that's the magnet; it's expensive. But some of it's the processing. And some of it is, you can actually save money if you don't put as much effort into building a very homogeneous magnetic field. Right.
But that requires better data processing. And so your point, if AI can help us process data better, then we can have MRI more ubiquitous.
Yeah. For less of a cost.
So for that, you had to get a doctor to manually review every result? It is.
And also, it was expensive to make the field uniform so that the processing was simple. That's the big thing.
Because right now, that's how MRIs work: the processing is relatively simple from a mathematical point of view. But if you have the field slightly inhomogeneous, then the processing is a lot harder, though potentially still possible.
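As a rough illustration of why the homogeneous-field case keeps the math simple: with an idealized uniform field, the scanner effectively samples the image's spatial frequencies (k-space), and reconstruction is essentially one inverse Fourier transform. A toy sketch with synthetic data (real scanners involve many more correction steps):

```python
import numpy as np

# An idealized homogeneous-field MRI: the measurement is the 2D Fourier
# transform of the image (k-space), so reconstruction is one inverse FFT.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0                # a simple square "phantom"

kspace = np.fft.fft2(image)              # what the scanner measures (idealized)
recon = np.abs(np.fft.ifft2(kspace))     # reconstruction: one inverse FFT

print(np.allclose(recon, image))         # prints True
```

An inhomogeneous field distorts this simple Fourier relationship, which is why the processing gets much harder and where data-driven reconstruction could help.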
And with AI, hey, we can get there. I'm also excited about AI in design, AI helping scientists iterate faster.
Just like you said with chess, you learn quickly. What if scientists learn more quickly about how to, you know, what does this mean? What if I make this change? What does that mean? There's a big saying I've come to say all the time, which is innovation is iteration.
Like the speed of iteration determines your speed of innovation. Yeah.
Yes, you need creativity. Yes, you need, you know, it's people who pull that off.
But iterating is really the key to progress. Yeah.
Yeah, I'm also a big poker fan. AI has revolutionized poker.
I bet it has. They call them solvers, and it shows you the best strategy: how to play each hand, and when and what to bet.
Two-card, I mean, Texas, the Texas poker? Texas Hold'em, yeah. Well, it has all the different parameters. But people have gotten so much better at poker now. I agree. It's actually a corollary of something I always say, which is, you know, for your job, it's not about being replaced by AI, it's about being replaced by someone that knows how to use AI better. Exactly right. So, you know, if you're worried about your job and AI, just turn that into motivation to learn to use AI. Yeah.
Same with my video editors. A lot of them are using AI now to find clips.
And it's like, I love that. Like, I don't want to replace you.
Right. I want you to be able to like, give me a ton of clips.
Right. We still need the human connection.
I really am a promoter of accountability with people. Like, you're not going to have AI be accountable.
In fact, that's really the root of it. Like, oh, well, even the Tesla cars can drive you now, but you still have to sit in the driver's seat.
I know there are self-driving cars going around cities. The Waymos.
The Waymos are showing up. But a big part of that is actually liability. Who's liable if something goes wrong?
Right. Right.
What if it crashes, or something crashes? What if there's a problem? So ultimately, that's the real question that has to be resolved. And it will be resolved through accountability.
Layers. Right.
So my answer is, accountability is with individuals. And if you have a tool that's AI, then you're still accountable. Right.
My developers, so, I have a few companies I've worked at, and developers that work with me. And I tell them, look, use AI all you want, but the code you commit to a repository and ship to a customer, you're accountable for that code.
You're responsible for that code. You can't say the AI made me do it.
Right? It's fine if the AI helped you. Yeah.
Totally behind that. Like you do that all day long.
But at the end of the day, you're accountable for what you're shipping. Because you can't sue AI yet.
Right. Yes.
And that's a different question. But we're not, we're not even close to having that conversation.
Yeah. Let's give that about 10 years.
Yeah. We're not fully there yet.
Right. Right.
Do you have fears that some models can go haywire without proper regulation? I think, yes, models could definitely go haywire. I think they already have in a way, in terms of how they disrupted our social contract with each other, our social connectivity.
Yes, they can go haywire. The regulation question, I think, is I'm all for governance.
I'm all for people governing, learning those principles of governance. Every community has governance.
You don't have community without some amount of governance. I'd rather have it be at that level rather than at a huge scale. So you don't want the federal government involved? I don't want the federal government, I'm not saying they should be involved in everything.
I want them to be restricted to the things they need to care about. Right.
Right. Not just lay out all AI policy.
That would be, I think, an ill-suited idea right now. Because it evolves so fast, by the time they lay out policies, so much could change.
Then they got to keep updating it. But individual departments could have policies about how they use AI in their department.
Right. For example, Health and Human Services could have a strategy for AI adoption and how they use it.
Yeah. I think that's true.
But just having some law about AI, because, anyway, what do we mean by AI? You know, AI is just a math program. It really is just arrays.
It's multiplying numbers together and then summing them up, with a little nonlinear function in the middle. You end up with this.
It's just math. So we're going to regulate math.
Okay. How are we going to do that exactly? I agree.
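The "it's just math" description can be made concrete. A minimal sketch of a two-layer network; the weights here are arbitrary illustrative numbers, not from any real model:

```python
import numpy as np

# One layer: multiply numbers together, sum them up,
# and put a little nonlinear function in the middle.
def layer(x, W, b):
    return np.maximum(0.0, W @ x + b)  # matrix multiply + sum, then ReLU

rng = np.random.default_rng(0)          # arbitrary illustrative weights
x = rng.standard_normal(4)              # input: an array of 4 numbers
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)

hidden = layer(x, W1, b1)               # nonlinearity "in the middle"
output = W2 @ hidden + b2               # second layer: still just arithmetic
print(output.shape)                     # prints (2,)
```

Stacking many more of these layers, with billions of weights instead of dozens, is structurally all that the large models are doing.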
The SEC has been trying to regulate crypto for years, and they mess up. It's a mess.
It's a mess. They actually, they do more harm than good.
I think they go too far. So, when I was younger, I was quite libertarian, quite open, like, get rid of all regulation.
And, you know, in some future world I might imagine that experiment being interesting, but ultimately I recognize that a value of regulation is to avoid suing each other. Because that's the thing you're trying to avoid.
You're avoiding the problem of, hey, we're debating this, and I'm mad at you because you did this.
You'd have lawsuits everywhere. You could slow down the whole economy because you've got people suing each other with a very inefficient judicial system, or a very unjust one too, where sometimes it's not a judicial system at all, it goes into arbitration.
Anyway, there are real problems there. So I can see the value.
I can see the value of regulation. I see the value of good rules. But what are the good rules? How do we know what those rules are? We can know, but only if we have enough context, enough understanding, enough experience. Right now, AI is just so new, and the rules might be different when it comes to this industry versus that industry. Right, one industry versus another. It's a great point. So I think we ought to just let people... and I'm not saying we throw caution to the wind and let people hurt each other. I'm not saying that. Right. If you have a claim against somebody because they used AI, you have that claim. Those rules already exist. Yeah. It's going to be interesting to see how it plays out. Because let's say you ask AI for stock advice, and it just gave financial advice. Can you go after them for that if you lose money? Well, I guess, I think most of the general AI systems have a lot of terms that say, you know, you can't sue us for stuff you did with this. Right. And that's pretty fair. But if somebody came out with a financial advisor, right, and then offered stock advice, and you already have to register to do that, then yeah, you potentially could.
But most of those financial advisors have all kinds of things you sign saying, no, I recognize this is advice and I'm responsible. So I see it again: we've already got systems, and those systems can be improved.
Maybe AI can help us improve them, but let's not panic over AI. I think the thing to be concerned about is: is AI open? Is AI available? And can people actually use it for their accountability? Really, we want to make AI as distributed as possible.
Agreed. I'm hearing a lot about quantum computing, as someone in the crypto space. That seems to be advancing rapidly. They're actually saying it's going to be so advanced they could hack into wallets in a few years. That's what they're saying. I tend to be skeptical of those statements. Yeah, I've been on record as being a quantum skeptic for a long time. Okay. Not that there isn't something there. There is.
There are some really cool things that happen, but we have a really hard time organizing a bunch of quantum bits together and understanding what even that means. Quantum is one of those areas where we're still trying to figure out what it means; nobody knows.
Quantum mechanics is a description of nature that just gives us a way to predict what nature will do, but what does it mean? We don't know. And so it's easy to get hyped up.
The other metaphor is, I'm an electromagnetics guy, and we had optical computers back in the day. Optical computers can do really fast things, like take the Fourier transform really, really quickly, just by propagating light.
But we don't have optical computers today. They could be useful for some things, like MRI and image reconstruction.
You could do that in an optical computer very fast. But the infrastructure of optical computers, actually building them and the whole ecosystem around them, is really expensive.
So I understand why quantum computers are exciting, but quite often they're overhyped, right? They're an interesting research topic and an interesting idea, and I'm not saying nobody should ever invest in them. I think they're worth investing in.
But for most of us, I think it's going to be a non-issue. We won't even realize what's happening.
I remember when I was in high school, everyone said 3D printing was the future. We're going to print houses with 3D printing.
Exactly. And then it flopped.
And then it flopped, right? Yeah. I wonder if quantum's going to be like that.
Quantum's kind of like that, in the sense that it's really cool tech and really cool science. And, you know, honestly, the only thing is, okay, just make your cryptographic keys longer.
Then you're fine. Seed phrases are 16 words now.
Maybe they should make it like 48. Exactly.
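The "make it longer" intuition can be put in numbers. A rough sketch, assuming a BIP-39-style 2048-word list (the 16- and 48-word counts from the conversation are kept for illustration; real BIP-39 phrases are 12-24 words, and a checksum slightly reduces the raw figures):

```python
import math

# Each word in a BIP-39-style phrase is drawn from a 2048-word list,
# so each word contributes log2(2048) = 11 bits of selection entropy.
def phrase_entropy_bits(num_words, wordlist_size=2048):
    return num_words * math.log2(wordlist_size)

for n in (12, 16, 24, 48):
    print(f"{n} words -> {phrase_entropy_bits(n):.0f} bits")

# Doubling the word count doubles the bits, squaring the number of
# guesses a brute-force search would need.
```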
And so, there's quantum. I think it's worth thinking about, but most of what I've seen is people getting a little bit overhyped about it, because they believe all the rest of it.
Again, I still love science. I like the work that people are doing.
I don't want to dismiss the great work of the scientists. I think they're doing amazing things.
I just think commercially, it's not something on our horizon in the next 15 years. Makes sense.
Travis, anything else you're working on or want to close off with here? Yeah, well, I'm working on helping to make sure open source AI exists. Make it open source again.
Yes, exactly. We have a phrase actually, make AI open source again.
Oh, I love it. Make AI open source again.
The whole institution behind AI can be better, can be awesome if we make it open and help people own their own AI.

That's a big one.

People need to own their own AI.

Rather than send all your data to somebody else and use a closed model,

own your own AI

and have the model serve you and your data.

Keep your data as your own.

I love it.

Well, we'll link all your companies below

and your social media handles.

Thanks for coming on.

Sean, great to be with you.

Thanks.

Check them out, guys.

And I'll see you next time.

All right, take care.