Sam Bankman-Fried - Crypto, Altruism, and Leadership

45m

I flew to the Bahamas to interview Sam Bankman-Fried, the CEO of FTX! He talks about FTX’s plan to infiltrate traditional finance, giving $100m this year to AI + pandemic risk, scaling slowly + hiring A-players, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Episode website + Transcript here.

Follow me on Twitter for updates on future episodes

Subscribe to find out about future episodes!

Timestamps

(00:18) - How inefficient is the world?

(01:11) - Choosing a career

(04:15) - The difficulty of being a founder

(06:21) - Is effective altruism too narrow-minded?

(09:57) - Political giving

(12:55) - FTX Future Fund

(16:41) - Adverse selection in philanthropy

(18:06) - Correlation between different causes

(22:15) - Great founders do difficult things

(25:51) - Pitcher fatigue and the importance of focus

(28:30) - How SBF identifies talent

(31:09) - Why scaling too fast kills companies

(33:51) - The future of crypto

(35:46) - Risk, efficiency, and human discretion in derivatives

(41:00) - Jane Street vs FTX

(41:56) - Conflict of interest between broker and exchange

(42:59) - Bahamas and Charter Cities

(43:47) - SBF’s RAM-skewed mind

Unfortunately, audio quality abruptly drops from 17:50-19:15

Transcript

Dwarkesh Patel 0:09

Today on The Lunar Science Society Podcast, I have the pleasure of interviewing Sam Bankman-Fried, CEO of FTX. Thanks for coming on The Lunar Society.

Sam Bankman-Fried 0:17

Thanks for having me.

How inefficient is the world?

Dwarkesh Patel 0:18

Alright, first question. Does the consecutive success of FTX and Alameda suggest to you that the world has all kinds of low-hanging opportunities? Or was that a property of the inefficiencies of crypto markets at one particular point in history?

Sam Bankman-Fried 0:31

I think it's more of the former, there are just a lot of inefficiencies.

Dwarkesh Patel 0:35

So then another part of the question is: if you had to restart earning to give again, what are the odds you'd become a billionaire if you couldn't do it in crypto?

Sam Bankman-Fried 0:42

I think they're pretty decent. A lot of it depends on what I ended up choosing and how aggressive I end up deciding to be. There were a lot of safe and secure career paths before me that definitely would not have ended there. But if I dedicated myself to starting up some businesses, there would have been a pretty decent chance of it.

Choosing a career

Dwarkesh Patel 1:11

So that leads to the next question—which is that you've cited Will MacAskill's lunch with you while you were at MIT as being very important in deciding your career. He suggested you earn-to-give by going to a quant firm like Jane Street. In retrospect, given the success you've had as a founder, was that maybe bad advice? And maybe you should’ve been advised to start a startup or nonprofit?

Sam Bankman-Fried 1:31

I don't think it was literally the best possible advice because this was in 2012. Starting a crypto exchange then would have been…. I think it was definitely helpful advice. Relative to not having gotten advice at all, I think it helped quite a bit.

Dwarkesh Patel 1:50

Right. But then there's a broader question: are people like you who could become founders advised to take lower variance, lower risk careers that, in expected value, are less valuable?

Sam Bankman-Fried 2:02

Yeah, I think that's probably true. I think people are advised too strongly to go down safe career paths. But I think it's worth noting that there's a big difference between what makes sense altruistically and personally for this. To the extent you're just thinking of personal criteria, that's going to argue heavily in favor of a safer career path because you have much more quickly declining marginal utility of money than the world does. So, this kind of path is specifically for altruistically-minded people.

The other thing is that when you think about advising people, I think people will often try and reference career advice that others got. “What were some of these outward-facing factors of success that you can see?” But often the answer has something to do with them and their family, friends, or something much more personal. When we talk with people about their careers, personal considerations and the advice of people close to them weigh very heavily on the decisions they end up making.

Dwarkesh Patel 3:17

I didn't realize that the personal considerations were as important in your case as the advice you got.

Sam Bankman-Fried 3:24

Oh, I don’t think in my case. But, it is true with many people that I talked to.

Dwarkesh Patel 3:29

Speaking of declining marginal utility of consumption, I'm wondering if you think the implication of this is that, over the long term, all the richest people in the world will be utilitarian philanthropists, because they don't have diminishing returns on consumption. They're risk-neutral.

Sam Bankman-Fried 3:40

I wouldn't say all will, but I think there probably is something in that direction. People who are looking at how they can help the world are going to end up being disproportionately represented amongst the most and maybe least successful.
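To make the declining-marginal-utility point concrete, here is a minimal sketch with invented numbers: a roughly risk-neutral altruist (near-linear value per marginal dollar at world scale) takes a positive-expected-value gamble that someone with log utility over personal wealth would refuse.

```python
import math

# Illustrative only: a guaranteed career outcome vs. a risky founder bet.
safe = 1_000_000                          # guaranteed $1M
risky = [(0.9, 0.0), (0.1, 20_000_000)]   # (probability, outcome) pairs

def expected(outcomes, value):
    return sum(p * value(x) for p, x in outcomes)

linear = lambda x: x                 # risk-neutral: every dollar counts about equally
log_u = lambda x: math.log(x + 1)    # personal consumption: sharply diminishing returns

print(expected(risky, linear), ">", linear(safe))  # 2,000,000 > 1,000,000: take the risk
print(expected(risky, log_u), "<", log_u(safe))    # ~1.68 < ~13.8: play it safe
```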

The difficulty of being a founder

Dwarkesh Patel 3:54

Alright, let’s talk about Effective Altruism. So in your interview with Tyler Cowen, you were asked, “What constrains the number of altruistically minded projects?” And you answered, “Probably someone who can start something.”

Now, is this a property of the world in general? Or is this a property of EAs? And if it's about EAs, then is there something about the movement that drives away people who could take leadership roles?

Sam Bankman-Fried 4:15

Oh, I think it's just the world in general. Even if you ignore altruistic projects and just look at profit-minded ones, we have lots of ideas for businesses that we think would probably do well, if they were run well, that we'd be excited to fund. And the missing ingredient quite frequently for them is the right person or team to take the lead on it. In general, starting something is brutal. It's brutal being a founder, and it requires a somewhat specific but extensive list of skills. Those things end up making that kind of person highly in demand.

Dwarkesh Patel 4:56

What would it take to get more of those kinds of people to go into EA?

Sam Bankman-Fried 4:59

Part of it is probably just talking with them about, “Have you thought about what you can do for the world? Have you thought about how you can have an impact on the world? Have you thought about how you can maximize your impact on the world?” Many people would be excited about thinking critically and ambitiously about how they can help the world. So I think honestly, just engagement is one piece of this. And then even within people who are altruistically minded and thinking about what it would take for them to be founders, there are still things that you can do.

Some of this is about empowering people and some of this is about normalizing the fact that when you start something, it might fail—and that's okay. Most startups, and especially very early-stage startups, shouldn't be run to maximize the chances of having at least a little bit of success. But that means you have to be okay with the personal fallout of failing, and we have to build a community that is okay with that. I don't think we have that right now; I think very few communities do.

Is effective altruism too narrow-minded?

Dwarkesh Patel 6:21

Now, there are many good objections to utilitarianism, as you know. You said yourself that we don't have a good account of infinite ethics—should we attribute substantial weight to the probability that utilitarianism is wrong? And how do you hedge for this moral uncertainty in your giving?

Sam Bankman-Fried 6:35

So I don't think it has a super large impact on my giving. Partially, because you'd need to have a concrete proposal for what else you would do that would be different actions-wise—and I don't know that I've been compelled by many of those. I do think that there are a lot of things we don't understand right now. One thing that you pointed to is infinite ethics. Another thing is that (I'm not sure this is moral uncertainty, this might be physical uncertainty) there are a lot of chains of reasoning people will go down that are somewhat contingent on our current understanding of the universe, which might not be right. And certainly if you look at expected-value outcomes, it might not be right.

Say what you will about the size of the universe and what that implies, but some of the same people make arguments based on how big the universe is and also think the simulation hypothesis has decent probability. Very few people chain through, “What would that imply?” I don't think it's clear what any of this implies. If I had to say, “How have these considerations changed my thoughts on what to do?”

The honest answer is that they have changed it a little bit. And the direction that they pointed me in is things with moderately more robust impact. What I mean by that is: one way that you can calculate the expected value of an action is, “Here's what's going to happen. Here are the two outcomes, and here are the probabilities of them.” Another thing you can do is say (it's a little bit more hand-wavy), “How much better is this going to make the world? How much does it matter if the world is better in generic, diffuse ways?” Typically, EA has been pretty skeptical of that second line of reasoning—and I think correctly. When you see that deployed, it's usually nonsense. Usually, when people are pretty hard to nail down on the specific reasoning for why they think that something might be good, it's because they haven't thought that hard about it or don't want to think that hard about it. The much better analyzed and vetted pathways are the ones we should be paying attention to.

That being said, I do think that sometimes EA gets too narrow-minded and specific about plotting out courses of impact. And this is one of the reasons why: people end up fixating on one particular understanding of the universe, of ethics, of how things are going to progress. But all of these things have some amount of uncertainty in them. And when you jostle them, some theories of impact behave somewhat robustly and some of them completely fall apart. I've become a bit more sympathetic to ones that are a little robust under different thoughts about what the world ends up looking like.

Political giving

Dwarkesh Patel 9:57

In the May 2022 Oregon congressional election, you gave $12 million to Carrick Flynn, whose campaign was ultimately unsuccessful. How have you updated your beliefs about the efficacy of political giving in the aftermath?

Sam Bankman-Fried 10:12

It was the first time that I gave on that scale in a race. And I did it because he was, of all the candidates in the cycle, the most outspoken on the need for more pandemic preparedness and prevention. He lost—such is life. In the end, there are some updates on the efficacy of various things. But, I never thought that the odds were extremely high that he was going to win. It was always going to be an uncertain close race. There's a limit to how much you can update from a one-time occurrence. If you thought the odds were 50-50, and it turns out to be close in one direction or another, there's a maximum of a factor-of-two update that you have on that. There were a bunch of sort of micro-updates on specific factors of the race, but on a high level, it didn’t change my perspective on policy that much.
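The “factor-of-two” cap he mentions follows from Bayes' rule. Formalizing it briefly (our notation, not his): if the outcome E was judged roughly 50-50 in advance, then for any hypothesis H about what works,

```latex
% Since P(E | H) <= 1 and the observed outcome E had probability P(E) ~ 1/2:
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
\;\le\; \frac{P(H)}{P(E)} \;\approx\; 2\,P(H).
\]
```

So a single coin-flip-like outcome can at most roughly double (or halve) your credence in any theory of what happened.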

Dwarkesh Patel 11:23

But does it make you think there are diminishing or possibly negative marginal returns from one donor giving to a candidate? Because of the negative PR?

Sam Bankman-Fried 11:30

At some point, I think that's probably true.

Dwarkesh Patel 11:33

Continuing on the theme of politics, when is it more effective to give the marginal million dollars to a political campaign or institution to make some change at the government level (like putting in early detection)? Or when is it more effective to fund it yourself?

Sam Bankman-Fried 11:47

It's a good question. It's not necessarily mutually exclusive. One thing worth looking at is the scale of the things that need to happen. How much are things like international cooperation important for it? When you look at pandemic prevention, we're talking tens of billions of dollars of scale necessary to start putting this infrastructure in place. So it's a pretty big scale thing—which is hard to fund to that level individually. It's also something where we're going to need to have cooperation between different countries on, for example, what their surveillance for new pathogens looks like. And on vaccine distribution: if some countries have great distribution of vaccines and others don't, that's not good. It's both not fair and not equitable to the countries that get hit hardest. But also, in a global pandemic, it's going to spread. You need global coverage. That's another reason that government has to be involved, at least to some extent, in the efforts.

FTX Future Fund

Dwarkesh Patel 12:55

Let's talk about Future Fund. As you know, there are already many existing Effective Altruist organizations that do donations. What is the reason you thought there was more value in creating a new one? What's your edge?

Sam Bankman-Fried 13:06

There's value in having multiple organizations. Every organization has its blind spots, and you can help cover those if you have a few. If OpenPhil didn't exist, maybe we would have created an organization that looks more like OpenPhil. They are covering a lot of what we're looking at—we're looking at overlapping, but not identical things. I think having that diversity can be valuable, but pointing to the ways in which we intentionally designed it to be a little bit different from existing donors:

One thing that I've been really happy about is the re-granting program. We have a number of people who are experts in various areas to whom we've basically donated pots that they can re-grant. What are the reasons that we think this is valuable? One thing is giving more stakeholders a chance to voice their opinions, because we can't possibly be listening to everyone in the world directly and integrating all those opinions to come up with a perfect set of answers. Distributing it and letting them act semi-autonomously can help with that. The other thing is that it helps with a large number of smaller grants. When you think about what an organization giving away $100 million in a year is thinking about: “If we divided that up into $25,000 grants, how many grants would that mean?” 4,000 grants to analyze, right? If we want to give real thought to each one of those, we can't do that.

But on the flip side, sometimes the smaller grants are the most impactful per dollar, and there are a lot of cases where someone really impressive has an exciting idea for a new foundation or a new organization that could do a lot of good for the world and needs $25,000 to get started. To rent out a small office, to be able to cover salaries for two employees for the first six months. Those are the kind of cases where a pretty small grant can make a huge change in the development of what might ultimately become a really impactful organization. But they're the kind of things that are really hard for our team to evaluate all of, just given the number of them—and the re-grantor program gives us a way to do that. Instead, we have 10, 50, or 100 re-grantors going out and finding a lot of those opportunities close to them; they can then identify and direct those grants—and it gives us a much wider reach. It also biases it less towards people who we happen to know, which is good.

We don't want to just overfund everyone we know and underfund everyone we don't. That's one initiative that I've been pretty excited about, and we're going to keep doing it. Another thing is that we've put a lot of emphasis on making the (application) process smooth and clean. There are pros and cons to this. But it drops the activation energy necessary for someone to decide to apply for a grant and fill out all of the forms. We've really tried to bring more people into the fold.

Adverse selection in philanthropy

Dwarkesh Patel 16:41

If you make it easy for people to fill out your application and generally fund things that other organizations wouldn't, how do you deal with the possibility of adverse selection in your philanthropic deal flow?

Sam Bankman-Fried 16:52

It's a really good question. There's a worry that Bob down the street might see a great bookcase that he wants and wonder if he can get funding for it, since it's going to house a lot of knowledge. Knowledge is good, right? Obviously, we would detect that one pretty quickly. The basic answer is that we still vet all of these; we do have oversight of them. We do a deep dive into all of the large grants, and into randomly sampled subsets of the small ones—which allows us to get a good statistical sense of whether we are facing significant adverse selection. So far, we haven't seen obvious signs of it, but we're going to keep doing these analyses and see if anything worrying comes out of them. That's a way to have trusted analyses for a more scaled-up number of grants.
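As a rough illustration of the sampling logic (a sketch of statistical auditing in general, not Future Fund's actual process; the numbers and the `deep_dive_finds_problem` stub are hypothetical):

```python
import random

def deep_dive_finds_problem(grant: str) -> bool:
    """Stand-in for a manual deep-dive review of one grant."""
    return False  # in reality, a human vets the grant and reports issues

grants = [f"grant-{i}" for i in range(4000)]  # e.g. 4,000 small grants
sample = random.sample(grants, 200)           # deep-dive a random subset

bad_rate = sum(deep_dive_finds_problem(g) for g in sample) / len(sample)
print(f"estimated bad-grant rate ~ {bad_rate:.1%}")
# Rule of three: if 0 problems turn up in 200 random reviews, the true rate
# is plausibly below 3/200 = 1.5% at roughly 95% confidence.
```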

Correlation between different causes

Dwarkesh Patel 18:06

A long time ago, you wrote a blog post about how EA causes are multiplicative, instead of additive. Do you still find that's the case with most of the causes you care about? Or are there cases where some of the causes you care about are negatively multiplicative? An example might be economic growth and the speed at which AI takes off.

Sam Bankman-Fried 18:24

Yeah, I think it's getting more complicated. Specifically around AI, you have a lot of really complex factors that can point in the same direction or in opposite directions. Especially if what you think matters is something like the relative progress of AI safety research versus AI capabilities research, a lot of things are going to have the same impact on both of those, and thus a confusing impact on safety as a whole.

I do think it's more complicated now. It's not cleanly things just multiplying with each other. There are lots of cases where you see multiplicative behavior, but there are cases where you don't have that. The conclusion of this is: if you have multiplicative cases, you want to be funding each piece of it. But if you don't, then you want to try to identify the most impactful pieces and move those along. Our behavior should be different in those two scenarios.

Dwarkesh Patel 19:23

If you think of your philanthropy from a portfolio perspective, is correlation good or bad?

Sam Bankman-Fried 19:29

Expected value is expected value, right? Let's pretend that there is one person in Bangladesh and another one in Mexico. We have two interventions, both 50-50 on saving each of their lives; suppose each is a new drug that we could release to combat a neglected disease. This question is asking, “Are the two drugs correlated in their efficacy?” And my basic argument is, “It doesn't matter, right?” If you think about it from each of their perspectives, the person in Mexico isn't saying, “I only want to be saved in the cases where the person in Bangladesh is or isn't saved.” That's not relevant. They want to live.

The person in Bangladesh similarly wishes to live. You want to help both of them as much as you can. It's not super relevant whether there’s alignment or anti-alignment between the cases where you get lucky and the ones where you don't.
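The underlying principle is linearity of expectation: E[X + Y] = E[X] + E[Y] regardless of any correlation between X and Y. A small simulation (ours, with made-up 50-50 interventions) makes that concrete:

```python
import random

# Two interventions, each 50-50 to save one person. Expected lives saved is
# 1.0 whether the outcomes are independent or perfectly correlated, because
# E[X + Y] = E[X] + E[Y] regardless of correlation.
def lives_saved(correlated: bool) -> int:
    x = random.random() < 0.5
    y = x if correlated else (random.random() < 0.5)
    return int(x) + int(y)

n = 1_000_000
for correlated in (False, True):
    mean = sum(lives_saved(correlated) for _ in range(n)) / n
    print(f"correlated={correlated}: mean lives saved ~ {mean:.3f}")  # both ~ 1.0
```

Correlation changes the variance of the total (all-or-nothing vs. spread out), but not its expectation, which is the point being made.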

Dwarkesh Patel 20:46

What’s the most likely reason that Future Fund fails to live up to your expectations?

Sam Bankman-Fried 20:51

We get a little lame. We give to a lot of decent things, but all the cooler or more innovative things that we do don't seem to work very well. We end up giving to the same things everyone else is giving to. We don't turn out to be effective at starting new things, or at thinking of new causes and executing on them. Hopefully, we'll avoid that. But it's always a risk.

Dwarkesh Patel 21:21

Should I think of your charitable giving, as a yearly contribution of a billion dollars? Or should I think of it as a $30 billion hedge against the possibility that there's going to be some existential risk that requires a large pool of liquid wealth?

Sam Bankman-Fried 21:36

It's a really good question, and I'm not sure. We've given away about $100 million so far this year. We're doing that because we think there are really important things to fund, and to start scaling up those systems. We notice opportunities as they come, and we have systems ready in place to give to them. But it's something we're really actively discussing internally: how concentrated versus diffuse we want that giving to be, and whether to store up for one very large opportunity versus a mixture of many.

Great founders do difficult things

Dwarkesh Patel 22:15

When you look at a proposal and think this project could be promising, but this is not the right person to lead it, what is the trait that's most often missing?

Sam Bankman-Fried 22:22

Super interesting. I am going to ignore the obvious answer, which is that the person is not very good, and look at cases where it's someone pretty impressive but not the right fit for this. There are a few things. One of them is: how much are they going to want to deal with really messy s**t? This is a huge thing! When I was working at Jane Street, I had a great time there. One thing I didn't realize was valuable until I saw the alternative: if I decided that it was a good trade to buy one share of Apple stock on NASDAQ, there was a button to do that.

If you as a random citizen want to buy one share of Apple stock directly on an exchange, it'll cost you tens of millions of dollars a year to get set up. You have to get a physical colo(cation) in Secaucus, New Jersey, have market data agreements with these companies, think about the SIP (the consolidated market-data feed) and the NBBO (National Best Bid and Offer) and whether you're even allowed to trade on NASDAQ, and then build the technological infrastructure to do it. But all of that comes after you get a bank account.

Getting a bank account that's going to work in finance is really hard. I spent hundreds, if not thousands of hours of my life, trying to open bank accounts. One of the things at early Alameda that was really crucial to our ability to make money was having someone very senior spend hours per day in a physical bank branch, manually instructing wire transfers. If we didn't do that, we wouldn't have been able to do the trade.

When you start a company, there are enormous amounts of s**t that look like that. Things that are dumb or annoying or broken or unfair, or not how the world should work. But that's how the world does work. The only way to be successful is to fight through that. If you're going to be like, “I'm the CEO, I don't do that stuff,” then no one's going to do it at your company. It's not going to get done. You won't have a bank account and you won't be able to operate. One of the traits that's incredibly important for a founder and for an early team at a company (but not for everything in life) is willingness to do a ton of grunt work if it's important for the company right then.

Viewing it not as “low prestige” or “too easy” for you, but as, “This is the important thing. This is a valuable thing to do. So it's what I'm going to do.” That's one of the core traits. The other thing is asking whether they're excited about this idea. Will they actually put their heart and soul into it? Or are they going to be not really into it and half-ass it? Those are two things that I really look for.

Pitcher fatigue and the importance of focus

Dwarkesh Patel 25:51

How have you used your insights about pitcher fatigue to allocate talent in your companies?

Sam Bankman-Fried 25:58

Haha. When it comes to pitchers in baseball, there's a lot of evidence that they get worse over the course of a game, partially because it's hard on the arm. It's worth noting that the evidence seems to support the claim that it depends on the pitcher, but in general, you're better off breaking up your outings. It's not just a function of how many innings they've pitched that season, but also how many they've pitched very recently. If you could choose between someone throwing six innings every six days or three innings every three days, you should choose the latter. That's going to get better pitching on average, and just as many innings out of them—and baseball has since moved very far in that direction. The average number of pitches thrown by starting pitchers has gone down a lot over the last 5-10 years.

How do I use that in my company? There's a metaphor here, except this is with computer work instead of physical arm work. You don't have the same effect where your arm is getting sore, your muscles snap, and you need surgery if you pitch too hard for too long. That doesn't directly translate, but there's an equivalent with people getting tired and exhausted. On the other hand, context is a huge, huge piece of being effective. Having all the context in your mind of what's going on, what you're working on, and what the company is doing makes it easier to operate effectively. For instance, if you could have either two half-time employees or one full-time employee, you're way better off with the one full-time employee, because they're going to have more context than either of the part-time employees would have, and thus be able to work way more efficiently.

In general, concentrated work is pretty valuable. If you keep breaking up your work, you're never going to do as great of work as if you truly dove into something.

How SBF identifies talent

Dwarkesh Patel 28:30

You've talked about how you weigh experience relatively little when you're deciding who to hire. But in a recent Twitter thread, you mentioned that being able to provide mentorship to all the people who you hire is one of the bottlenecks to you being able to scale. Is there a trade-off here where if you don't hire people for experience, you have to give them more mentorship and thus can't scale as fast?

Sam Bankman-Fried 28:51

It's a good question. To a surprising extent, we've found that the experience of the people we hire has not had much correlation with how much mentorship they need. Much more important is how they think, how good they are at understanding new and different situations, and how hard they try to integrate how FTX works into their coding. We have by and large found that other things are much better predictors than experience of how much oversight and mentorship someone is going to need.

Dwarkesh Patel 29:35

How do you assess that short of hiring them for a month and then seeing how they did?

Sam Bankman-Fried 29:39

It's tough; I don't think we're perfect at it. But things that we look at are, “Do they understand quickly what the goal of a product is? How does that inform how they build it?” When you're looking at developers, we want people who can understand what FTX is, how it works, and thus what the right way to architect things would be, rather than treating it as an abstract engineering problem divorced from the ultimate product.

You can ask people, “Hey, here's a high-level customer experience or customer goal. How would you architect a system to create that?” That's one thing that we look for. Another is an eagerness to learn and adapt. It's not trivial to test for that, but you can do some amount of it by giving people novel scenarios and seeing how much they break versus how much they bend. That can be super valuable. We specifically search for developers who are willing to deal with messy scenarios rather than wanting a pristine world to work in. Our company is customer-facing and has to interface with third-party tooling. All of those things mean that we have to deal with things that are messy, the way the world is.

Why scaling too fast kills companies

Dwarkesh Patel 31:09

Before you launched FTX, you gave detailed instructions to the existing exchanges about how to improve their system, how to remove clawbacks, and so on. Looking back, they left billions of dollars of value on the table. Why didn't they just fix what you told them to fix?

Sam Bankman-Fried 31:22

My sense is that it's part of a larger phenomenon. One piece of this is that they didn't have a lot of market structure experts; they did not have the talent in-house to think really deeply about risk engines. Also, there were cultural barriers between myself and some of them, which meant that they were less inclined than they otherwise would have been to take it very seriously. But ignoring those factors, there's something much bigger at play here. Many of these exchanges had hired a lot of people and gotten very large. You might think that made them more capable of doing things, with more horsepower. But in practice, most of the time that we see a company grow really quickly and get really big in terms of people, it becomes an absolute mess.

Internally, there are huge diffusion-of-responsibility issues. No one's really taking charge. You can't figure out who's supposed to do what. In the end, nothing gets done. You actually start hitting negative marginal utility of employees pretty quickly: the more people you have, the less total you get done. That had happened to a number of them by the point when I sent them these proposals. Where did they go internally? Who knows. The Vice President of Exchange Risk Operations (not the real one—the fake one operating under some department with an unclear goal and mission) had no idea what to do with it. Eventually, she passed it off to a random friend of hers who was the developer for the mobile app: “You're a computer person, is this right?” They likely said, “I don't know, I'm not a risk person,” and that's how it died. I'm not saying that's literally what happened, but it sounds kind of like what probably happened. It's not like they had people who took responsibility and thought, “Wow, this is scary. I should make sure that the best person in the company gets this,” and passed it to the person who thinks about their risk modeling. I don't think that's what happened.

The future of crypto

Dwarkesh Patel 33:51

There are two ways of thinking about the impact of crypto on financial innovation. One is the crypto-maximalist view that crypto subsumes TradFi. The other is that you're basically stress-testing some ideas in a volatile, fairly unregulated market that you're actually going to bring to TradFi, but this is not going to lead to some sort of decentralized utopia. Which of these models is more correct? Or is there a third model that you think is the correct one?

Sam Bankman-Fried 34:18

Who knows exactly what's going to happen? It's going to be path-dependent. If I had to guess, I would say that a lot of properties of what is happening in crypto today will make their way into TradFi to some extent. I think blockchain settlement has a lot of value and can clean up a lot of areas of traditional market structure. Composable applications are super valuable and are going to get more important over time. In some areas of this, it's not clear what's going to happen. When you think about how decentralized ecosystems and regulation intersect, it's a little TBD exactly where that ends up.

I don't want to state with extreme confidence exactly what will or won't happen. Stablecoins becoming an important settlement mechanism is pretty likely. Blockchains in general becoming a settlement mechanism, collateral clearing mechanism, and more assets getting tokenized seem likely. There being programs written on blockchains that people can add to that can compose with each other seems pretty likely to me. A lot of other areas of it could go either way.

Risk, efficiency, and human discretion in derivatives

Dwarkesh Patel 35:46

Let's talk about your proposal to the CFTC to replace Futures Commission Merchants (FCMs) with algorithmic real-time risk management. There's a worry that without human discretion, you have algorithms that will cause liquidation cascades when they're not necessary. Is there some role for human discretion in these kinds of situations?

Sam Bankman-Fried 36:06

There is! The way that traditional futures market structure works is you have a clearinghouse with a decent amount of manual discretion in it, connected to FCMs, some of which use human discretion and some of which use automated risk-management algorithms with their clients. The smaller the client, the more automated it is. We are inverting that: at the center, you have an automated clearinghouse. Then you connect it to FCMs, which can use discretionary systems when managing their clients.

The key difference here is that, one way or another, a programmatic amount of initial margin has to end up at the clearinghouse, and the clearinghouse acts in a clear, rules-based way. The goal of this is to prevent contagion between different intermediaries. Whatever credit decisions one intermediary makes with respect to their customers don't pose risk to other intermediaries, because someone has to post the collateral to the clearinghouse in the end, whether it's the FCM, their customer, or someone else. It gives clear rules of the road, keeps systemic risk from spreading throughout the system, and contains risk to the parties that choose to take it on: the FCMs that choose to make credit decisions there.

There is a potential role for manual judgment. Manual judgment can be valuable and add a lot of economic value, but it can also be very risky when done poorly. In the current system, each FCM is exposed to all of the manual, bespoke decisions that every other FCM is making. That's a really scary place to be in, and we've seen it blow up. We saw it blow up with LME nickel contracts, and with a few very large traders who had positions at a number of different banks that ended up blowing out. So this provides a level of clarity, oversight, and transparency to the system, so people know what risk they are or are not taking on.
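As a toy sketch of what “programmatic margin at the clearinghouse” could look like (hypothetical and heavily simplified; not FTX's actual risk engine, and the thresholds are invented):

```python
from dataclasses import dataclass

@dataclass
class Account:
    collateral: float         # posted directly at the clearinghouse
    position_notional: float  # size of open derivatives positions

INITIAL_MARGIN = 0.10        # invented: 10% of notional to open or add risk
MAINTENANCE_MARGIN = 0.06    # invented: below this, positions are wound down

def margin_status(acct: Account) -> str:
    """One transparent rule applied to every account,
    with no bespoke per-intermediary credit decisions."""
    ratio = acct.collateral / abs(acct.position_notional)
    if ratio >= INITIAL_MARGIN:
        return "ok: may increase positions"
    if ratio >= MAINTENANCE_MARGIN:
        return "restricted: may only reduce risk"
    return "liquidate: deleveraged programmatically"

print(margin_status(Account(12_000, 100_000)))  # ok
print(margin_status(Account(5_000, 100_000)))   # liquidate
```

The point of the design is that the rule is public and mechanical, so each participant can see exactly what risk the system will and won't tolerate, rather than being exposed to other intermediaries' private credit judgments.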

Dwarkesh Patel 38:29

Are you replacing that risk with another risk? If there's one exchange that has the most liquidity in futures, and it's the one exchange where you're posting all your collateral (across all your positions), then isn't the risk that the single algorithm the exchange is using will determine when and if liquidation cascades happen?

Sam Bankman-Fried 38:47

It's already the case that if you put all of your collateral with a prime broker, whatever that prime broker decides (whether it's an algorithm, a human, or something in between) is what happens with all of your collateral. If you're not comfortable with that, you can choose to spread it out between different venues, or use one venue for some products and another venue for other products. If you cross-collateralize and cross-margin your positions, putting them in the same place, you get capital efficiency. But the downside is that the risk of one can affect the other. There's a balance there, and I don't think it's a binary thing.
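A toy numeric illustration of that trade-off (invented numbers): cross-margining a long and a short that partially hedge each other requires collateral against the net exposure only, while separate venues each margin the gross position.

```python
# Invented example: a $100k long and an $80k short that partially offset.
long_notional, short_notional = 100_000, -80_000
margin_rate = 0.10  # assumed flat 10% margin requirement

# Separate venues: each position margined on its own gross size.
separate = margin_rate * (abs(long_notional) + abs(short_notional))  # $18,000

# One venue, cross-margined: margin against the net exposure.
cross = margin_rate * abs(long_notional + short_notional)            # $2,000

print(f"separate: ${separate:,.0f}  cross-margined: ${cross:,.0f}")
```

The efficiency comes at exactly the cost he notes: with everything in one place, losses on one position can impair the collateral backing the other.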

Dwarkesh Patel 39:28

Given the benefits of cross-margining, and the fact that less capital has to be locked up as collateral, is the long-run equilibrium that a single exchange will win? And if that's the case, then in the long run there won't be that much competition in derivatives?

Sam Bankman-Fried 39:40

I don't think we're going to have a single exchange winning. Among other things, there are going to be different decisions made by different exchanges, which will be better or worse for particular situations. One thing that people have brought up is, “How about physical commodities, like corn or soy? What would your risk model say about that?” It's not super helpful for those commodities right now because it doesn't know how to understand a warehouse. So you might want to use a different exchange with a more bespoke risk model that tries to understand, the way a human would, what physical positions someone has on. That would totally make sense, and it can cause a split between different exchanges.

In addition, we've been talking about the clearinghouse here, but many exchanges can connect to the same clearinghouse. We're already, as a clearinghouse, connected to a number of different DCMs and excited for that to grow. In general, there are going to be a lot of people who have different preferences over different details of the system and choose different products based on that. That's how it should work. People should be allowed to choose the option that makes the most sense for them.

Jane Street vs FTX

Dwarkesh Patel 41:00

What are the biggest differences in culture between Jane Street and FTX?

Sam Bankman-Fried 41:05

FTX has much more of a culture of morphing and taking on a lot of random new s**t. I don't want to say Jane Street is an ossified place or anything; it's somewhat nimble. But it has more of a culture of, “We're going to be very good at this particular thing on a timescale of a decade.” There are some cases where that's true of FTX, because some things are clearly part of our core business for a decade. But there are other things that we knew nothing about a year ago and now have to get good at. There's been more adaptation. It's also a much more public-facing and customer-facing business than Jane Street is, which means that things like PR are much more central to what we're doing.

Conflict of interest between broker and exchange

Dwarkesh Patel 41:56

Now in crypto, you're combining the exchange and the broker—they seem to have different incentives. The exchange wants to increase volume, and the broker wants to better manage risk, maybe with less leverage. Do you feel that in the long run, these two can stay in the same entity given the potential conflict of interest?

Sam Bankman-Fried 42:13

I think so. There's some extent to which they differ, but more so they actually want the same thing, and harmonizing them can be really valuable. Both want to provide a great customer experience. When you have two different entities with two completely different businesses, and you have to go from one to the other, you're going to end up getting the least common denominator of the two as a customer. Everything is going to be supported as poorly as whichever of the two entities supports what you're doing most poorly, and that makes it harder. Whereas synchronizing them gives us more ability to provide a great experience.

Bahamas and Charter Cities

Dwarkesh Patel 42:59

How has living in the Bahamas impacted your opinion about the possibility of successful charter cities?

Sam Bankman-Fried 43:06

It's a good question. It's the first time for me, and it's updated me positively. We've built out a lot of things here that have been impactful, and it's made me feel like it is more doable than I previously would have thought. But it's a lot of work. It's a large-scale project if you want to build out a full city—and we haven't built out a full city yet. We built out some specific pieces of infrastructure that we needed, and we've gotten a ton of support from the country. They've been very welcoming, and there are a lot of great things here. But this is way less of a project than taking a giant, empty plot of land and creating a city on it. That's way harder.

SBF’s RAM-skewed mind

Dwarkesh Patel 43:47

How has having a RAM-skewed mind influenced the culture of FTX and its growth?

Sam Bankman-Fried 43:52

On the upside, we've been pretty good at adapting and understanding what the important things are at any time, and training ourselves quickly to be good at those even if they look very different from what we were doing. That's allowed us to focus a lot on the product, regulation, licensing, customer experience, branding, and a bunch of other things. Hopefully, it means that we're able to take whatever situations come up and provide reasonable feedback and reasonable thoughts on what to do, rather than thinking rigidly in terms of how previous situations went. On the flip side, I need to have a lot of people around me who will remember long-term important things that might get lost day-to-day. As we focus on things that pop up, it's important for me to take time periodically to step back, clear my mind, and remember the big picture: what are the most important things for us to be focusing on?

Please share if you enjoyed this episode! Helps out a ton!




Press play and read along

Runtime: 45m

Transcript

Speaker 1 Today on the Lunar Society podcast, I have the pleasure of interviewing Sam Beckman-Freed, CEO of FTX. Thanks for coming on the Lunar Society.

Speaker 2 Thanks for having me.

Speaker 1 All right, first question. Does the consecutive success of FTX and Alameda, does that suggest to you that the world has all kinds of low-hanging opportunities?

Speaker 1 Or was that a property of the inefficiencies of crypto markets at one particular point in history?

Speaker 2 I think it's probably more of the former. I think there are probably just a lot of inefficiencies.

Speaker 1 So I guess another part of this question is if you had to restart earning to give again, what are the odds you'd become a billionaire, but you couldn't do it in crypto?

Speaker 2 I think,

Speaker 2 I mean, I think they're pretty decent. Like,

Speaker 2 a lot of it depends on what I end up choosing and how sort of like aggressive I end up deciding to be. You know, there are a lot of pretty safe and secure kind of career paths

Speaker 2 before me that that definitely would not have ended there.

Speaker 2 But I think that if I'd sort of

Speaker 2 decided to really dedicate myself to starting up some businesses, there would have been a pretty decent chance of it.

Speaker 1 So that leads to the next question, which is that you've cited Will McCaskill's lunch with you while you were at MIT as being very important in deciding your career.

Speaker 1 He suggested you do earning to give

Speaker 1 by going to a quan firm like Jane Street. In retrospect, given the success you've had as a founder, was that maybe bad advice or maybe you should have advised you to start a startup or a nonprofit?

Speaker 2 I mean, I don't think it was literally the best possible advice in that, like, you know, I mean, that is what, 2012 or something, like, you know, think about starting a crypto exchange would have maybe been a, you know, but, but I think it was definitely helpful advice.

Speaker 2 And I think that, you know, relative to not having gotten advice at all then,

Speaker 2 I think it probably helped quite a bit.

Speaker 1 Right.

Speaker 2 But then there's a broader question of are people like you who could become founders, are they advised to take lower variance, lower risk careers that um in expected value are less valuable yeah i think that's probably true i think it probably people are advised too strongly um to go down safe career paths but i i think it's worth noting that first of all there's a big difference between what makes sense altruistically and personally for this and you know to the extent you're just thinking of personal criteria uh that's going to argue heavily in favor of a safer career path because you have much more quickly declining you know marginal utility of money than the world does um so this is sort of like specifically for altruistically minded people.

Speaker 2 The other thing is that

Speaker 2 when you think about like where or what is it that that sort of like is advising people to choose a safer route,

Speaker 2 I think people will often try and look to, oh, well, what was the career advice that they got?

Speaker 2 What was sort of like, you know, what were sort of these outward facing factors that you can see but i think often the answer has to do something with them and their family um or them and

Speaker 2 their friends or something much more personal. And when we talk with people about what they're thinking about doing with their career,

Speaker 2 personal considerations and the advice of people close to them weighs really, really heavily.

Speaker 2 on what decisions they end up making.

Speaker 1 So I didn't realize that the personal considerations were as important in your case as the advice you got from Ian.

Speaker 2 I don't think in my case, but I think that in the case of many, many people that I talk to, they are.

Speaker 1 So speaking of declining marginal consumption I'm wondering if you think the implication of this is that over the long term all the richest people in the world will be utilitarian philanthropists because they don't have diminishing returns from consumption.

Speaker 1 They're risk neutral.

Speaker 2 I mean I wouldn't say all will but I think there probably is something in that direction where people who are looking at sort of how they can help the world are gonna end up being disproportionately represented amongst the most and maybe least successful.

Speaker 1 All right, let's talk about effective altruism. So in your interview with Tyler Cowan, you were asked what constrains the number of of altruistically minded projects?

Speaker 1 And you answered, probably someone who can start something. Now, is this a property of the world in general, or is this a property of EAs?

Speaker 1 And if it's about EAs, then what do you think is about, is there something about the movement that drives away people who could take leadership roles?

Speaker 2 Oh, I think it's just the world in general.

Speaker 2 I think, you know, even if you ignore altruistic projects and just look at profit-minded ones, we have lots of ideas for businesses that we think would probably do pretty well if they were run quite well, that we'd be excited to fund.

Speaker 2 And the missing ingredient quite frequently for them is the right person or team to take the lead on it.

Speaker 2 And I think that in general, it's just, it's kind of brutal starting something. It's sort of brutal being a founder and it requires a somewhat specific but extensive list of skills.

Speaker 2 And I think that

Speaker 2 those things end up making it generally fairly highly in demand.

Speaker 1 What would it take to get more of those kinds of people to go into EA?

Speaker 2 Yeah, I mean, mean, I think part of it is probably just talking with them about,

Speaker 2 you know, have you thought about what you can do for the world? Have you thought about how you can have impact on the world? Have you thought about how you can maximize your impact on the world?

Speaker 2 And just sort of going down that path, I think a lot would be amenable. I think a lot would be excited about sort of thinking critically and ambitiously about how they can help the world.

Speaker 2 So I think honestly just engagement is one piece of this.

Speaker 2 And then another thing, I think, you know, even within people who are,

Speaker 2 you know, altruistically minded and looking at what would it take for them to be more excited to be founders or to be better at, I think there are still things that you can do.

Speaker 2 And I think some of this is about empowering people and some of this is about normalizing the fact that when you start something, it might fail and that's okay.

Speaker 2 And that, you know, that's how most startups. and especially most very early stage startups.
Obviously, this sort of changes over time, but

Speaker 2 that, you know, when you look at sort of early stage companies,

Speaker 2 you shouldn't be running them. You shouldn't be trying to build them to maximize the chances of having at least a little bit of success.

Speaker 2 But what that means is that you have to be okay with the personal fallout of failing, and that we have to build a community that is okay with that. And I don't think we have that right now.

Speaker 2 I think very few communities do.

Speaker 1 Now, there are many good objections to utilitarianism. As you know, you said yourself that we don't have a good account of infinite ethics.

Speaker 1 Should we attribute substantial weight to the probability that utilitarianism is wrong? And how do you hedge for this moral uncertainty in your giving?

Speaker 2 So I don't think it has super large impact on my giving, partially because in order to do so, you'd have to have sort of a concrete proposal for what else you would do and what that would imply that would be different, you know, actions wise.

Speaker 2 And I don't know that I've sort of been compelled by many of those.

Speaker 2 I do think, though, that there are a lot of things we don't understand right now. And I think one thing that you pointed to is infinite ethics.

Speaker 2 I think another is,

Speaker 2 and I'm not sure this is quite moral uncertainty, this might be physical uncertainty more so than anything else, but you know, there are a lot of sort of chains of reasoning people will go down that I think are like somewhat contingent on our current understanding of the universe in a way which

Speaker 2 might not be right. And certainly, if you look at like expected value outcomes, might not be right.
I think, you know, say what you will about like the size of the universe and what that implies.

Speaker 2 But like, you know, some of the same people who make arguments based on, well, here's how big the universe is, also,

Speaker 2 you know, think there's a, you know, think the simulation hypothesis has decent probability.

Speaker 2 But I think very few people sort of

Speaker 2 chain through them, like, well, okay,

Speaker 2 what would that imply? I don't think it's clear what any of this implies. I think in the end, if I had to say, like, how have these considerations changed my thoughts on what to do?

Speaker 2 The honest answer is that they have changed it a little bit. And I think the direction that they pointed me in is things with moderately more robust impact.
And what I mean by that is

Speaker 2 there's sort of one way that you can

Speaker 2 calculate the expected value of

Speaker 2 an action, which is sort of pretty specific and pretty much like, here's what's going to happen. Here are the two outcomes.
Here are the probabilities of them.

Speaker 2 There's another thing you can do, though, which is to try and say like, all right, like

Speaker 2 it's a little bit more hand-wavy, but it's something like, you know, how much better is it kind of, you know, going to make the world?

Speaker 2 Like, how much does it matter if the world's kind of better in like generic, diffuse ways? And I think typically, you know, EA has been pretty skeptical of that second line of reasoning.

Speaker 2 And I think correctly, because I think that usually when you see that deployed, it's nonsense.

Speaker 2 Like, usually, I think when served people are pretty hard to nail down on like what the specific reason is, they think that something might be good um it's because they haven't thought that hard about it um or don't want to think that hard about it and that you know the the much better analyzed and vetted pathways are the ones that you should be paying more attention to that being said

Speaker 2 i do think that sometimes ea gets too narrow-minded and specific about plotting out sort of like courses of of impact and this is one of the reasons why that people end up sort of fixating on one particular understanding of the universe of ethics, of how things are going to progress.

Speaker 2 But that, you know, all of these things have some amount of uncertainty in them. And when you jostle that,

Speaker 2 some

Speaker 2 sort of like theories of impact and some models behave somewhat robustly under jostling, and some of them completely fall apart.

Speaker 2 And I've become like a little bit more sympathetic to ones that are kind of like a little bit robust under thoughts about what the world ends up looking like.

Speaker 1 So in the 20 May 2022 2022 Oregon congressional election, you gave $12 million to Karek Flynn,

Speaker 1 whose campaign was ultimately unsuccessful. How have you updated your beliefs about the efficacy of political giving in the aftermath?

Speaker 2 Yeah, I mean, you know, it was

Speaker 2 the first time that I'd sort of, you know, given to that scale in a race.

Speaker 2 And, you know, I did it because he was, you know, of all the candidates in the cycle, the most outspoken on the need for more pandemic preparedness and prevention.

Speaker 2 You know, he lost, obviously,

Speaker 2 you know, such is life.

Speaker 2 And

Speaker 2 I think that, you know, in the end, there's some updates. I think lots of sort of miniature updates on efficacy of various things.

Speaker 2 But, you know, also,

Speaker 2 you know, I never thought that the odds were extremely high, that he was going to win. It was always going to be an uncertain, close race.

Speaker 2 There's a limit to how much you can update from a one-time occurrence.

Speaker 2 If you, you know, thought the odds were 50-50 and it turns out being close in one direction or another, there's sort of a maximum of maybe a factor of true update that you have on that.

Speaker 2 And so I think that there were a bunch of sort of micro-updates just on

Speaker 2 specific factors to the race. But I think on a high level,

Speaker 2 I don't think it's sort of changed my perspective on policy that much.

Speaker 1 But does it make you think there are diminishing or possibly negative marginal returns from one donor giving to a candidate because of the the negative PRA agree?

Speaker 2 At some point, yeah, I think that's probably true.

Speaker 1 So continuing on the theme of politics, when is it more effective to give the marginal million dollars to a political campaign or institution to make some change at the government level, you know, like putting in early detection?

Speaker 1 Or when is it more effective to just fund it yourself?

Speaker 2 It's a good question. And, you know, part of this is that it's not necessarily mutually exclusive.

Speaker 3 But

Speaker 2 I think one thing here is looking at what is the scale of the things that need to happen and how much are things like international international cooperation important for it?

Speaker 2 When you look at pandemic prevention, you know, we're talking tens of billions of dollars

Speaker 2 of scale necessary to start putting this infrastructure in place. So, it's a pretty big scale thing,

Speaker 2 which is hard to fund

Speaker 2 individually.

Speaker 2 And it's also something where we're going to need to have cooperation between different countries on

Speaker 2 what their

Speaker 2 surveillance for new pathogens look look like and on vaccine distribution, right? Like if you, you know, if some countries sort of

Speaker 2 have great distribution of vaccines and others don't, that's not good.

Speaker 2 It's both not fair and not equitable to the countries that end up getting hit hardest, but also in a global pandemic, it's going to spread. And so you need to have global coverage.

Speaker 2 And so I think that's another reason that government likely has to be involved, at least to some extent, in the efforts.

Speaker 1 Let's talk about future fund. So as you know, there are already many existing effective altruist organizations that do donations.

Speaker 1 What is the reason you thought there was more value in creating a new one? What's your edge?

Sam Bankman-Fried
Part of it is that I just think there's value in having multiple organizations. Every organization is going to have its blind spots, and you can help cover those if you have a few. If OpenPhil didn't exist, maybe we would have created an organization that looks more like OpenPhil. But there is some extent to which they're already covering a lot of what they're looking at. We're looking at overlapping, but not identical, things, and I think having that diversity can be valuable.

But to point to the ways in which we intentionally designed it to be a little bit different from existing donors: one thing that I've been really happy about has been the regranting program. We have a number of people who are experts in various areas, and we've basically given them pots of money that they can regrant. Why do we think this is valuable? One reason is just giving more stakeholders a chance to voice their opinions, in a way where we can't possibly listen to everyone in the world directly and integrate all those opinions to come up with the perfect set of answers. Distributing the money and letting regranters act semi-autonomously can help with that.

The other reason is that it really helps with large numbers of smaller grants. Think about what an organization giving away $100 million in a year is dealing with: if we divided that up into $25,000 grants, how many grants would that mean? That would mean 4,000 grants, which is a lot of grants to analyze. If we want to give real thought to each one of those, we can't do it. But on the flip side, sometimes the smaller grants are the most impactful per dollar. There are a lot of cases where someone really impressive has a really exciting idea for a new foundation or organization that could do a lot of good for the world and needs $25,000 to get it started: to rent out a small office, to cover salaries for two employees for the first six months. Those are the kinds of cases where a pretty small grant can make a huge change in the development of what might ultimately become a really impactful organization. But they're the kind of thing that's really hard for our team to evaluate in full, just given the number of them. The regranter program gives us a way to do that: if instead we have 10, 50, 100, maybe eventually more regranters who are going out and finding a lot of those opportunities close to them, they can identify those opportunities and direct those grants. It gives us a much wider reach, and it also biases the giving less towards people we happen to know, which is good. We don't want to overfund everyone we happen to know and underfund everyone we don't. So that's been one initiative I've been pretty excited about, and I think we're going to keep doing it.

Another thing is that we've really put a lot of emphasis on making the process smooth and clean. There are pros and cons to this, but I do think it drops the activation energy necessary for someone to decide to apply for a grant, fill out all the forms, and so on. And so we've really tried to bring more people into the fold in terms of potential recipients.
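To make the scale problem concrete, here's a back-of-the-envelope sketch in Python; the reviewer-capacity figures are invented for illustration, not numbers from the conversation:

```python
# Back-of-the-envelope: why many small grants overwhelm a central team.
annual_budget = 100_000_000   # $100M given away in a year
avg_small_grant = 25_000      # the "get a new org started" grant size

num_grants = annual_budget // avg_small_grant
print(num_grants)             # 4000 grants to evaluate

# Invented capacity numbers for illustration: a central team of 10
# reviewers, each giving real thought to ~2 grants per working day.
reviewers, grants_per_day, working_days = 10, 2, 250
print(reviewers * grants_per_day * working_days)  # 5000: nearly all capacity consumed

# With 100 semi-autonomous regranters, each only needs to source and
# vet ~40 grants a year, in areas they already know well.
print(num_grants / 100)       # 40.0
```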

Adverse selection in philanthropy

Dwarkesh Patel
If you make it easy for people to fill out your application, and you're generally funding things that other organizations wouldn't, how do you deal with the possibility of adverse selection in your philanthropic deal flow?

Sam Bankman-Fried
It's a really good question. And of course, that's a worry: Bob down the street might see a great bookcase that he wants and think, "oh man, I wonder if I can get funding for this bookcase. It's going to house a lot of knowledge. Knowledge is good, right?" Obviously, we would not fund that, and that one, I think, we would detect pretty quickly. The basic answer is that we still have oversight of all of these grants. But what we also do is really deep dives, both into the large grants and into samplings of the small ones. We do some oversight of all of them, but we will do really deep dives into randomly sampled subsets, which allows us to get a pretty good statistical sense of whether we're facing significant adverse selection. So far we haven't seen obvious signs of it, but we're going to keep doing these analyses and see if anything worrying comes out of them. That's a way to have more trusted analyses for more scaled-up numbers of grants.
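A minimal sketch of the random-sampling audit he describes; the grant records, the `deep_dive` review function, and the sample size are all hypothetical:

```python
import random

def audit_for_adverse_selection(grants, deep_dive, sample_size=50, seed=0):
    """Deep-dive a random subset of grants and report the failure rate.

    `grants` is a list of grant records; `deep_dive` is a (hypothetical)
    expensive review returning True if the grant looks bad. Reviewing a
    uniform random sample gives an unbiased estimate of the bad-grant
    rate across the whole portfolio without reviewing every grant.
    """
    rng = random.Random(seed)
    sample = rng.sample(grants, min(sample_size, len(grants)))
    bad = sum(deep_dive(g) for g in sample)
    return bad / len(sample)  # estimated share of problematic grants

# Toy data: flag grants whose stated budget looks suspicious.
toy_grants = [{"amount": a} for a in [25_000, 24_780, 99_999, 25_000] * 250]
rate = audit_for_adverse_selection(toy_grants, lambda g: g["amount"] == 99_999)
print(f"estimated bad-grant rate: {rate:.1%}")
```

With a sample of 50, the standard error on the estimated rate is roughly sqrt(p(1-p)/50), so repeated rounds of sampling would surface adverse selection affecting more than a few percent of grants.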

Correlation between different causes

Dwarkesh Patel
A long time ago, you wrote a blog post about how EA causes are multiplicative instead of additive, and we talked about that a little while ago. Do you still find that that's the case with most of the causes you care about? Or are there cases where some of the causes you care about are negatively multiplicative? An example might be economic growth and the speed at which AI is developed.

Sam Bankman-Fried
Yeah, I think it's getting more complicated. Specifically around AI, you have a lot of really complex factors that point sometimes in the same direction, sometimes in opposite directions. And especially if what you think matters is something like the relative progress of AI safety research versus AI capabilities research, a lot of things are going to have the same impact on both of those, and thus a confusing impact on safety as a whole. So I do think it's more complicated now, and it's not cleanly things just multiplying with each other. There are lots of cases where you see multiplicative behavior, but there are also cases where you just don't have that. The conclusion is: if you do have multiplicative cases, you probably want to be funding each piece; if you don't, then you probably want to be identifying the most impactful pieces and specifically moving those along. So our behavior should be different in those two scenarios.
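A stylized way to see why the funding strategy flips (a toy model, not anything stated in the conversation): split a budget B between two causes, spending x on one and y on the other with x + y = B.

```latex
% Multiplicative impact: U(x, y) = xy. With x + y = B, the maximum is at
% x = y = B/2 (and U = 0 if either input gets nothing), so fund every piece:
\[
  \max_{x + y = B} \; xy \;=\; \Bigl(\tfrac{B}{2}\Bigr)^{2}
  \quad \text{at } x = y = \tfrac{B}{2}.
\]
% Additive impact: U(x, y) = ax + by with a > b. The maximum is at the
% corner, so put the whole budget on the single most impactful piece:
\[
  \max_{x + y = B} \; (ax + by) \;=\; aB
  \quad \text{at } x = B,\; y = 0.
\]
```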

Dwarkesh Patel
If you think of your philanthropy from a portfolio perspective, is correlation good or bad?

Sam Bankman-Fried
I mean, expected value is expected value, right? Maybe here's one way to think about this. Let's pretend that there's one person in Bangladesh and another one in Mexico, and we have two interventions, each 50-50 on saving one of their lives: some new drug that we could help release to combat a neglected disease. And then there's this question of, well, are they correlated? Are these two drugs correlated in their efficacy? My basic argument is that it doesn't matter. Because think about it from each of their perspectives. The person in Mexico isn't saying, "I only want to be saved in the cases where the person in Bangladesh is, or isn't, saved." That's not relevant. They're saying, "I would like to live." And the person in Bangladesh similarly says, "I would like to live." You want to help both of them as much as you can, and it's not super relevant whether there's alignment or anti-alignment between the cases where you get lucky and the ones where you don't.
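The argument is linearity of expectation: correlation moves the variance of outcomes, not the mean. A quick toy simulation (numbers invented):

```python
import random

def expected_lives_saved(correlated, trials=100_000, seed=0):
    """Each intervention works 50-50. If `correlated`, they succeed or
    fail together; otherwise independently. Returns mean lives saved."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a = rng.random() < 0.5
        b = a if correlated else (rng.random() < 0.5)
        total += a + b
    return total / trials

print(expected_lives_saved(correlated=True))   # ~1.0
print(expected_lives_saved(correlated=False))  # ~1.0
# Correlation changes the distribution (all-or-nothing vs. spread out),
# but E[A + B] = E[A] + E[B] = 0.5 + 0.5 = 1 either way.
```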

Dwarkesh Patel
What's the most likely reason that the Future Fund fails to live up to your expectations?

Sam Bankman-Fried
I think we just kind of get a little lame. We give to a lot of decent things, but all the cooler, more innovative things we do just don't seem to work very well, and we end up giving the same way, in the same places, that everyone else is giving, wherever that ends up being. We don't turn out to be effective at starting new things, at thinking of new causes, or at executing on them. Hopefully we'll avoid that, but it's always a risk.

Dwarkesh Patel
So should I think of your charitable giving as a yearly contribution of a billion dollars, or less, or more? Or should I think of it as a $30 billion hedge against the possibility that there's going to be some existential crisis that requires a large pool of liquid wealth?

Sam Bankman-Fried
It's a really good question. I'm not sure. We're going to start giving some; we already have. We've given away about $100 million so far this year. And we're going to keep doing that, partially because we think there are really important things to fund, and partially because we want to start scaling up those systems and that process so that we're ready, so that we notice opportunities as they come by and have systems in place to give to them. But it's something we're really actively discussing internally: how concentrated versus diffuse we want that giving to be, and how much we want to be storing up for one very large opportunity versus a mixture of many.

Great founders do difficult things

Dwarkesh Patel
When you look at a proposal and you think, "this project could be promising, but this is not the right person to lead it," what is the trait that's most often missing?

Sam Bankman-Fried
Super interesting. I'm going to ignore the obvious answer set, which is "the guy's just not very good." Sure, fine. Let's look at cases where someone is pretty impressive, but I still think they're not the right fit for this.

I think there are a few things. One of them is: how much are they going to want to deal with really messy shit? This is a huge thing. Maybe to give some examples: when I was working at Jane Street, it's a really great place, and I had a great time there. One thing I didn't even realize was valuable there, until I saw the alternative and saw what things could look like outside, was that if I decided it was a good trade to buy one share of Apple stock on NASDAQ, there's a button to do that. If you, as a random citizen, want to buy one share of Apple stock directly on an exchange, it'll cost you tens of millions of dollars and a year to get set up to be able to do that. You've got to get a physical colo, maybe in Secaucus, New Jersey. You have to have market data agreements with these companies. You have to think about the SIP and about the NBBO and whether you're even allowed to lift on NASDAQ right then. You have to build technological infrastructure to do it. And all of that comes after you get a bank account. Let's talk about that stuff: getting a bank account that's going to work in finance is really hard. I've probably spent hundreds, if not thousands, of hours of my life trying to open bank accounts. One of the things at early Alameda that was really crucial to our ability to make money was having someone very senior spend hours per day in a physical bank branch, manually instructing wire transfers. If we hadn't done that, we wouldn't have been able to do the trade.

When you start a company, there are enormous amounts of shit that look like that: things that are dumb or annoying or broken or unfair, not how the world should work, but how the world does work. And the only way to be successful is to fight through that. If you're going to say, "ah, whatever, I'm the CEO, I don't do that stuff," then no one's going to do it at your company. It's not going to get done. You won't have a bank account and you won't be able to operate. So one of the biggest traits that I think is incredibly important for a founder, and for an early team at a company, but that is not necessarily important for everything you might want to do in life, is being willing to do a ton of grunt work if that's what's important for the company right then, and viewing it not as low prestige or too easy for you, but as: whatever, this is the important thing, this is the valuable thing to do, so it's what I'm going to do. That's one of the core traits.

And the other one is: are they excited about this idea? Will they actually put their heart and soul into it? Or are they going to be a little bit drifting and bored, not really into it, and half-ass it? Those are two things that I really look for.

Pitcher fatigue and the importance of focus

Dwarkesh Patel
How have you used your insights about pitcher fatigue to allocate talent in your companies?

Sam Bankman-Fried
I haven't thought about pitcher fatigue in a while, but my thesis back then, which I still think is probably true, is that when it comes to pitchers in baseball, there's a lot of evidence that they get worse over the course of a game. The more innings they pitch, the worse they get. Partially this is just that it's hard on the arm. But it's worth noting that the evidence seems to support the claim that, while it depends on the pitcher, in general you're better off breaking up their outings. It's not just a function of how many innings they've pitched that season, but also how many extremely recently. So if you could choose between someone throwing six innings every six days or three innings every three days, you should probably choose the latter. That's probably going to get you better pitching on average and just as many innings out of them. And, for what it's worth, baseball has actually moved very far in that direction since then: the average number of pitches thrown by starting pitchers is down a lot over the last five to ten years.

How do I use that in my company? Well, there's a metaphor here, but I actually think I've gone in the opposite direction, if anything. My sense is that with computer work, as opposed to physical work with your arm, you don't have the same effect where your arm gets sore and eventually your muscle snaps and you need surgery if you pitch too hard; that doesn't directly translate. There's a bit of an equivalent in people getting tired and exhausted. But on the other hand, context is a huge, huge piece of being effective. Having all the context in your mind of what's going on, of what you're working on, of what the company's doing, makes it way easier to operate effectively. If you could have either two half-time employees or one full-time employee, you're way better off with the one full-time employee, because they're going to have way more context than either of the part-time employees and thus be able to work way more efficiently. So in general, our experience has actually been that concentrated work is pretty valuable, and that if you keep breaking up your work, then, whatever, it depends on the person and the context, but in general you're never going to be able to do as great work as if you really dove into something.

How SBF identifies talent

Dwarkesh Patel
You've talked about how you weight experience relatively little when you're deciding who to hire. But in a recent tweet, you mentioned that being able to provide mentorship to all the people who come on is one of the bottlenecks to your ability to scale. Is there a trade-off here where, if you don't hire people for experience, then you have to give them more mentorship and thus can't scale as fast?

Sam Bankman-Fried
It's a good question. But to a surprising extent, we've found that the experience of the people we hire has not much correlation with how much mentorship they need. Much more important is how they think, how good they are at understanding new and different situations, and how good they are, and how hard they try, at integrating those into their understanding of, say, coding, or their understanding of how FTX works. So by and large, we've actually found that other things are much better predictors of how much oversight, management, and mentorship someone is going to need than their experience in similar-looking roles.

Dwarkesh Patel
And how do you assess that, short of hiring them for a month and then seeing how they did?

Sam Bankman-Fried
It's tough. I don't think we're perfect at it. But here are things that we look at. Do they understand quickly what the goal of a product is, and how does that inform how they build it? When we're looking at developers, we really strongly want people who can understand what FTX is, how it works, and thus what the right way to architect things for it would be, rather than treating it as an abstract engineering problem divorced from whatever the ultimate product is. And that's something you can ask people about: "hey, here's a high-level customer experience or customer goal; how would you architect a system to create that?" So that's one thing we look for.

Another is just an eagerness to learn and to adapt. It's not trivial to test for that, but you can do some amount of it. You can try giving people novel scenarios and see how much they break versus how much they bend. I think that can be super valuable as well. And we also specifically search for developers who are willing to deal with messy scenarios, rather than wanting a pristine world to work in. Because our company is customer-facing, it has to interface with third-party tooling, and we've been a quickly growing company. All of those things mean we have to interface with things that are messy in the way the world is messy.

Why scaling too fast kills companies

Dwarkesh Patel
Now, before you launched FTX, you gave detailed instructions to the existing exchanges about how to improve their systems, how to remove clawbacks, and so on. Looking back, they left billions of dollars of value on the table. Why do you think that was? Why didn't they just fix what you told them to fix?

Sam Bankman-Fried
Yeah, it's a really interesting question. My sense is that it's part of a larger phenomenon. Okay, one piece of this is just that they didn't have a lot of market structure experts. They just did not have the talent in-house to be able to think really well and deeply about risk engines. And there were also cultural barriers between myself and some of them, which I think probably meant they were less inclined than they otherwise would have been to take it very seriously.

But ignoring those factors, I think there's something much bigger at play, which is that many of these exchanges had hired a lot of people. They'd gotten very large. And you might think that meant they were more able to do things, because they had more horsepower. But in practice, most of the time we see a company grow really fast, really quickly, and get really big in terms of number of people, it becomes an absolute mess internally. There are huge diffusion-of-responsibility issues, no one's really taking charge, you can't figure out who's supposed to do what, and in the end, nothing gets done. You actually start hitting negative marginal utility of employees pretty quickly, where the more people you have, the less total you get done.

I think that happened to a number of them, to the point where, yeah, I sent them these proposals. Where did they go internally? Who knows? To, like, the vice president of exchange risk operations, but not the real one, the sort of fake one operating under some department with an unclear goal and mission, who had no idea what to do with it and eventually just passed it off to a random friend of hers who was the developer for the mobile app: "you're a computer person, is this right?" And they're like, "I have no idea. I'm not a risk person." And that's how it died. I'm not saying that's literally what happened, but something kind of like that probably did. It's not like they had people who took responsibility, who saw this and said, "wow, this is scary, I should make sure the best person in the company gets this," passed it to the CTO and the person who thinks about their risk modeling, and asked, "hey, is this thing scary?", and they looked at it and said, "wow, this might be a problem." I don't think that's what happened.

The future of crypto

Dwarkesh Patel
Now, there are two ways of thinking about the impact of crypto on financial innovation. One is the crypto-maximalist view that crypto subsumes TradFi. The other is that what you're basically doing is stress-testing some ideas in a volatile, fairly unregulated market, ideas you're actually going to bring back to TradFi, but that this is not going to lead to some sort of decentralized utopia. Which of these models is more correct? Or is there a third model that you think is the right way to look at it?

Sam Bankman-Fried
First of all, who knows, right? Who knows exactly what's going to happen? It's going to be path-dependent. But if I had to guess, I would say a lot of properties of what's happening in crypto today will probably make their way into TradFi to some extent. I think blockchain settlement has a lot of value and can clean up a lot of areas of traditional market structure. And I think composable applications are super valuable and are going to get more important over time.

There are some areas where it's not clear what's going to happen. When you think about how decentralized ecosystems and regulation intersect, it's a little bit TBD exactly where that ends up. So I don't want to state with extreme confidence exactly what will or won't happen, but some pieces of this seem pretty likely to me. Stablecoins becoming an important settlement mechanism is pretty likely. Blockchains in general becoming a settlement and collateral-clearing mechanism seems decently likely to me. More and more assets getting tokenized seems decently likely to me. And programs written on blockchains that people can add to and that can compose with each other seem pretty likely to me. A lot of other areas of it, I think, could go either way.

Risk, efficiency, and human discretion in derivatives

Dwarkesh Patel
Let's talk about your proposal to the CFTC to replace futures commission merchants with algorithmic, real-time risk management. There's a worry that without human discretion, you'll have algorithms that cause liquidation cascades when they weren't necessary. Is there some role for human discretion in these kinds of situations?

Sam Bankman-Fried
There is. And the way I think about it is: the way traditional futures market structure works, you have a clearinghouse with a decent amount of manual discretion in it, connected to FCMs, some of which use human discretion and some of which use automated risk-management algorithms with their clients.

And generally, the smaller the client, the more automated it is. We're inverting that to some extent: at the center you have an automated clearinghouse, potentially connected to FCMs, which could use discretionary systems when managing their clients. The key difference is that, one way or another, initial margin has to end up at the clearinghouse, a programmatic amount of it, and the clearinghouse acts in a clear way. The goal of this is, first of all, to prevent contagion between different intermediaries. Whatever credit decisions one intermediary makes with respect to its customers don't pose risk to other intermediaries, because someone has to post the collateral to the clearinghouse in the end, whether it's the FCM, their customer, or someone else. So it gives clear rules of the road, keeps systemic risk from spreading throughout the system, and contains risk to the parties that choose to take it on: to FCMs that choose to make credit decisions there.

So I think there is a potential role for manual judgment. It can be really valuable and add a lot of economic value, but it can also be very risky when done poorly. In the current system, each FCM is exposed to all of the manual, bespoke decisions that every other FCM is making, and that's a really scary place to be. We've seen it blow up. We saw it blow up with LME nickel contracts, and we saw it blow up in other cases, with a few very large traders who had positions on at a number of different banks and ended up blowing out. So I think this provides a level of clarity, oversight, and transparency to the system, so that people know what risk they are or are not taking on.
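As a toy illustration of "a programmatic amount of initial margin, applied in a clear way" (the margin rules and rates here are invented for illustration; this is not FTX's actual risk engine or the text of its CFTC proposal):

```python
# Toy programmatic margin check: deterministic rules, no human discretion.
INITIAL_MARGIN_RATE = 0.10      # collateral required to open: 10% of notional
MAINTENANCE_MARGIN_RATE = 0.06  # below this, liquidation begins automatically

def required_initial_margin(position_notional: float) -> float:
    """Every participant posts the same programmatic amount up front."""
    return abs(position_notional) * INITIAL_MARGIN_RATE

def check_account(collateral: float, position_notional: float) -> str:
    """Applied identically to all accounts, in real time."""
    maintenance = abs(position_notional) * MAINTENANCE_MARGIN_RATE
    if collateral >= maintenance:
        return "ok"
    # A deterministic, incremental rule rather than a bespoke credit call:
    return "liquidate incrementally until collateral >= maintenance margin"

print(required_initial_margin(1_000_000))  # 100000.0 posted at the clearinghouse
print(check_account(50_000, 1_000_000))    # below 60k maintenance -> liquidate
```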

Dwarkesh Patel
Are you replacing that risk with another risk? If there's one exchange that has the most liquidity in futures, and one exchange where you're posting all your collateral across all your positions, then the risk is that that single algorithm the exchange is using determines when and whether liquidation cascades happen.

Sam Bankman-Fried
It's already the case that if you put all of your collateral with a prime broker, then whatever that prime broker decides, whether it's an algorithm or a human or something in between, is going to determine what happens with all of your collateral. If you're not comfortable with that, you can choose to spread it out between different venues, or use one venue for some products and another venue for other products, if you don't want to cross-collateralize and cross-margin your positions. You generally get capital efficiency from cross-margining, from putting positions in the same place, but the downside is that the risk of one can affect the other. There's a balance there, and I don't think it's a binary thing.
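A toy example of the capital-efficiency trade-off he's describing, with invented numbers:

```python
# Two offsetting positions: long $1M of futures, short $1M of a closely
# correlated product. Margin rate 10% of notional (illustrative).
MARGIN_RATE = 0.10
long_notional, short_notional = 1_000_000, -1_000_000

# Separate venues: each venue margins its own position in isolation.
separate = MARGIN_RATE * (abs(long_notional) + abs(short_notional))

# Cross-margined at one venue: the hedged positions net against each
# other, so margin is charged on the net exposure.
cross = MARGIN_RATE * abs(long_notional + short_notional)

print(separate)  # 200000.0 locked up across two venues
print(cross)     # 0.0 net exposure -> far less collateral locked up
# The flip side: with everything at one venue, one risk engine's
# decisions govern all of your collateral at once.
```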
Dwarkesh Patel
Okay. But given the benefits of cross-margining, and the fact that less capital has to be locked up as collateral, is the long-run equilibrium that a single exchange wins? And if that's the case, then in the long run, won't there not be much competition in derivatives?

Sam Bankman-Fried
I mean, you could already see that happening if it were going to, and you haven't. I don't think we're going to have a single exchange winning. Among other things, there are going to be different decisions made by different exchanges, which will be better or worse for particular situations. One thing people have brought up is: what about physical commodities, like corn or soy? What would our risk model say about that? And the answer is that it's not super helpful for those commodities right now, because it doesn't know how to understand a warehouse. So you might want to use a different exchange, which had a more bespoke risk model that tried to understand, had a human understand, what physical positions someone had on. I think that would totally make sense, and that can cause a split between different exchanges.

In addition, we've been talking about the clearinghouse here, but many exchanges can connect to the same clearinghouse. We are already, as a clearinghouse, connected to a number of different DCMs, and I'm excited for that to continue to grow out. In general, there are going to be a lot of people who have different preferences over different details of the system and will choose different products based on that. I think that's how it should work: people should be allowed to choose the option that makes the most sense for them.

Jane Street vs FTX

Dwarkesh Patel
What are the biggest differences in culture between Jane Street and FTX?

Sam Bankman-Fried
I think FTX has much more of a culture of morphing and taking on a lot of random new shit. Jane Street, I don't want to say it's an ossified place or anything, it is somewhat nimble, but it has more of a culture of "we're going to be very good at this particular thing on a time scale of a decade." There are some cases where that's true of FTX, because some things are clearly part of our core business for a decade. But there are other things that we knew nothing about a year ago and all of a sudden have to get good at. So I think there's been more adaptation. It's also a much more public-facing and customer-facing business than Jane Street is, which means there are lots of things, like PR, that are much more central to what we're doing.

Conflict of interest between broker and exchange

Dwarkesh Patel
Now, in crypto, you're combining the exchange and the broker. They seem to have different incentives: the exchange wants to increase volume; the broker wants to better manage risk, maybe with less leverage. Do you feel that in the long run these two can stay in the same entity, given the conflict of interest, or potential conflict of interest?

Sam Bankman-Fried
I think so. There's some extent to which they differ, but there are, I think, more extents to which they actually want the same thing, and harmonizing them can be really valuable. One thing they both want is to provide a great customer experience. When you have two different entities with two completely different businesses, but every order has to go from one to the other, you're going to end up getting the least common denominator of the two as a customer. Everything is going to be supported as poorly as whichever of the two entities supports what you're doing most poorly, and that makes it harder. Whereas synchronizing them gives us much more ability to provide a great experience.

Bahamas and Charter Cities

Dwarkesh Patel
How has living in the Bahamas impacted your opinion about the possibility of successful charter cities?

Sam Bankman-Fried
It's a good question. I think it's updated me positively a little bit. We've built out a lot of things here, and that's hopefully been impactful, and it's made me feel like it's more doable than I previously would have thought. But it's also a lot of work. It's a large-scale project, and we have not built out a full city; we've built out some specific pieces of infrastructure that we needed. We've gotten a ton of support from the country, they've been very welcoming, and there are a lot of great things here already. So this is way less of a project than taking a giant empty plot of land and creating a city on it. That's way harder.

SBF's RAM-skewed mind

Dwarkesh Patel
How has having a RAM-skewed mind influenced the culture of FTX and its growth?

Sam Bankman-Fried
It's a good question. I think what it means on the upside is that we've been pretty good at adapting, pretty good at understanding what the important things are at any given time, and at training ourselves quickly to be good at those, even if they look very different from what we were doing before. That's allowed us to focus a lot on product, on regulation and licensing, on customer experience, on branding, and on a bunch of other things. And hopefully it means we're able to take whatever situations come up and provide reasonable feedback about them and reasonable thoughts on what to do, rather than thinking more rigidly in terms of how previous situations went.

On the flip side, I think it means I have to have a lot of people around me who will try to remember the long-term important things that might get lost day to day as we focus on whatever pops up. And it's important for me to take time periodically to step back, clear my mind a little bit, and just think: all right, let's remember what the big picture is here. What are the most important things for us to be focusing on?