Auren Hoffman (00:02.498)
Hello, fellow data nerds. My guest today is Peter Thiel. Peter was the co-founder and CEO of PayPal, first investor in Facebook, and co-founder of Palantir. He's also the founder and managing partner of Founders Fund and the author of Zero to One, which I think is one of the best business books ever written. Peter, welcome to World of DaaS. Now, when starting a company, how important is total addressable market?
Thanks for having me, Auren.
Well, it's always a necessary component. If there's no market at all, that's pretty bad. But there are also ways people can overstress something like TAM. And I always think that if you have a TAM that's too big, it obviously obfuscates other questions, which may be more important. If you have a very big...
total addressable market, you probably have a lot of competition, and that may create bigger challenges than having a small market. So, I don't know, the restaurant business has an enormous TAM and it's a terrible business to go into. And I think a lot of the energy business functions in a nearly commodified way, which is why it tends to be a business where it's very hard for startups
to find a really differentiated niche. So I often think the best TAM narratives are ones where you have some sort of tight, fairly narrow TAM for the initial market, and then there's some kind of expansion capability. When PayPal launched, our initial TAM was power sellers on eBay. It was like 20,000 or so
people in early 2000, and we got to 30, 40% market share in about three months. In a sense, it was a tiny TAM, but you could get a foothold, you could build a fortress there, and then from that base expand. And that's...
Auren Hoffman (02:07.234)
And how do you know if there's a way to expand from that base? Is that just based on how talented the team is, or on how important that base or center point is?
Well, there are all sorts of intuitions. In a payments context, there were all these payments people made on eBay and off eBay. eBay itself was growing. We would gain share on eBay. So there were a lot of natural ways to expand from it. In the case of PayPal, it felt at the end of the day like it was hard to radically expand beyond eBay, and so in 2002 we sold the company to eBay.
It was a very powerful initial TAM. It had a lot of expansion capability, but then crossing the chasm to the off-eBay market turned out to be quite hard to do. And so in the context of PayPal, it probably got us to an M&A exit.
Auren Hoffman (03:05.718)
You think creating monopolies is very important. How do you think about that in relation to the venture capital context? Outside of Y Combinator, it seems like the rest of the VC world is super competitive.
Well, yeah, I always think that if you think about monopolies or moats or businesses with margins, you can think of a layer one, which is the companies themselves. And then a second layer is the financial investor layer. And ideally, as a VC, you want to invest in companies that are unique, that don't have too much competition, nature red in tooth and claw,
but you also don't want to be competing too much with other VCs. And so, I don't know, you can analyze it in much the same way as companies. What are the kinds of monopolies: ones driven by brand, by network effects, sometimes by economies of scale, sometimes by unique technology. And the question is, are there analogous things for venture capital firms, where there's some combination of a brand?
Founders Fund had a brand of being founder friendly. We started companies ourselves; we're on the side of the founder. That was a very differentiating brand when we started back in 2005. Fast forward to 2023, lots of other people are saying it, so maybe it's a little bit more cluttered. You have to try to assess that. And then obviously there are network effects, where you're networked to some companies and that gives you some sort of idiosyncratic deal flow. But then I think, in practice,
it often has to be reinvented on a deal-by-deal basis. And the poker analogy I always like to use: when you're investing in a company, it's always, what do we understand about this that other investors do not? If you don't know the answer to that question, then you're the sucker. It's like poker, where you have to figure out who's the mark at the table; if you can't figure it out, it's you.
Auren Hoffman (05:03.618)
Then you're the sucker.
Auren Hoffman (05:15.127)
Now, a decade ago, you were definitely one of the first people to call out that science wasn't progressing as fast as it had in the past. How has that argument fared in the 10 years since you became known for making those statements?
Yeah, I think I started talking about the tech stagnation probably around 2008. And the claim is that it's been slow for, at this point, running on five decades, really since the 1970s, that we've had limited innovation in the world of atoms. You know, when I was an undergraduate at Stanford in the late 1980s, it was a mistake to major.
Auren Hoffman (05:34.986)
Okay, so even earlier.
in anything having to do with atoms: mechanical engineering, chemical engineering, nuclear engineering, aero-astro. Terrible decisions. The only thing that really worked was the world of bits: computers, internet, mobile internet, software. To some extent, electrical engineering was still okay, but really computer science was the one place where there continued to be this cone of progress around computers. And then we can debate
how big that was; it certainly was very unbalanced. And I think it led to some great companies. It's debatable how much it improved GDP relative to what we'd seen, say, in the first half of the 20th century in the United States or the other developed countries. So I broadly think that we've had this broad stagnation in technology and science for something on the order of 50 years.
Auren Hoffman (06:55.098)
Is the stagnation accelerating? Is it decelerating, and is there now a new wave? Where do you feel we are on that stagnation continuum?
You know, I think in some ways we are roughly in the same place we've been for the last quarter century, where there's a decent amount of progress in the world of bits. We had an enormous internet wave in the late 90s; maybe the late 90s were early, maybe we're now late in the internet, but that was a giant thing in computers. And we now have the start of some sort of real AI wave, even though it's been talked about in some ways for decades. But I think the LLMs, ChatGPT, it's probably a breakthrough that I would rank as on par with the internet itself. It is very big. And so I think in this world of computers, we can say that
the progress is continuing in fits and starts, but at still a pretty decent pace. And then it's everything else that's been much slower, much harder to invest in, and much further from the science fiction future of the Jetsons.
Auren Hoffman (08:32.33)
You made an argument in your piece in The New Criterion that wokeness is kind of a smokescreen for the lack of scientific progress. Can you unpack that a bit?
Well, there's a lot there. It was sort of an anti-woke and an anti-anti-woke argument I was making, where I think there are endless debates we can have about DEI, wokeness, political correctness, multiculturalism, all these topics. On some level, I think they're important. On some level, I would advocate for certain views on them. And I think the debates are important to have.
And then on another level, I've come to worry that so many of these debates are distractions. It's like a magic show where we're being hypnotized: we're paying attention to a certain debate, and we're not seeing the man in the orange monkey suit jumping up and down in the background, or something like this. Diversity is a diversion from more important things. And the more important things can be
questions of economics, questions of science, questions of religion, maybe even other political issues. The economics one, just to rattle down the list real quick, is the cultural Marxism one. The Marxist critique of cultural Marxism is that when they started focusing
on race and gender, they forgot about class, they forgot about the real economics. And then there's a Marxist, or a libertarian, critique where you could say that we've had runaway housing prices, and that's the real problem; we should be figuring out how to build more affordable housing. As long as we're talking about the other categories, we're not even going to be wrong, not even going to be in the zone of dealing with that problem.
And then I think.
Auren Hoffman (10:35.423)
It used to be that the downtrodden were agitating for better wages or better working conditions. Now it seems like some of these movements are about things other than economics: some of the things you mentioned, or the environment. Is that also a smokescreen?
I don't know how conspiratorial you want to get. There's a Marxist conspiracy theory of history where wokeness was a conspiracy by the corporations to divide the workers by race and gender and pay them less. And I think there are various companies that executed on something like that plan moderately well. Walmart in the 2000s was always in the doghouse because it wasn't paying its workers enough,
and they came up with the idea of rebranding themselves as a green corporation, and that sort of split the left-wing anti-Walmart alliance. In effect, it was cheaper to do a little bit of green stuff than to pay the workers more. It was probably good for the Walmart shareholders, but it also, in some ways, didn't really address some of these underlying economic challenges. And I suspect there's something like that that's also gone on with
the question of science, which has been very obscured. If you think of it in the context of the universities or the schools, the wokeness tends to be focused on the derangements of the humanities curricula, in English or history or topics like that. Whereas maybe the thing that's really wrong is that the scientists aren't making any progress; they're not inventing new things.
It's all this sort of corrupt, peer-reviewed research. It's incrementalism. It's a stagnant, Malthusian, sociopathic institution. And to the extent that the humanities are distracting us from the sciences, we're not even paying attention to that. I'd be fine with a little bit of wokeness if we were finding a cure for dementia or doing other things like that.
Auren Hoffman (12:43.998)
And is there some sort of structural reason why science is not progressing fast enough? Is there something we can point to, or is it just all these 1% things that add up?
You know, I always think why questions are difficult. They're somewhat over-determined. There probably is some effect where, in certain fields, the easy things have been found and it's hard to find new things. It's probably very hard to find a new element on the periodic table, and you're not going to be like Christopher Columbus and find a new continent on this planet. So certain fields
get closed and get fully developed over time. But on the whole I'm more inclined toward cultural explanations. It's not that nature has run out of things for us, but that something has changed in our culture: we've become too risk averse, things have gotten bureaucratized.
One dimension that I do think is fairly important is that in the 20th century there was a great deal of science and technology that was used in a military context. At some point, the scientists and the engineers are just building more powerful and more dangerous weapon systems. Already with World War I and all its carnage, it's sort of ambiguous whether all the science is really more good than bad for humanity. And then certainly in 1945, with Los Alamos and Hiroshima, it somehow tilts us into a somewhat darker direction. And there's something about the history of nuclear weapons where, in my telling, there's a sort of delayed response; it takes something like a quarter of a century for it to fully sink in.
By the early 1970s, it's like we can destroy the world 20 times over. What are we doing? Does this make any sense? And then, you know, maybe we shouldn't be funding the smartest physicists to build bigger bombs. Maybe we shouldn't be funding any of the scientists. Maybe they all need to be regulated. And it's kind of sad for these people to be, you know, puttering around with lots of grant applications and filling out all sorts of DEI forms, but maybe that's the price you have to pay to stop them from blowing up the world.
Auren Hoffman (15:20.234)
It does seem like some of this correlates with the moon landing, which was this incredible feat, and then things just seemed to slow down right around that time. Is there something in the psyche, like, mission accomplished, we can now slow down?
Yeah, look, the history is complicated. There were a lot of things that happened. But it was possible to accelerate science and tech through centralization and government funding. Even the Manhattan Project: the New York Times editorial a week after Hiroshima said something like, if you had left this to prima donna scientists working in a decentralized way (a sort of anti-libertarian argument), it would have taken them half a century to come up with it.
Instead, the Army was just telling people what to do. It organized the scientists, and they were able to bring this invention to the world in three and a half short years. And in a way, you were able to repeat this sort of centralized, coordinated, pouring-in-lots-of-money approach with the Apollo program. Kennedy gives the speech in the early 60s, and by
the end of the decade, we have a man on the moon. But then I think the longer-term cost was that you created these very large bureaucratic institutions. You no longer had the innovations coming in that you could then scale; somehow it became politicized and it slowed down a lot. So there was some kind of trade-off, maybe not quite a Faustian bargain: you can accelerate one time, but then you get a scientific monoculture. It's like agriculture: you can increase food production by having a monoculture, but over time it's probably not the healthiest ecosystem.
Auren Hoffman (17:19.954)
When you're dealing with atoms, there are a lot of safety problems. If you think of early NASA, there were a lot of people who died, a lot of test pilots doing risky stuff on the side. Has that kind of safetyism come in to slow innovation?
Sure. I mean, I think Yuri Gagarin, the first cosmonaut to orbit the earth, died six or seven years later in a test flight. So yeah, there was a crazy amount of risk-taking.
And there was something that shifted away from that. It just felt too dangerous. There was too much risk of nuclear war, too much risk of environmental degradation. There were just too many crazy things people felt could happen. And I don't want to dismiss these existential risks, but I do think the trade-off is that we end up with a society that
was locked down, not just during COVID, but we've been in a soft lockdown for something like 50 years. And my bias is always we need to find some way to get out of the lockdown.
Auren Hoffman (18:49.942)
And how do you know where to draw that line? Not long after scientific progress started slowing, we mandated seat belts, and then bicycle helmets, and then helmets when you're skiing. In many ways these things are good; they protect people. So how do you know how far to go? There's some sort of Laffer curve of how far to go on the safetyism side, right?
Yeah, it's always hard to articulate the kinds of places where it feels like we've gone too far, or where it's gotten hijacked by various rackets. I think we've gone too far on the safety side with real estate, where it's just run away.
I'm looking out of my window here in Los Angeles, and all these office buildings were built in the 60s and 70s; maybe the 1980s are the most recent buildings. I cannot see a single crane anywhere on the horizon. And that tells me we've gone way too far in something as relatively important as real estate. And then I do think on the biomedical side,
which is an area that I've thought about a lot, it always strikes me that we could be doing so much more. There are so many approaches that seem quite promising, and yet the barriers are just very, very high. I think we're scared of the things that can go wrong, but we're not scared enough of
the things that will go wrong if we do nothing.
Auren Hoffman (20:48.666)
Steven Pinker was recently on the podcast, and one of the arguments most associated with him is the idea that we've been on a broad positive trajectory since the Enlightenment. Where does your understanding of history diverge from Pinker's?
Well, there's so much I think is not quite right about it, though it's certainly one part of the argument. Obviously there are ways that things are better than 250 years ago. You know, George Washington had wooden teeth or something like that; that seems like at least one dimension of progress that we wouldn't want to go back on. And one wouldn't want to go back 250 years, or even 100 years, just in terms of a lot of quality-of-life issues. It's a somewhat trickier question about the last 50 years. I think something has kind of hit a wall in the last 50 years, and that's more ambiguous. But the specific Pinker argument that I find very incorrect
is just that the world's gotten more and more peaceful and that violence has gone down. And I always have this riff: he's a psychology professor, and he probably flunked chemistry, which is why he went into a field like psychology. If you study chemistry or physics, there's this very basic thing that the total energy of a system is the kinetic energy plus the potential energy. And when you measure violence, you're just measuring the kinetic energy,
how many bombs are being dropped, and that's going down. But the potential energy, the number of nuclear bombs, the potential destructiveness, is way higher than it was 50 years ago. And if we look at that, I'm not sure we should be completely complacent. It's true that we're in this world where nuclear weapons have not been used since 1945.
I don't know why that's automatic. If the North Korean dictator makes a video of a nuclear bomb nuking the Golden Gate Bridge in San Francisco, we treat him as a cartoon villain. I'm not sure how we're supposed to deal with it, but I think we should be taking this stuff a little bit more seriously, and these existential risks are very real. And this is also where
I'm not a Luddite. I don't think we can go back; I don't think you can turn the clock back. But there are all these other dimensions of existential risk. There's an AI dimension of existential risk, where I think Eliezer Yudkowsky has gone kind of crazy, but his arguments are not that bad. And the people in Silicon Valley do not have great rebuttals to the existential risk of AI.
And there are environmental issues, not just climate change; there are a lot of different environmental dimensions where there are serious existential risks. We need to find some way to talk about them, not minimize them like Pinker does, but also not just shut everything down.
Auren Hoffman (24:22.302)
And part of the argument you're making is that, as a society, we could be a turkey in the first week of November and not know what's coming.
Sure, that's what the existential risks are; that's what all those arguments tell you. And again, take even something like the very strange nuclear deterrence situation: maybe these nuclear weapons never get used, but that's not the way nuclear deterrence was supposed to work. The way it was supposed to work, you were supposed to think about them all the time, you were supposed to be scared, and then you didn't use them.
It sort of worked during the Cold War, from 1949 to '89. And then for the last 33, 34 years, it's like we've just gotten psychologically exhausted by it and we don't think about it anymore. But that's not really the theory of how this stuff is supposed to work. And yeah, maybe the US president can't actually launch a nuclear weapon; I think JFK, LBJ, Nixon could. If President Trump had said, I want to push the nuclear button, I'm annoyed at the election result, I don't think he could have actually done that. But I'm not sure the nuclear weapons are completely unusable.
Auren Hoffman (25:39.062)
Now, you've also pointed out that we've seen a decline in both religion and science in the US over the last 50 years. On the face of it, one would think these things were opposed, but you have some sort of belief that they're linked.
Yeah, I don't know if I have a great theory on how they're linked. But let me say something about what I think has gone wrong with science, on the philosophy-of-science side. I always think the thing that's tricky about science is that you're supposed to fight a two-front war, against excessive dogmatism and against excessive skepticism. If you're too dogmatic, you can't be scientific. This was certainly
early modern science, in the 17th and 18th centuries: you were fighting the excessive dogmatism of the Catholic Church, or certain views of Ptolemaic astronomy. A scientist was someone who thought for themselves and questioned excess dogmatism. On the other hand, you also cannot be overly skeptical as a scientist. If I don't trust my senses, if I think you might not be who you appear to be, that you might be a demon or an image or something like that, this sort of Cartesian or Humean skepticism, there's some point where that's very bad for science. So you have to fight too much skepticism and you have to fight too much dogmatism, and then somehow getting that balance right
is pretty hard. I would say the history of science, when it worked, was certainly more on the anti-dogmatic side than the anti-skeptical side, though it was some of both. My rough qualitative sense, if we fast forward to 2023, is that it is 100% anti-skepticism. We are always fighting the people who are too skeptical:
the conspiracy theorists, the climate change skeptics, the vaccine skeptics, the stem cell skeptics, the Darwin skeptics. It's all fighting skepticism. Whereas if you asked scientists, where is science too dogmatic today? Give us some fields where science is too dogmatic and less dogmatism
would be good, I don't think they could say anything specific at all. And that fact tells us that it has become as dogmatic as the medieval church was. The only sort of anti-dogmatic characters left are maybe in a children's science book, where it's a little girl who's exploring the world. She's not dogmatic. But it's in children's books that we have the non-dogmatic scientists, not in grad school labs, where they're all regimented robots and we make sure that anyone who's even a little bit heterodox gets thrown off the overcrowded bus. So that's sort of a model of what has gone wrong with science. And then maybe, you know, to talk about religion.
Let's say the Judeo-Christian part in particular. Maybe the biggest thing you can talk about is God; maybe God is the biggest thing there is. And it's kind of a big difference whether or not God exists. And if we're in a society where we sort of have peaceful coexistence by obscuring these big questions, by downplaying big differences, somehow the question of God's existence is almost too big for us to debate. It's the kind of thing that we don't really want to have too vigorous a debate about, because we can't have differences that big and have a peaceful society. So there's something, it's not quite dogmatism, but there's something about the dogmatism of things we've agreed not to talk about or think about. Maybe it gets us peace, but I think it's at the price of not thinking about some of the most important things, or maybe at the price of something like a frontal lobotomy for both the scientists and the religious people.
Auren Hoffman (31:05.206)
I mean, 10 years ago, most of the people I knew in the AI community were militant atheists, but today it seems most of them are creationists: they think we're in a simulation. Why has that shifted?
Well, I have a lot of different theories on this. Let me give two slightly different ones. I think the New Atheists, which is a slightly different group of people, but the Christopher Hitchens, Dennett, Dawkins crowd of the early 2000s, Sam Harris and so on, have somehow gone very out of fashion.
The read I have on that is that what they were doing in 2005, New Atheism, was a politically correct way to be anti-Muslim. God was this bad, violent being, and there were ways you made it all-purpose; you attacked all the different religions. But the real target was ISIS, Osama bin Laden, all these sort of crazy Islamic terrorist groups. It was done in a politically correct way, where it was somewhat narrow. And that felt like an argument that was badly needed in 2005. When you fast forward to 2023, the big geopolitical challenge for the West is not
seventh-century fundamentalist Islam; it's something like Xi Jinping thought and the CCP. And that is sort of like a Borg, a kind of consensus thing where everybody believes what everybody else believes. And the structure of it, I'm not saying the content, but the structure of it, is disturbingly close to the East Bay rationalists, to the sort of consensus scientific thinking. And so the New Atheists had some very powerful things to say against bin Laden, but they don't have anything to say about why President Xi and the CCP are wrong. And so they are no longer relevant, because they can't even engage with the biggest geopolitical, intellectual, or social challenge the West faces. So that's one bigger-picture answer.
I think within the AI context, the question of how this shift to the simulation theory of the universe happened is a little bit strange. You could say that a simulation theory, where the universe is made not of atoms but of bits, was just a social status game in which the computer scientists were beating up on the physicists.
The physics people like to deal with matter and energy and particles, and the computer science people like to deal with zeros and ones and bits. So if we shift from a multiverse to a simulation theory of the universe, that's somehow the computer science people beating up on the physics people. You can think of it as an interdepartmental rivalry of sorts.
Another explanation I have, and I think these things can all be over-determined, for why the simulation thing became so powerful is that it was somehow deeply linked to the AI safety question. The rough logic, and it's not airtight, was that
if you were in a multiverse and you build this AI, there's a question: as you build AI, AGI, superintelligence, will it be safe? How can you trust it? Will it be friendly to humans? People were pretty optimistic about solving that, but it was already obvious in the early 2000s that there were some pretty big theoretical problems. If it's smarter than you, it might pretend to be friendly, it might fool you, it might not actually be friendly. If it's a Machiavellian or Darwinian operator, it's never going to be perfectly aligned with you; its incentives may diverge from those of human beings. So if you model it as a Darwinian or Machiavellian actor, it's very hard to get to perfect alignment.
You know, there was a decent amount of optimism about the AI problem generally, because we were progressing in computers, so it seemed like we'd eventually get to AI, to AGI. There's a certain logic to that. But the friendly version seemed a lot harder, for these theoretical reasons, even in, say, 2005. And so if your picture of the universe is a multiverse, then the AI or the AGI is in the future; it seems unlikely it'll be friendly, and when the singularity arrives, chances are we're all just going to die. If you're in a simulation theory of the universe, where the simulation was designed and created by some super-AGI being, then in some sense the AGI is in the past, and the compatibility of the AGI with humans seemingly was already solved. And if it was solved once, it can be solved again. So there's a way that the simulation theory worked as a partial solution to the very vexing friendliness and alignment problem. I don't think it's perfect.
The sort of quasi-psychological explanation I would give is: as people were grappling with the difficulty of building a friendly AI, they grabbed onto the simulation theory as the everything's-been-solved-already answer.
Auren Hoffman (37:55.278)
Going back to science: I've heard you say before that when people name something a science, it's generally less scientific, whether it's political science or social science, et cetera. Does that naming convention itself also make science progress less quickly?
Well, if you're insecure enough that you have to call something science, then yeah, it's sort of like the gentleman doth protest too much; it's this thing that means the opposite. I think a lot of adverbs always mean the opposite, which is why you should be very careful about using them. So it's like "frankly," "honestly"; "very" means "a little bit." And so the general good editing technique is to try to get rid of all
all adverbs in your writing, because the default interpretation is that they often mean the exact opposite of what they say. And there's something like this with science: certainly political science, social science, climate science. The strange one, in a way, is computer science. When I was an undergraduate at Stanford, it was the people who were not very good at EE or math who went into computer science. It was much easier to get in, much easier to get an A. And so it had this inferiority complex, and that's why the field grabbed onto the science label.
Auren Hoffman (39:26.966)
It was much easier to get into than some of the other engineering majors.
Auren Hoffman (39:42.07)
Yeah, it's not called math science or something like that. Now, the US federal government has $32 trillion or so in debt. How do you think that impacts the investments the country needs to make?
Math is just hard.
Auren Hoffman (40:02.105)
Or how concerned are you about it?
It seems like a very big problem. It's odd how it has sort of crept up on us. For decades, people were saying that at some point all the government debt would squeeze out money from the private sector, that you'd end up with more and more of the government budget going just to interest on the debt, or interest on the interest, some sort of runaway compounding. People were already making this argument in the 1980s, in a sort of anti-Reagan way. And then somehow the people who cried wolf, it felt like they were wrong for close to 40 years. If you look at the history of why the debt was able to grow and seemingly didn't matter, it's that we also had a bull market in bonds: interest rates steadily went down from something like 20% in the early 1980s to basically 0% after 2008. From 2008 to 2021 we had this 13-year period of, maybe a small hiccup, but mostly just zero rates. And in a zero-rate world, you can add to the debt and the interest payments aren't that high. So the annual cost of servicing the debt in 2021 was something like 1.6% of GDP, whereas in 1991 it was 3.2% of GDP. I think that's sort of my explanation for how this thing...
Auren Hoffman (41:54.478)
And interest rates were significantly lower than GDP growth, so you had that going for us as well.
Sure, but at this point it feels like something finally broke. We're no longer in a deflationary context: interest rates have spiked, inflation has spiked, and I think we're headed for a very, very challenging decade. I worry that a lot of these arguments people made for 30 or 40 years have a lot of truth to them, and once the rates are above zero, the crisis is here and we have to figure it out. Probably one other thing that both helped and hurt the United States was that we were the reserve currency for the world, so you could run bigger deficits than a normal country could; you could get away with it for longer. And then there's always the question whether, at some point, that means you're in a bigger hole, in a crazier place. Still, when I look at the US versus other countries, the conundrum I'm very struck by is that there are all these challenges in the US, all these problems we face, and it's very oddly still the case that almost everything else seems worse. I don't know if that makes it stable and this can continue, but that's...
Auren Hoffman (43:34.842)
Is it just that relative performance matters, like the fastest person wins, so even if we're much slower it doesn't matter? Or does the absolute matter?
We're still the most dynamic society. We're the place where the innovations are still happening. I'd like there to be more, but it's striking how asymmetric it is. One metric I was looking at was companies started since 1990 that have market capitalizations over $100 billion, so brand-new companies that have grown to be worth $100 billion or more. There are 17 in the world: 11 in the US, six in China, zero everywhere else. This is just the extraordinary failure of Europe and all these other places. And five or six years ago there was some complicated debate about the US versus China, but if you were the CEO of one of these 17 companies, you'd so much rather be in the US than in China.
Auren Hoffman (44:46.87)
Now, we used to sort society by things like race or religion, and today it seems much more like we're sorting by political ideology. How does that affect society over time?
I mean, it's always so hard to know exactly how these things play out. In some ways I think things are extremely polarized in the US politically; in some ways it always feels to me like the differences aren't very big. If we think about all the topics we've talked about: where are the Republicans and Democrats really different on getting back to a faster tech innovation trajectory, and do they have a meaningfully different plan for doing this? Or even something as prosaic as reducing the deficit. Maybe the Democrats are a little bit more on taxes and the Republicans a little bit more on spending cuts, but is there a meaningful difference in how much either party will reduce the deficit?
Auren Hoffman (46:08.554)
Yeah, or more housing, or just go down the list. They're pretty close together on all these things.
And so on all the issues we talked about today, which I would argue are the truly important ones, I wonder whether the extreme polarization hides the fact that there's so little difference. It's always Shakespeare versus Karl Marx. In Marx, people fight each other because they're truly different; in Shakespeare, they fight each other because there are no differences at all. It's like the opening line of Romeo and Juliet, "Two households, both alike in dignity": the Montagues and the Capulets hate each other, yet these two aristocratic families have no difference at all between them. I don't think it's 100% Shakespeare, but we're probably in a world that's 90% Shakespeare and maybe 10% Marx.
Auren Hoffman (47:04.394)
There's been a lot written about the decline of social mobility in the US, maybe since the time you're talking about, since the 70s. Is that happening? Is it a problem?
It's a problem. I think inequality is a problem, and I think the lack of social mobility is a problem. But I always anchor a little bit more on the stagnation generally. If GDP grew at 3% a year, even if inequality was big and it was hard for people to move from blue-collar to white-collar jobs, I think everybody would be better off. So I'm always more on this question of broad stagnation as the one to focus on. And I understand people always think that's just a cop-out for a rich person, but I do think that if we got 3% GDP growth, these problems wouldn't be as important. And if we have zero to 1% GDP growth, these things will be very, very hard to solve; it'll be very zero-sum, very contentious, and they probably won't be solved.
Auren Hoffman (48:35.102)
And the difference between, let's say, the 50th percentile and the 1st percentile in the US hasn't grown that much, but the difference between the 1st percentile and the 0.1 or 0.01 percentile has grown quite a bit. Is there some sort of recipe for problems because of that, like keeping up with the Joneses at the top?
You know, there are... Let me think what to say about it.
I don't quite know, but the 50th to the 1% is the middle class versus the millionaires, and then the 1% to the 0.0001% is the millionaires versus the billionaires. And maybe...
I don't know, and then you can get into this very complicated tax-policy debate about whether our policies are too nice to billionaires, too nice to millionaires, or about right, something like this. I think one way to describe the rough debate is this.
The reason I think it's sort of stuck, and I'll try to make this fairly neutral, is that the billionaires pay a lower tax rate than the millionaires. The millionaires pay ordinary income tax: 50% or more of their income gets paid in tax in a place like California. For the billionaires it's mostly capital gains taxes, and those can also be deferred, since you don't have to sell the stock right away, so maybe the effective tax rate for the billionaires is something like 15%, one five. And so the millionaires can always say it's unfair that the billionaires are paying a lower tax rate. But if you look at this from a government point of view, where let's say the policy is to maximize revenues: if you massively raise capital gains taxes, people just won't sell the stock at all, and the Laffer curve effect means that maybe the billionaire wealth goes down but the government collects less revenue. So I think we're much closer to the maximum tax on billionaires, whereas if you raise income taxes from 50% to 60% or 65%, probably the partners in law firms, all those people, will just work harder. And so that's kind of...
That's kind of the weird policy conundrum that we have. My libertarian answer, which is probably way outside the Overton window, is that we should not have this regressive tax structure; you just need to cut taxes massively on the middle class and the millionaires, and that's how you get to a non-regressive tax structure. But if you want the government to collect more in revenues, you need to make the structure more regressive, because the people who can pay are the middle-class, upper-middle-class, and millionaire people. So if you want a bigger government that collects more in revenues, it should look like Western Europe, where the marginal rates are about the same, 50% on income. In California, the 50% rate kicks in at a million dollars a year; in Austria, 50% kicks in at $70,000 a year. That's how you get to a larger government and can do more redistribution, but it has the effect of reducing mobility.
Auren Hoffman (52:37.398)
Now, I love your quote that courage is in shorter supply than genius. Is that new to our society, do you think, or has that always been the case?
It's always hard to calibrate, but my felt sense certainly is that there's some degree to which heterodox thinking, thinking for oneself, not deferring to the wisdom of crowds (I don't like the word contrarian) has somehow gotten harder to do than it was in our society 50, 60, 70 years ago. There are obviously all these good and bad things about the internet, but certainly one thing that's at least somewhat problematic, somewhat troubling, is that anything you put on the internet will stay there forever. So you have to think really hard about what kinds of ideas you're going to put out. I started one of these conservative student newspapers at Stanford, the Stanford Review, back in 1987. It was wildly heterodox, maybe obnoxious, maybe mean or cruel depending on your point of view, and people wrote crazy things in that paper. And then,
Auren Hoffman (54:16.01)
You can date the exact point when it moderated and became much less that way: 2002, when they started posting the articles on the internet. People knew that everything you write will be with you for the rest of your life, and you have to dial it back accordingly.
Auren Hoffman (54:42.474)
Now, you're kind of well known for conspiracy theories in some ways. What is a conspiracy that you believe that maybe people would be surprised you believe?
Look, I believe so many of them...
Auren Hoffman (54:53.346)
Well, I think there are a lot of things that, I don't know if they're full conspiracies, but you can have these emergent-property, as-if conspiracies, where it's not clear whether people fully know what they're doing, but they're acting as if it's a conspiracy. And I don't know, I think one category where very strange things happen is
when you have highly inelastic goods, where you change the quantity by 1% and the price goes up 20%, or something like this. So when the US government settled with the tobacco companies in the late 1990s, it in effect cartelized the industry, and then the government and the tobacco companies were on the same side of massively raising prices. And tobacco went from being this terrible thing to this major source of tax revenue and these monopolies that just printed money like crazy, because nobody who was not part of the settlement could sell tobacco. So it had this strange cartelizing effect. And I've been wondering whether there's sort of an ESG explanation of the major oil companies, where if we take
the oil majors: oil again has this feature where if you decrease the supply by 1%, the price goes up 10%. It's highly inelastic. Out of a hundred million barrels a day, take a million barrels offline and prices go up $10 a barrel, from 80 to 90, something like 10%. And this is the intuition behind the OPEC cartel.
But now take the major Western oil companies, Shell, ExxonMobil, Chevron, BP. If all the CEOs got together in a room and said, we are all going to cut our oil production by 30%, and then the prices will go up by more than that and our profits are going to go way up, and we're in effect going to
collude with OPEC and extend the OPEC cartel to all these companies, that would be a prima facie violation of Section 1 of the Sherman Act, and all these people would go to jail for antitrust violations. But if instead of doing that, each of them hires an ESG consultant who tells them they should take half their profits and invest them in solar panels and windmills, don't you get the same result, especially if the solar panels and windmills don't really work? Now, I don't know if I believe the full conspiracy-theory version of this, where you have to think of ESG as a conspiracy by the oil and gas companies to raise prices. But the effective truth of it is that as the companies leaned into ESG policies,
Even though the specifics have had a very mixed track record, their share prices have gone up, their profits have gone up, and the market feedback has been to encourage them to do more of it, whether they understand that it's working for this precise reason or not.
Auren Hoffman (58:37.406)
This has been really interesting. Our last question, which we ask all of our guests: what conventional wisdom or advice do you think is generally bad advice?
I think almost all of it is. I don't know where to start. The real advice is that I just don't believe in this one-size-fits-all, cookie-cutter thing where everybody needs to do the same thing. And if there's some conventional advice that works for everybody, the fact that everyone gives it is not differentiating. You know, I'm not interested in timeless eternal truths. I'm interested in figuring out things that are one-time, world-historical. What is it that makes sense for me or you to do in 2023, right here, right now? Whatever conventional bromides you get are never targeted for that. So it's all wrong.
Auren Hoffman (59:56.33)
And just to push back on these timeless eternal truths: there are some that are probably extremely important for us to recognize as a society, no? Or do you not agree?
Yeah, but I think the things that matter and that we should be thinking about are what's different about our time. I don't know. It's probably not a good idea to go around killing people, but...
Auren Hoffman (01:00:25.73)
But I don't think that gets you very far in terms of coming up with a good plan for your life.
Auren Hoffman (01:00:44.362)
Yeah, yeah. Oh, this has been amazing. All right, thank you, Peter Thiel, for joining us on World of DaaS. I'm a huge fan, so thank you again for joining us.
Awesome. All right, Auren, be well.
Auren Hoffman (01:00:53.878)
Alright, that was great.
Peter Thiel was the co-founder and CEO of PayPal, the first investor in Facebook, and co-founder of Palantir Technologies. He’s the founder and managing partner of the venture capital firm Founders Fund, and the author of Zero to One, one of the best business books of all time.
In this episode, Auren and Peter dive deep on venture capital, scientific stagnation, AI, tech startups, and more. Peter shares his compelling theory for why scientific progress has slowed down dramatically in recent decades, and explains how that's affected startups and investing.
Auren and Peter also survey the global economic landscape and discuss why the US and China have outperformed the rest of the world's economies by such a wide margin. Peter breaks down the conclusions from his book The Diversity Myth and explains why “competition is for losers.”