Pro-Human Future: An Interview With Alex Epstein
Alex is a master writer, thinker, and advocate for humanity’s advancement through strategic use of energy.
Alex Epstein is the author of the New York Times best seller The Moral Case For Fossil Fuels and of Fossil Future (which can be pre-ordered here). He is a philosopher, the founder and president of the Center For Industrial Progress, a former adjunct scholar at the Cato Institute, and a former fellow at the Ayn Rand Institute, with words penned in papers of note such as USA Today and the Wall Street Journal, and he now publishes on his own highly recommended Substack. Alex is one of the greatest thinkers today on all matters relevant to human well-being, so subscribe to his Substack now and follow him on Twitter for more!
Tell us a bit about your background
I like to think what I've done in my actual, post-school life is a lot. I think my academic life is very unimpressive, but that was very deliberate. I went to Duke and I got my undergraduate degree in philosophy, and I minored in computer science, but I like to think that, had I gone to school 10 or 20 years later, I wouldn't have even gone to college, I would have learned on my own. I just find that, particularly today, when we have access to so much knowledge and to so many people, if you know how to be self-directed and introduce yourself to enough people who can give good feedback, the rate of learning is so far beyond what can happen in a school.
For example, having a company that's profitable is great, because I have a full-time researcher, and his basic job is to just learn everything possible about the relevant fields, and educate me, and correct me. For my book Fossil Future, I was able to pay to get help from a philosopher who I think is a better philosopher than I am, and he helped me refine my thinking.
I think that it's interesting that I get a certain kind of criticism that’s just, “Oh, you're only a BA in philosophy from Duke University? How dare you comment on energy?” I'm like: “I don't even really think of myself as that. I'm just self educated.” And I think we're moving to a world where, if you have really good arguments, and you can give really good evidence, then that's what matters. And I had a sense of that when I decided not to go to grad school. I'm very pleased that the world is moving even more in that direction.
In my office there's a painting of Kobe Bryant, and one reason is that he was a year before me, age-wise, and he decided to go to the NBA. And I thought: “If I want to be the Kobe Bryant of ideas, does it make sense to go to grad school? Or should I just go and do the work and do it as well as possible?”
I did not want to lock myself into a conventional educational thing, but rather be self-directed. And the great thing about getting older is you get more experience, and now I’m a much faster learner than I was, just because I know how to learn and what to learn and what kind of feedback to get.
So I'm wondering if you would recommend the same path for younger people today, since they're maybe seeing fewer opportunities to learn and grow and succeed that follow directly from school.
I would say you have to have some sort of validation that the path you are going to pursue is going to lead you to the type of mind you want to have, and the type of output you want to have.
One concept I use a lot in my thinking, and when I talk about any kind of success, particularly intellectual success, is what I call “the pie of ability.” To have the kind of influence I want to have and do the kind of work I want to do, what combinations of abilities do I want to have?
For example, for me, I always was a pretty logical thinker; my tagline on platforms was always something like logician or logician22. I always thought I had a good knack for that. But I was unbelievably uncomfortable speaking in public. So when I was at Duke, I made a decision when I was 20: “I'm going to run a philosophy club.” And the main reason was so that I could put myself in front of an audience and get over that discomfort, because I knew that if I wanted to do what I wanted to do, I had to develop that particular skill of public speaking as part of my “pie of ability.”
To develop your pie of ability you have to have some idea, some vision of what you want to be. And that vision has to be grounded in reality. I would ask people to think about and ask themselves: “Do I want to specialize in a particular topic? And if so, what's that and what would be interesting there?” And then also: “What level of mastery do I want to have?” And part of this you get from watching people. For example, I'm a huge Ayn Rand fan, and in college, I watched a lot of what are called Objectivist philosophers. And there were a couple of them who could answer questions so well, and it just blew me away. And that gave me a model. I knew that their minds were on a different level than mine in terms of their training, because I couldn’t answer questions at the time in the way that they did. But I had a model to follow. I find it super helpful to just look at the world and to see what you think is good in the realm of what you want to be in, and to have these different models. And then those models can help you formulate a vision.
And sometimes it's not even in your field. Like for me, with public speaking, my best models are people like Seinfeld, somebody who can just come into a room and blow you away. So when I think of my pie of ability, I usually find the best people in the relevant field. So not the best energy speaker, but the best speaker in general. I think having clear standards and looking out in the world and saying, “Hey, I want to be like this person, I don't want to be like that person,” in terms of how much they know, really helps.
But you might see some people answer questions about their field, and they don’t go very deep. If you ask them one or two “why?”s, it would just end, and you could see that they just have talking points. Whereas I wanted to be the kind of person where you just keep asking me “why?” forever and I'm going to have an answer. Or the question is going to be pretty deep and then I'm going to really clearly say I don't have an answer.
I'm saying all of that because when you're doing stuff on your own outside a traditional path, there is the advantage of being far more optimal, and also the hazard of being delusional. I'm trying to give what I've learned in terms of having what I think ended up being refined into a fairly optimal path, versus a delusional path, because people do go on a delusional path. And just one final element of the delusional path that I think is notable is: if you're on a path where you take forever to do the actual work you claim you want to do, that is a big sign of being on a delusional path. For example, many people who say they want to be writers do everything but write on their way—they go to school, they'll read books. I remember, I was talking to one person early in my career, and I had my first professional article published when I was 20. And I think he was talking about my second article, and he said, “How did you do that, like you actually got paid to write?” I said, “Well, I wrote a lot. And I kept sending work, and they actually paid me.” And he said, “Well, what I'm gonna do is, I'm going to read all the Durant history books, I'm going to read all of them.” Which are insanely long, but he went on, “And then I'm going to know so much, and then I'm going to write.” And I was thinking, you're never going to write, if that's your attitude. If you want to be a writer, part of having a non-delusional path is that you're actually doing it.
And I think a very comparable way to startups and products, which I have some experience in, is: if you're not on a path to creating anything resembling a product that's going to be in the public, and you're just sort of researching and training and learning skills, you're probably delusional.
So this leads well into my next question, which is: Why are positive angles so hard to see when dealing with anything with risk? Our general sensitivity to loss and misfortune outweighs our appreciation of both prospective and realized gains and I wonder—what about our psychology makes us this way and how did you manage to change this for yourself?
Really interesting question. I have a very clear answer in the realm of fossil fuels. And there are probably other answers, but I'll give you the answer from Fossil Future, which I have detailed very explicitly in the book. I did not fully understand this anywhere near the degree that I do now when I wrote The Moral Case for Fossil Fuels. So I think there are two things that are definitely going on that cause people to ignore the benefits of certain things, while then also catastrophizing the side effects.
To “catastrophize” means to take something that is either a manageable problem or a non-problem and portray it as a catastrophic problem. One thing I do, which you may have seen on Twitter, is that often I'll have a post with the hashtag #catastrophizing, and it's some designated expert in society having made some prediction about the present that turned out to be obviously, totally wrong. With fossil fuels, and many other industrial phenomena, you see the catastrophizing and then the ignoring of benefits.
An example would be agriculture. And I give this in Fossil Future where Michael Mann, one of the leading climate scientists and activists, has a book called The Madhouse Effect. And he talks about fossil fuels and agriculture, but he only talks about the negative side-effects. So he only talks about, for example, that if we raise CO2, it'll be warmer in certain places and we’ll have to move the crops etc. And that's a legitimate thing to talk about. But he does not mention in the book that all of the machines that make modern agriculture possible, and also the fertilizer, depend on fossil fuels. So as I like to put it, “Fossil fuels are the food of food,” you cannot talk about fossil fuels intelligently without talking about the benefits, and yet, he does. And so why is this? Why do people do this? I think the two big things are values and assumptions.
When you're talking about something being a benefit, benefit is a term relative to values; it means beneficial according to some goal, some standard of value. So, for example, if I say animal testing for research has benefits, I'm talking about benefits to humans. Testing a pig for research doesn't have a benefit to the pig; it has a harm to the pig and a benefit to humans. So if you see a scientist who says, you know, I'm against testing on those pigs, it could be that there's no real benefit to humans. But it also could be that his standard is not the humans. It's something other than the humans. And I think with many of the animal rights people, their goal, and therefore the standard by which they're measuring things, is not human well-being or human flourishing. It's the alleged rights of the animals. So they're in effect saying, “I'm willing for humans to die for lack of medical research, because my standard is the rights of the pigs or the other things that might be tested.”
Whenever we find somebody ignoring the benefits of something to human life, when they're in a position to know about it, we should very strongly suspect that their moral goal and standard is not consistently or at all, human life. And I think with the energy issue, it's very, very clear once you start to think about it, that the operating standard is eliminating human impact, or you can think of it as an unimpacted planet. So for example, when we talk about climate, I talk about “climate livability”: how livable do we experience the climate. But it's usually talked about in terms of climate preservation, or avoiding climate change, and the goal is to not impact the climate. But that's not a human perspective, because in actuality, from a human perspective, we want to impact the climate in certain ways. And we're open to the idea that we didn't inherit a perfect climate in terms of 280 parts per million of CO2, this wasn't necessarily the perfect thing. And in any case, if we want a livable climate, we need a lot of energy to do things like irrigation and heating and cooling, etc. So from a human perspective, your goal is climate livability. But today, the goal is climate non-change, climate non-impact, and that's the deeper goal, eliminating human impact.
So my broad view of fossil fuels is that the leading thinkers don't consider the benefits, because they're not measuring benefits the same way, their ultimate goal is an unimpacted Earth. And so they're measuring it by, “How little does this impact the Earth?” So when they see fossil fuels, they don't see that this benefits 8 billion people, they say, “Hey, this is impacting the Earth and impacting the Earth is bad.”
The other thing that goes on, and this affects maybe even more people, is that there's this assumption that the side effects of fossil fuels, particularly climate, are going to be catastrophic. So it's one thing to say there’s air pollution; there are certain obvious side effects of fossil fuels that are negative, that you don't want certain levels of. Particularly, historically, the smoke from coal was unhealthy, and all things being equal, you wouldn't want it. But of course it came with benefits, just as the caveman’s use of fire had smoke with it as well, but the benefits of the fire far exceeded the harms of the smoke.
There's always this assumption that there's going to be a catastrophe. And I think that should cause suspicion because we know that we've had 50 years of these predictions from (what I call) “designated experts” that we're going to have a catastrophe. So they say, “It’s going to be catastrophically cold, it’s going to be catastrophically hot, we're going to run out of resources really quickly. It’s going to get super, super polluted to the point where sunlight won't reach the Earth, it'll be hard to breathe.” And these are all predictions documented in Moral Case, and even more in Fossil Future. So, why does this catastrophizing happen? The answer is a second idea, which is what I call the “delicate nurturer assumption.”
This is the view that the Earth, absent human impact, is delicate and nurturing. It is stable—so it doesn't change too much. It's sufficient—it gives us what we need. And then it is safe. And my point is, if you have this view, then you will expect human impact to destroy the “delicate balance” of the delicate nurturer, and then hell will reign upon us. This is how people think about climate, they think we’ve got this climate at 280 parts per million CO2, and that it’s this delicate balance, and everything depends on it. And if we increase the CO2 to 300, 320, 340, now 420 parts per million, that's going to be a total disaster, and everything is going to crash. This is not a scientific view. It's a primitive religious view.
The real view that's true is what I call the “wild potential premise.” Nature is dynamic, deficient, and dangerous. It has amazing potential, but we need to harness it by transforming it. And if you have that perspective, it's very plausible that using fossil fuels to energize the whole world, to empower us, to use machines, is going to be really, really good, even if it causes the world to warm a degree or two. And in fact, you're open to the idea that a degree or two could be better, particularly because if you study the science, it happens more in cold places, which is where people tend to want more warming.
So I think these two ideas: 1) the goal of eliminating human impact versus advancing human flourishing, and then 2) this assumption that Earth is a delicate nurturer versus Earth as wild potential, that's what causes you to look at fossil fuels and to ignore the benefits and to expect that the side effects are going to be catastrophic.
I think there are other interesting issues with how people view risk in general, but to me the most interesting thing is: people are not risk averse with everything, right? Almost nobody ignores the benefits of everything, or catastrophizes the side effects of everything. So I think the reason people are doing so with fossil fuels has to do with that value and that assumption, versus some innate or some inherent tendency toward risk aversion, although that tendency may exist to some extent.
How do we convince people that human flourishing is worth more than anything else? Which I think it is, but it seems like a huge subset of our intellectual elite don't believe so.
The secret to promoting many good things is to define them clearly and to define their opposites clearly. So, with human flourishing, when people have an issue with that, it's often that it's positioned as being hostile to the rest of nature: “You just want everything to be a parking lot, you don't like animals,” this kind of thing. And so the key is to get the idea that what human flourishing means, with respect to the rest of nature, is that you want the healthiest possible relationship between human beings and the rest of nature, for humans.
For some parts of nature, I want to give them an artificially high chance at success. I have a dog named Sherlock, and I sort of move heaven and earth to make his life a success. He’s called a Klee Kai; it's a very cute little dog that looks like a husky, and that's a totally unnatural creation in the first place, designed so that human beings could have the fun of having a little wolf in the house. I am abnormally interested in that animal. Then take a polar bear. I have a big issue with the fact that people care a lot more about distant polar bears than they do about millions of people who are poor and lack energy. Nevertheless, the polar bear is my favorite animal. Aesthetically, I think it's amazing in terms of how beautiful it is. I like to watch it. But I like to watch it under very circumscribed conditions, namely, where it does not have direct access to me. So again, it's not anti-polar-bear; I’m pro-polar-bear, but in a certain controlled context. But then, say, a malarial mosquito, I just want to get rid of that.
You have to recognize that when you're looking at the world, and when you're deciding on actions, you have to decide which organism you are prioritizing. You can't be neutral, because organisms sometimes have harmonious interests, but they often have conflicting interests. And so, to say you're for human flourishing just means that when you're making these decisions, you're prioritizing the well-being of humans. And guess what, that leads to really enjoying nature as a beautiful place because that benefits us. I think what leads to people having an aversion to pursuing human flourishing is that it's portrayed as humans somehow being bad for everything about our environment, or about nature, that we like. And it's the exact opposite.
If you don't have a human flourishing standard, then you don't get to enjoy nature at all. Because you don't have any impact. You're very poor. You can't travel anywhere, you don't have time. And then you're like a primitive person who's living in the same place their whole lives and is just spending inordinate amounts of effort just getting enough food to eat to make it to the next day, and is meagerly protecting themselves against all sorts of natural dangers and being at the mercy of inanimate forces or even other animals ravaging them.
So I think it's about really clarifying the positive, and then contrasting it and showing that the opposite is really just a hatred of humans, because to say your goal is eliminating human impact, you're singling out the human race because you don't like its impact. People get really mad at me for the term I'm about to use. But I think they get mad because it hits home. What I say is like, the premise is, if your goal is eliminating human impact, the premise is that everything the human race does is bad. Everything we impact is bad. Bear impact is great, beaver impact is great, bird impact, all those things are great, but human impact is bad. So it’s a bias against the human race. And so I sometimes call this “human racism,” and people get so mad at that. But it is true. And it is bad. For a very similar reason that racism against specific groups is bad. In a sense, it's worse because it’s against every member of our species. When you say, “Why is racism bad?” it’s because it dehumanizes people based on an arbitrary variable like skin color, and it's telling some people, “You don't get to be human.”
But “human racism” tells every human: “You are bad, and you're inferior to the rest of nature.”
One of the things that helped me a lot is, when I was 18, I really learned the philosophy of this. And I really realized that the modern environmental movement’s goal is eliminating human impact. And humans survive by impacting nature intelligently. I didn't know anything about fossil fuels, and I was still scared of global warming and that kind of thing. But I had a view that this was an anti-human movement, and that I wanted nothing to do with it. And that helped me then be more suspicious of some of the claims before I looked into them than other people might be.
So I have a sort of high level question, one I don't think you've been asked before. But there's this idea you've presented which I really like about the benefits of transforming our environment for the sake of human advancement. There are groups in the tech space which have the same goal of human advancement but who want to see its realization from the opposite direction, with humans drawing a compromise between our environment and ourselves by transforming our nature, causing ourselves to require less food, less water, less space, and less energy overall; potentially halving what it takes for a human to be happy and indirectly making happiness more accessible to more people. The transhumanist way. I'm wondering if the transhumanist perspective is consistent with your broader framework of human flourishing?
Most movements I feel are very ill-defined. Like if you look at the political terms “liberal” and “conservative” in this country, I think they are incoherent blends of things, and so you don't really know much by hearing somebody self-identify as being one of those or the other. And for transhumanism, all I've seen are individuals saying, “Hey, I'm a transhumanist.” But I don't really know what that means.
What I can comment on philosophically, and you can ask this about anything when you want to know whether it is pro-human or not: “Are they advocating this because they think it's going to be better for humans? Is it going to advance human flourishing?” So you could say, “Oh, if we only used less energy it would actually be better for us.” For example, people can say, “It's good to have a calorie-restricted diet because, if you look at all the animal studies, plausibly the number one variable that leads to longevity is lower calories.” Okay, that's one thing. So maybe there's some version of that, versus, if it's the view that humans are bad and our impact is intrinsically bad. And we just want to eliminate that and this is a way to do that. So this is a technique for eliminating our impact as an end in itself. Is it a means of advancing human flourishing, or is it a means of eliminating our impact? Or, as I would imagine, in some cases, a blend, because often people are not clear about their goal, whether for unhealthy reasons or reasons of ignorance. I would be impressed if transhumanism were totally pro-human. It's very rare that there's a totally pro-human movement in today's society.
I’m probably significantly older than you are. I was born in 1980. So I was brought up in anti-human environmental thinking. And I think kind of anyone from the mid-70s on was as well. So these two anti-human ideas I mentioned, the goal of eliminating human impact and the delicate nurturer assumption, those are embedded deep, to the point where even among really smart people I'm always thrilled when I meet somebody who doesn't have those assumptions.
I won't out anyone, but I was talking to a really famous scientist. And we've talked about these issues, and I was waiting, I thought: “It's gonna be a long discussion. It's gonna take me forever to explain this stuff.” But he just thought my position was the most obvious thing in the world. Like, “Yeah, of course, with climate, we're probably impacting it. But we can do all sorts of things to deal with any impacts. We're really sophisticated beavers.” He had no concern about it. Because he understands that nature is wild potential. Human beings are good at mastering it, and we should master it. He didn’t impute some perfect significance to the Earth before humans, like he didn’t assume any of that environmental Original Sin, which is, you know, describing it religiously, which I think is a good way to describe it. But it's so rare not to have any anti-human ideas. And I certainly had a bunch of these anti-human ideas myself. I think any movement today that has a significant following is very likely to have some anti-human premise to it and some element of anti-human values to it.
I'm wondering now about the environmental activists you’ve mentioned. It’s almost cliche to ask, but, since they ostensibly care about the maintenance of our climate, why don't they ever support things like nuclear?
The position on nuclear is this incredible reveal that's very valuable to people who care about human life.
If you claim that the CO2 emissions from human beings' current use of energy are having this very adverse effect on the global climate system, and you care about human life, then you should care about energy. And why wouldn't you embrace this thing called nuclear energy that we have that, of all the other forms of energy besides fossil fuels, has by far the most documented track record of producing a lot of energy for a lot of people at low cost? And in the 70s, it was quite successful at doing this. And then you find that people who claim to care the most about CO2 emissions are the most hostile to nuclear.
It's not primarily the fossil fuel industry that's been anti-nuclear, though there are some shameful cases of that. And it's not even, you know, Republicans, if you want to make it political; it's more the left and certainly the environmental movement. So Greenpeace and the Sierra Club have been vehemently hostile to nuclear. So what's going wrong is the answer I gave before. It's that the leaders don't really value human life. Because if you really value human life, and if you know anything, you really value energy, and therefore, you'd be so eager for nuclear, so eager for hydro, and yet we see the opposite. And even with solar and wind, what we see is there's a lot of opposition to those, from the mining for them, to the building of them, which takes up a lot of space, to the transmission lines, etc. And the common denominator among all the opposition is, “This has a lot of impact.” So the underlying goal is eliminating human impact, not advancing human flourishing. So when the environmental thought leaders, when they see nuclear, what they see is, “Hey, we're impacting nature in this unnatural way.” They don't see it as, “Oh, all these human beings could benefit from energy, and that's amazing, because energy is amazing. And we want more people to have it with less pollution.” That's not the filter they're viewing it from, they're viewing it from the filter of, “How do we get as close as possible to an unimpacted Earth?” and for them, we shouldn't be splitting the atom and messing with nature and creating this waste that’s going to be around for a long time, which they consider terrible even though it's quite benign and we don't have significant problems with it. So it's all about the values. That's really the only thing that can explain the opposition.
I know you have a lot of connections in Silicon Valley. I think this is one of the wedge issues that's opening up people's minds to not only nuclear, but to a different way of thinking about fossil fuels. Because the opposition to nuclear is so irrational and so anti-scientific if you actually care about human life and energy. And people are starting to go: “Wait, maybe they don't actually care too much about human life and energy, those leaders.” Once you understand that the leadership is somehow corrupt in terms of values, then you're open to the idea that maybe that Alex Epstein guy has a point about the benefits and side-effects of advancing human flourishing, and maybe this is actually good.
Who profits from green energy products when they’re typically not worth what they cost to make?
Green energy products are a subset of morally-promoted inferior products. Who benefits from a morally promoted inferior product? I would say if something is promoted in an artificial way, then the people who make it can make money, even if they're not creating value.
For example, we have way too many solar panels in California, in the sense that we produce way too much electricity during part of the day, and not even reliably because of clouds, and then, during other parts of the day, including night, they're not working. Solar panel owners get paid a premium through what's called net metering. They get paid twice as much as they should or more, and they're producing unreliable electricity. That's a case in which the people with the inferior product are just getting paid because of government manipulation.
Who else can benefit? Well, the politicians giving the favors, they can benefit. Then the other thing I think is very important is whenever you have one of these moral movements, people glom on to it for status. So often people just think of motivation by money. I think motivation by money is often very healthy. If you want to create value and benefit from it that's a deeply healthy motivation. But there are unhealthy motivations to get money. The one I fear the most is among people who already have money. They're seeking to do things that are totally unprofitable—for status.
I don't know the exact motivations, but a phenomenon that could be this is Jeff Bezos. He’s allocated $10 billion to climate issues. And a lot of that is to anti-fossil-fuel and anti-nuclear groups. For example, he gave $100 million each already to the Natural Resources Defense Council and World Wildlife Fund. There's clearly a lot of status in that for him, he clearly doesn't need the money. No marginal amount of money is going to make a difference to his lifestyle. But it can make a difference to status.
I should say, with Jeff Bezos, I'm a huge fan. I consider Jeff Bezos, in terms of his productive ability and achievement, one of the great heroes in the history of industrial civilization. But it's upsetting to me to see this great producer putting so much of the wealth he's created in the service of these destructive causes. And maybe there's some element of, “They’ll leave me alone.” There could be that, but part of it is just that this is the high status thing to do. And I think we all need to be aware of, when we're doing the thing we say is good, are we doing it because we really think it's good? Or are we doing it because we want to be seen as somebody who's doing something that's good, because we want the status of that? And that's always something human beings need to think about. And I think many times people are doing unprofitable things for status in a very unhealthy way.
What are some of your thoughts on CO2 emissions having some positive externalities like global greening?
This is an issue where many people, including myself, really have had their thinking distorted by what I call “the anti-human impact framework,” or anti-impact philosophy. The ideas that 1) our goal should be to eliminate human impact and 2) the Earth is a delicate nurturer that human impact inevitably destroys. This is so true in the case of CO2. If you look at CO2 clinically, even apart from the huge energy benefits that come with it, and you just heard, for some other reason, that CO2 is rising, what is your view? I don't think the natural view is, “Oh, my gosh, the world is gonna end,” at all.
You know, first of all, that the planet has had 15 times more CO2 than it has today, and you know that temperatures have been up to 14 degrees Celsius, 25 degrees Fahrenheit, warmer than today. And those don't even correlate consistently; there's not even a direct correlation historically. And the world was much more tropical, as it tends to be when the world warms: warming tends to make the cold places less cold, not the equator super-hot. And then you know, specifically, it's a warming gas. And many people prefer warmth, particularly, again, in colder places. And then CO2 is a greening gas, so it leads to more plant life. And in general, the periods where there's more CO2, particularly with higher temperatures, were very lush periods, where more of the Earth resembled the places that we pay to go on vacation. That's not to say there's nothing to worry about, because you can be worried about whether there is a disruptive rate of transition from a less tropical state to a more tropical state. Like sea levels are a legitimate thing to worry about, though if you look at the rate of their rise, I don't think you can be all that worried. On the high end, it's like three feet in 100 years, which is well within our adaptive abilities. And even that is quite implausible. I would say we're currently at about one foot a century.
So if you look at CO2 rising from a human perspective, there are many reasons to think it would be net-beneficial, even leaving aside the energy benefits that come with it. Now I'm saying that that's your baseline assumption. I think the jury is still out. I don't think anyone has proven one way or another whether the benefits of CO2 on their own exceed any harms. But almost nobody thinks about this, and it's considered ridiculous to think, "Oh, yeah, it would be better. It'd be warmer." Like on one of my favorite shows, Curb Your Enthusiasm, Larry David is pretending to be a Republican, I think to get into a golf club. And he talks about global warming. In real life, his wife was Laurie David, who produced An Inconvenient Truth, the most influential piece of what I regard as climate catastrophe propaganda. So he's really immersed in this stuff. And he says, "Oh, yeah, warming, wouldn't it be good?" This is him pretending to be an irrational Republican. But it's not even a bad argument. That, "Oh, yeah, it might be nice to be warmer." So yeah, anti-human thinking, this view that human impact is immoral and inevitably self-destructive, it totally distorts our thinking about CO2. So I don't know whether it's a net benefit on its own. I know that fossil fuels are an enormous net benefit, but I think in a better system, you would have people really studying these benefits and really thinking about, "What would be the optimal level of CO2 from a human flourishing perspective?" not just: "What did we happen to inherit?" But what would we actually prefer?
My final question was inspired by something I saw while looking at a grants program created by Tyler Cowen and George Mason University. One of the application questions threw me for a loop by asking which mainstream opinion is one you actually agree with, which is a tough task when contrarianism has become a default heuristic for arriving at truth. And so I want to ask the same thing—what is one thing your opponents say that you actually agree with?
There's always that Peter Thiel question, which is the opposite of yours, and which I have a lot of answers to: the question about things that most people would agree with that I disagree with.
My basic premise is that the philosophical framework people are thinking about this issue using is totally wrong. In particular, it's got the wrong goal. The goal should be advancing human flourishing but instead the goal is to eliminate human impact.
It's got the wrong view of Earth. It should be wild potential, but instead it’s delicate nurturer.
And it's got the wrong method of evaluation. It should be looking at both benefits and side effects carefully from a human flourishing perspective, and instead it's ignoring benefits and catastrophizing side effects.
I think it's obvious we should use more fossil fuels when you think about it the right way. And people think it's obvious we should eliminate them. I mean, that's huge opposition. Right? That's like, the biggest imaginable opposition.
So here's what I'll say about the mainstream. I do think that rising CO2 levels impact climate. And I like that the mainstream is, at least explicitly and apparently, valuing scientific expertise. I do think it is crucially important that we utilize experts to make decisions about things now. I have a whole discussion in Chapters 1 through 3 about how that's not done correctly, and how different institutions distort what experts think. And they draw false conclusions about action, and then they attribute it to all of science. But I think it's very important that we are a society that deeply reveres expertise, including scientific expertise. And I have a huge aversion when people act like, “Oh, experts are always wrong, I hate experts.” That's a very destructive view.
The thing is, we need experts, but we need a much different way of utilizing them. And that's one thing I talk about a lot in Fossil Future: how do we have what I call a "knowledge system" where we can really learn from expert researchers without having their conclusions distorted? And so I do love experts, but I'm on a mission to change the way that we utilize them.
Stay up to date with Alex by subscribing to his Substack and follow him on Twitter for more!