Pathfinding With Tomas Pueyo: An Interview
Discussing men and therapy, culture and status, AI and consumer markets, and much more.
Tomas Pueyo is the former VP of growth at both Course Hero and Sigfig, author of The Star Wars Rings: The Hidden Structure Behind the Star Wars Story, a TEDx speaker, and creator of Uncharted Territories, a powerhouse publication that covers topics ranging from the Israel/Palestine conflict to OpenAI and artificial intelligence to much, much more. Tomas holds a master’s in engineering from Ecole Centrale Paris, a master’s in engineering from ICAI in Madrid, and an MBA from Stanford.
This was one of my favorite interviews I’ve ever done. Here’s a story that might illustrate why: four years ago, epidemiologists, virologists, and immunologists alike were baffled by Tomas after he wrote an article on the need to take COVID seriously with an expert level of detail and precision, and without any prior medical training. Lockdowns weren’t yet in place at the time, the mainstream narrative on the virus still waffled between dismissal and real concern, and public sentiment remained unformed, lying in wait for expert consensus to make the situation legible. Tomas was ahead of pretty much everyone by a mile.
He has a quality that I don’t think we have a word for; maybe we once did, but not anymore. There’s a scene in my favorite show where a main character, living in a medieval-esque period, tells his brother that too many books have been written about great men and not enough about morons. Here in the future we’ve totally reversed that trend, with seemingly every book and paper on human psychology being about anxiety and insanity, and now we have no taxonomy of mental completeness and perfection with which to categorize Tomas Pueyo.
We’re left in the lurch, on our own in defining something really cool that we rarely see. Maybe it’s genius as defined by Schopenhauer in The World As Will And Representation; maybe it’s genius as described by Goethe in Wilhelm Meister’s Apprenticeship, something beyond an individual that lifts him up; maybe it’s something else. When you’re done reading this you’ll see what I mean, and maybe you can help me define it. It’s a very unique experience, and we need to write more books about it.
I hope you enjoy the following conversation. And to read more from Tomas, subscribe to his Substack and follow him on X.
Sotonye: On why men don’t need or want talk therapy
On a recent re-watch of The Sopranos I couldn’t help but think about how the show is partly anchored around the tension between typical gendered behaviors around mental health (women seek out care more often than men on average, and stay in treatment longer) and Tony’s break from these expectations as a mob boss who goes to talk therapy. It’s striking and unusual for a man to put his mental world on a clinical platter, but I wondered: why? You have the best series on sex differences online, so when I had this thought I immediately wanted to ask you: what are the evolutionary explanations for sex-based differences in internal problem resolution? What do men need, if it isn’t talking?
Tomas:
People don't realize all the ramifications of one simple thing: How expensive eggs are.
Why do women have all the eggs they'll ever have at birth? Why don't they produce more through life? Because they're extremely biologically taxing. Eggs are orders of magnitude more expensive to make than sperm.
This difference is so huge that it's the basis of the differences between sexes. Females are literally defined as the individuals who produce the more expensive gametes—the eggs.
In most species, the egg carrier is also the gestator, which is yet another huge biological cost. And in species that rear their offspring, females are the primary caregivers because they are the ones who have the baby. Yet another huge source of biological cost.
What this means is that females are more biologically valuable than males, because they incur the most cost in having offspring.
Which means females have their reproductive success more or less biologically guaranteed unless something goes awry.
But not males. They must compete with other males to access females. Females get to be choosy, because they are the ones carrying the cost if they get the wrong guy. And what do they choose?
Biologically, we've evolved to survive and reproduce. So the woman will select the man who will give her as many children who will then survive and reproduce. That creates a lot of genetic requirements, like strong immunity, a great ability to gather resources, to defend them and the offspring, and commitment to stick around.
And since humans are very social, you have a shortcut to assess a lot of this: Who is the most dominant man? The dominant man shows intelligence and strength. If you're a physically weak man, a stronger man will overpower you. If you're not intelligent, a more cunning man will create an alliance to depose you. So a good mental shortcut for females is: "Let's let them battle it out. Whoever emerges as more dominant is the guy I like."
Of course, it's not the only mental shortcut. For one thing, dominance gives men more opportunities, which then reduces their potential commitment. There are many other such mental factors when choosing a partner. But the attraction to dominance is strong: so much so that, according to Aella, the most common (and unfulfilled) sexual kink in women* is to be dominated.
A dominant man can't be weak.
This is the source of all the literature around honor and strength and not showing weaknesses and so on.
This is why men don't default to seeking help for mental health—I need help, therefore I am weak, and hence not dominant.
It's why Tony going to therapy is so surprising. He's not just a mobster. He's the head mobster. The most dominant, the one who is not supposed to show weakness.
Of course, many people might react to this as "But this is stupid! Going to therapy shows a man is mature! It's like going to the doctor!"
This is where most analysis on sex and therapy stops. How do you square that circle?
A decade ago, in grad school, I was in a leadership class where we were told to be vulnerable. A student asked: "If we're vulnerable, aren't we going to be seen as weak?"
The professor replied: "There is a fine line between vulnerability and weakness. The key is to be vulnerable on things that don't challenge your leadership."
Which for a man means: on things that don't challenge your dominance.
Imagine a CEO crying on TV. Which of the following statements are acceptable, and which aren't?
My child just died.
I have cancer.
The new law is killing our company and we will fight it to death.
My strategy failed and I couldn't manage the executive team.
The first 3 are acceptable ways to show vulnerability, because they don't challenge the CEO's leadership skills. The last one does, and so it wouldn't be acceptable.
I haven't seen more than 3 or 4 episodes of The Sopranos, but I assume this edge between vulnerability and weakness would be an interesting one to explore for a mafia boss, so I wouldn't be surprised if you told me they played with that boundary through the seasons. Maybe Tony explores the differences, learns to be vulnerable without appearing weak, and when the two are conflated, he reasserts his dominance with brutality?
*There's self-selection in her sample but I think it probably stands in the broader population.
Sotonye: On the true evolutionary underpinnings of creative production
I don’t think I’ve ever seen anyone predict the course of a story from first principles until now. You’re exactly right, that’s exactly what happens in The Sopranos. Tony slowly learns to accept the difference between admitting a problem, or the need for help, and being weak, but when the lines eventually blur he puts on an almost psychopathic, vicious display. This is super interesting and gets me to my next question about status and how it makes us do what we do.
Robin Hanson has this great idea called the Kings and Queens theory of fertility, a pretty elegant model that predicts modern status-seeking behavior and fertility decline. He says, as a rule, that as absolute wealth rises so does one’s sense of relative status, and people with higher relative status invest more in accruing further status, engaging in things like poetry, sports, music, better education, and so on, and either delay having children or avoid it outright due to the opportunity costs to status. The theory says this explains the cultural and fertility trends that followed the wealth gains of the Industrial Revolution. But I’ve felt sort of off about this. I think a lot of culture can be explained as status seeking, but I’m not confident that evolution instilled such a general status-seeking motive. Do we write poetry and make shows like The Sopranos and invest in going to good schools just for status, and if not, what are the actual evolutionary drives? And is status seeking really what’s driving down fertility?
Tomas:
Ha! You should watch my TEDx! Stories, when well done, are engineered, so they can be reverse engineered. A mob boss going to therapy sounds like the result of a brainstorm: "How can we make a story about vulnerability? OK let's take the least vulnerable guy and force him to expose his vulnerability. Who's the least vulnerable guy? The guy who has to appear the toughest? A head of the military? An MMA fighter? A head mobster! And how can he explore his vulnerability? This is usually in his internal voice. How can we make it explicit? Let's have him talk to a therapist!".
Note that with the same structure you could have an MMA champion have to develop his vulnerability to regain an emotional connection with his estranged daughter, and that would probably make a good story too.
I've touched on fertility, but I have taken a pause because it's one of the most complex social engineering problems I've faced, since we just don't know the cause of the fertility drop.
What makes it so complex is that it seems to be an inescapable trend that comes as economies develop, which means there's probably a strong underlying trend. Yet the obvious culprits all have counter-examples. For example, here are a few standard theories:
1. A stable, rural life enables lots of children because they are assets (working hands) rather than costs (raising them properly and educating them well is very expensive). As economies develop, the ROI of child-making turns from positive to negative.
2. People just want at least a couple of surviving children on average. When mortality was high, they had to overshoot and have plenty of children. That's no longer needed now that healthcare and sanitation have slashed child mortality.
These theories sound true and probably are part of the solution. They explain many things, including some outliers, like why Muslim countries have more children than their GDP per capita would suggest (women stay home more, so the opportunity cost is lower). But they don't explain some other things: why the Baby Boom existed, why people in developed economies still want more children than they actually have, why Georgia was able to reverse the trend with a cultural initiative, why France went through the fertility transition before the Industrial Revolution, why the UK (the most developed country in the 19th century) was among the last European countries to go through the transition, or why there's been a recent inversion: rich people used to have fewer children than poor people, but that's no longer the case.
When you break down the problem from first principles, the complexity becomes even more obvious. Fertility is the result of:
Average life expectancy of a mother up to menopause
Share of them that get into a long-term monogamous relationship (let's call it marriage)
Age at which they get into the relationship
Desire for children
Time between marriage and 1st child
Pregnancy frequency thereafter
Young children's mortality rate
Then do the same for women who don't get into a long-term monogamous relationship, whether serially monogamous or non-monogamous
Across each one of these bullet points, you have:
Biological issues (e.g., if marriage comes too late and/or the first child is not conceived quickly thereafter, fertility drops like a stone and we get infertility issues)
Economic issues (e.g., the need for two breadwinners, given the stagnant incomes of the middle class, means fewer resources to raise children)
Logistical issues (e.g., cars only easily fit two car seats, modern life demands many activities for children that don't scale, etc.)
Social issues (e.g., the expectation for women to have a career, not to be a stay-at-home mum, to live life to the fullest, etc.)
Technological issues (e.g., contraception)
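The first decomposition above can be sketched as a toy back-of-the-envelope model. This is not from the interview, and every number below is an invented placeholder, but it shows how the bullet points multiply together, and why a late age at marriage or a low desired family size can each cap the total on its own:

```python
# Toy fertility model based on the decomposition above.
# All parameter values are illustrative placeholders, not real data.

def completed_fertility(
    survives_to_menopause: float,  # share of women living through reproductive years
    marriage_rate: float,          # share entering a long-term relationship ("marriage")
    age_at_marriage: float,        # age at which the relationship starts
    menopause_age: float,          # end of the fertile window
    years_to_first_child: float,   # gap between marriage and first birth
    years_between_births: float,   # pregnancy frequency thereafter
    desired_children: float,       # stated desire caps the total
    child_survival: float,         # 1 - young-child mortality rate
) -> float:
    """Average surviving children per woman, ignoring non-partnered fertility."""
    fertile_window = menopause_age - age_at_marriage - years_to_first_child
    if fertile_window <= 0:
        return 0.0  # biology binds: marriage came too late for any child
    # One child at the end of the gap, then one every `years_between_births`
    biological_max = 1 + fertile_window / years_between_births
    births = min(biological_max, desired_children)  # desire can bind before biology
    return survives_to_menopause * marriage_rate * births * child_survival

# Illustrative modern scenario: late marriage, desire capped around two children
tfr = completed_fertility(
    survives_to_menopause=0.98, marriage_rate=0.7, age_at_marriage=32,
    menopause_age=45, years_to_first_child=3, years_between_births=2.5,
    desired_children=2.2, child_survival=0.995,
)
print(round(tfr, 2))  # prints 1.5
```

Even in this crude sketch, the interconnection problem is visible: shifting any one knob (marriage rate, marriage age, desire, mortality) moves the output, which is why isolating a single cause of the fertility transition is so hard.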
What makes this into a puzzle is that all these variables somehow must be at least partially interconnected, because otherwise you wouldn't have all the countries undergoing the same transition as soon as they start developing economically.
And that's why I haven't written much more about it. Processing all this information sounds like a book-length endeavor where you dive into hundreds of papers and build your own datasets.
Short of that, you end up coming up with theories that can only be partial at best. I assume that's why Robin Hanson's theory doesn't feel complete.
I highly respect Robin and he's right more often than not. I don't think he's wrong on this: status is a very important game that's played all over the world. But the evolutionary point of status is making babies, not the other way around, so it sounds to me like evolution would have tweaked away a status drive that suppressed fertility. It seems highly unlikely that the major culprit for childlessness is that "somehow 80% of the population is now in this sweet spot where they have just the right level of wealth that they don't have children, gathering resources to try to get into the elite mode of making lots of children". Among other things, that sounds like an optimal strategy for some *men*, but less so for women.
So I just think his theory is incomplete—like those of most other people I've talked with on the topic. And hence my hesitance to discuss it further.
Sotonye: On the future of the AI consumer market
Let’s shift gears a bit and talk about artificial intelligence. You’ve written comprehensively about the subject and present a compelling case about its dangers, the most compelling case I’ve read so far. I’ve been pretty interested in but still largely unconvinced by the perspectives of the most vocal opponents of strong general AI like Eliezer Yudkowsky and Geoffrey Hinton, but you’ve been able to bring a tempered clarity here that’s squared some circles around the matter for me. And so I’ve been wanting to ask something further:
Before we get AGI we’re likely to see more progress in the areas with the most commercial potential, but what this progress could and should look like is a hugely important but still unanswered question. What are your perspectives on the way the AI market will shape up in the near term? Will we see vertical integration, with companies like OpenAI making fully AI-powered phones? Or will sex bots become common? And what kind of products would you expect or like to see as a high-level creator?
Tomas:
I don't have fully formed opinions on the topic, so this might be a good time to think out loud.
It's not clear to me that there will be huge companies like Facebook or Google in AI.
These companies were the result of network effects, where the more users you had, the better the service became. This is true of all marketplaces, but I don't see it in AI. I see a big cost of entry to train the models, but it doesn't look like it's big enough to eliminate competition. There are already half a dozen competitors close to the cutting edge: OpenAI, Mistral, Anthropic, Google Gemini, Meta... And odds are training will get cheaper with better algorithms and techniques. It also looks like Gemini Advanced is close to GPT-4 in performance, which suggests intelligence is an emergent property of neural networks rather than something unique OpenAI did.
If you think about it, that makes sense. There are very few differences in genetic code between other primates and humans. Odds are the differences are mostly just more layers of neurons, and maybe a few tweaks to how they work. But the basis is the same, so it looks like we live in a universe where intelligence is an emergent property of neural networks. I'm simplifying tremendously here, but all of this seems consistent.
If this is true, it will have lots of consequences we can foresee.
One is that we will reach AGI (Artificial General Intelligence) soon enough. Just looking at computing power, we should reach AGI in one to three decades.
It might be that we already have the necessary components, but we just didn't connect them properly. ChatGPT is extremely powerful, but it's just one module that takes words in and spits words out. It doesn't have modules for things like deciding its own goals or acting on them. So of course the intelligence it will show will appear limited! That's why I believe exploring AI agents is a heavily underestimated approach to reach AGI.
If it's true that AGI will be reached in our lifetimes, odds are the singularity will come around that time, and we'll get a superintelligence. We can't see beyond the singularity, so it's impossible to predict anything. But we can speculate about what might happen afterwards, assuming the AGI is aligned and doesn't try to kill us all.
First, nothing else really matters.
The fertility issue? Solved by the singularity, since a superintelligence (let's call it ASI) can understand the problem (and solve it), design devices to make babies, educate them better than humans would do, build robots to take care of their needs...
Wars? Most of them are due to resource scarcity, and most of that disappears after ASI. Not enough food? Increase productivity. Not enough energy? Do nuclear fission or fusion, or beam solar energy from space. Not enough raw materials? Mine them from space or transmute them.
Humans tend to focus on what mattered in the past, but that is becoming obsolete pretty fast.
Then there's the question of the interim. What will happen between now and ASI? I think productivity will explode, but it might not show up in GDP data, because a lot of the explosion will be deflationary: things that used to take lots of resources to make will suddenly take substantially less. In the short term, it will increase demand, but supply (productivity improvements) will be driven by AI while demand is mostly driven by human decisions, so odds are supply will outstrip demand, prices will fall, and industries will shrink.
The counterbalance will be that it's now much cheaper to create new companies and new markets, which won't require as many resources to build. We will see the first billionaire solopreneurs, and many unemployed people. Of course, this means inequality will increase. But wealth will be geographically spread unlike in the past, and billionaires will be extremely mobile. The world has never seen this before. Odds are tax bases will crumble, there will be tax competition for these people, and they will be able to coordinate to influence politics to their advantage. I wonder whether new city-states will be built on the basis of catering to them.
Another thing that I assume will happen is that the fight for attention will increase several notches, so we will need AIs to buffer us. We already have AIs that protect us from spam. Soon, our personal AIs will filter the content we get exposed to, to only show what's most relevant. At the same time, we will be able to reach more people at a scale never seen before, so we will need filters for that. The cost of litigation might drop, so we might develop AIs that sue, countersue, and protect us from litigation without us even realizing it. Our AIs might crawl the Internet to learn about people we might be interested in meeting, contact them, or filter these contacts. In other words, the information overload will only be manageable with AI buffers between us and the world of information.
This is all assuming AIs can't build great robots. Odds are they will be able to, at which point humans won't be much different from AIs, and we'll get into a Blade Runner world where we won't know whether a person is human or robot. In such a world, most of our social needs will have the option of being met by robots, and human experiences will just be a special version of that: special because it will remain scarce, not because it will be better.
An interesting analogy might be art. Up until the mid-1800s, paintings became more and more realistic. Then we invented photography, realism became completely devalued, and suddenly we had impressionism, cubism, and the like. A lot of their value is not so much the creativity as the fact that a human made the art and not a machine. Something similar might happen with relationships, with the added complexity that impressionism might be creative, but odds are AIs will be more creative than humans.
Put another way, we're entering a strange world.
Sotonye: On whether gains in business efficiency mean a loss of creativity
So this is pretty huge. This has clarified a lot of the ambiguity over what AI “is” for me. If I’m understanding this the right way, the simplest way to think about AI is as a tool for adding gains to general efficiency. The past is a good leading indicator here and I think pretty much confirms this: we’ve seen the use-case of “dumb AI” follow this exact efficiency-promoting pattern. Interestingly, that pattern rarely gets mapped onto the future when we think about AI, maybe because current AI is so seamlessly diffused throughout the business process that it’s sort of the air we breathe, and no one sees it! But this future makes a lot of sense to me.
I’m wondering now about whether our future efficiency gains spell boon or bust for creative innovation and progress, and I’m trying to reason about it through analogy: for example, there’s a case to be made that the reason we get 1,000 Batman remakes every day before sunrise and a new Apple tablet mini every day after sunset may be less about a spiritual or other kind of Spenglerian decline, and more about businesses just working better, becoming extremely efficient. 90s Hollywood and 2000s Apple, without big efficient databases, may have left industry executives with only vague insight into the day-to-day of internal operations and the finer details of outward markets, and product ideas may have been greenlit that would otherwise have seemed too risky. Does efficiency create a stagnant culture, or is Spengler right that a dearth of transcendent vision creates such conditions? I am seriously desperate for good new movies and I’m worried that the age of quality is behind us!
Tomas:
I fear your analogy might be misleading, and I'll tell you why in a moment. Instead, I would use the analogy of what you and I are doing now.
30 years ago, it would have been impossible, because creators like us were extremely rare. Why? Because bringing insights to the market had high production, transaction, and distribution costs.
To get distribution, you needed to physically print a paper and distribute it with vans, or emit a radio or TV signal. Since that's expensive, only a few did it, and they controlled the content.
The content itself was expensive too, because the production values required equipment and humans supporting the shows, or research and trips and phone calls from journalists and producers.
You needed agreements with payment processors, rev share agreements with different partners in the stack...
The result was that there was little content. Supply was lower than demand.
But now all these costs have been eliminated. Creating an article takes one person's time with Substack. Creating a video takes one person's time with TikTok or YouTube. And they can live off of that.
The result has been an explosion of supply. That's what reducing the marginal cost of production does.
With the explosion in supply, a few things have happened:
Now supply outstrips demand, and we're hitting a limiting factor that we had never hit before as a species: our attention. It's now precious. It's scarce. We have to be very cautious about how to use it, and this is not something we've evolved naturally to do.
When you create so much supply, the vast majority will be shit. But some will be amazing. It's the wild west, with lots of bad things happening but also gold rushes. In other words, the distribution of content quality will change, from something narrow but reasonably high quality, to a much broader distribution that includes lots of duds and a few pieces of gold. This is how you get people like Ben Thompson or Veritasium.
Social media fulfills a double function: it crushes distribution costs and acts as a filter for content quality.
AI is going to push this trend further. We are going to drown in supply, and most of it will be bad, but some of it will be exceptional.
This means we will need means to filter content quality. Social Media already fulfills that, but it's about to get attacked by this AI-generated content. Will we need other tools?
It also means we're about to enter a world full of weird content, where most of it is trash, but some of it will be the best content ever.
This is why my intuition tells me your analogy might be misleading. The reason studios make 1,000 Batman movies is that movie costs are so high. The stars, the filming crews, the editing, the CGI, the marketing, the distribution deals... Most Hollywood movies cost between tens and hundreds of millions of dollars. You can't get that wrong. So movie studios don't take risks; they stick with things that are more likely to succeed. Superhero movies have filled that role lately, simply because they have high household-name recognition and people like that type of movie. More generally, that's why we have so many sequels: lower risk is important. Making a new Apple device is extremely expensive, so they need to get it right. Shipping one more iPhone is a no-brainer; it's a cash cow.
AI will reduce production costs, and that will enable more risks, not fewer. Across content, but also across other parts of the economy.
A weird world beckons.
Sotonye: Final Questions
My final two questions for you are:
1—
You think and write at an insanely high level, how can we do the same? When someone is really good at thinking about things I always assume a Bayesian reason for it: I assume that they’ve just read all the right books on questions that come as natural concerns to most people, spoken to all the right people in those domains, and had the right real-world experiences, and as a result came to the conclusions they’re known for. Is that how we improve our reasoning, or should we skip the research and just try to acquire better broad intuitions? For example, I think about the former world chess champion Emanuel Lasker often. He deviated pretty far from the way chess orthodoxy understood the game. Instead of it being something positional or strategic, he held it to be almost purely psychological, and this is a broad intuition I’ve run with that has improved my game a lot more than the dozen chess books I’ve bought on Amazon. How can we raise our thinking to a higher level?
And
2—
My last question is about being Tomas Pueyo:
Your background is as far reaching as it gets. You’ve studied in three different countries, you’ve worked in tech on building out huge projects, you run the equally far reaching eponymous Substack. I want to know what the engine looks like under the hood of this here rocketship. What are your health, diet, and work habits like? And what pulled you toward writing after your time in tech? I’d like to end by going back to the beginning and learning about your origin story.
Tomas:
Thank you!
I have a realistic and an optimistic answer for you. Let's start with the realistic one.
I'm a firm believer in the fundamental attribution error: People think their wins are theirs and their failures due to bad luck, and tend to think the opposite for other people.
A lot of what I do well is due to luck. In my case, I'm really not very special. I had a boringly optimal upbringing, in a stable upper-middle-class family that loved me and helped me study and travel.
My daily habits today are not that special. I wake up, prep the kids for school, eat something healthy (egg whites with tuna usually), work for 5-6h on my computer, go to the gym, eat (again, healthy: normally salad, protein, and a bit of carbs), work 4-6h more, have dinner, take care of the kids, go to sleep, where I sleep 7-8h when I don't have anxiety, and 5-7h when I do. I am not biography material.
The weird thing my brain does is that it's in constant analytical mode. It is always trying to understand why things are the way they are. It never stops, I never tire of it, I enjoy it, and I quickly get bored when I can't do it—for example, when enough of my attention is caught on something I can't deconstruct. This is not really something I've chosen. It just is.
The consequence is that, you're right, I am constantly reading. Not just books. In fact I read many more scientific papers than books. Maybe 3-10 abstracts a day on average. I also consume content on Twitter or newsletters. I avoid newspapers as they focus on the short term, shallow, narrative-driven explanations rather than long-term, deep, systems-driven ones.
So most of the time, when I tell you something, I'm channeling other people I have happened to read. In our exchange I've channeled people like Robin Hanson, Rob Henderson, Aella, Ben Thompson, Guillaume Blanc, Joseph Campbell, John Yorke, Tim Urban, and dozens more. A lot of creation is recombination anyways, so by reading a lot, you can create a lot.
The desire to be constantly analyzing drives my curiosity, and this might be the one aspect that can be replicable: I really lean into it. So when something interests me, I drop everything else and I just focus on it. If I don't sleep for days, if I fail at other commitments, or if I'll be late on all my other deliverables, so be it. That makes me unreliable, which is bad. The good side is that it helps me compound my interests over the long term.
Note that this is exactly what happened during COVID: Something weird started happening, and I just dived into it. After 3 weeks of studying non-stop in February 2020, I was in a good position to write about it in a way that resonated.
So what are things that I do that are more or less replicable by others?
I am constantly thinking about why everything is the way it is. I don't just take the shallow answer for granted, but wonder what mechanism, what system brought that thing to life.
I follow my curiosity hard. If something is interesting, I'm going to dive into it at the cost of everything else. I always prioritize important over urgent.
This leads me to be interested in the systems underpinning many different topics. That's what Scott Adams (and now I) call skill stacking. Since most people specialize, you end up having quite a unique combination of areas of expertise.
I focus on both the content and the delivery. Many people dive deep into the content and become world experts on a thing that nobody understands. Others focus on communicating (think some journalists or YouTubers), and end up shallow. But if you focus on learning some discipline and also how to communicate it (whether on Instagram, writing, videos, TikToks, tweets, whatever) then you can stand out.
If you focus on communication, you need to be good at both writing and visuals. Writing helps you think. Visuals help you convey the idea.
Quantity beats quality. If you don't produce and publish, you can't learn quality. So you have to start by putting stuff out there fast and frequently. In the beginning, you're going to put out trash anyways, so just accept it and publish it. Nobody cares anyway, and if they do and you fear that, just use a pseudonym.
When you hit on something that works, mine that gold mine. I didn't do it on COVID because I just wanted to help and I didn't find it too intellectually stimulating, so once I wrote the pieces I wanted to write, I stopped. If I were to go back, I'd write much more, to gather a bigger audience. I'd do the same with geography, which works well, but I only wrote every now and then about it in the past. So try many things, but if you find your niche and you want to live off of that, focus on that.
I'm sure there's more but I can't think of anything!
Thank you for your kind words and your insightful questions! It is very, very difficult to ask good questions. It is an underestimated skill, and you have it. And it's not just skill, it's also a lot of work. So congrats!