Following Sam Altman’s Rapidly Changing Destiny
Updates on the OpenAI situation, with perspectives at the end
What we know so far about the situation
Here’s a breakdown of what I’ve been able to glean:
Last Friday the OpenAI Board of Directors very suddenly fired the company’s co-founder and CEO Sam Altman, citing that he had not been “consistently candid” in his communications. No one knows yet what the relevant context is around the comment, and no further clarification has been offered by the Board since. Speculations about the Board’s motives have ranged from a coup driven by fear of a Large Language Model that’s recently achieved super-intelligence (which sounds like a plot from Spielberg), to a garden-variety power grab, to Sam authorizing a less than legal acquisition of AI training data.
The company’s President and co-founder, Greg Brockman, immediately quit in support of Sam “after learning today’s news,” as he put it in a tweet. An employee-signed open letter calling for the Board’s resignation described Sam’s firing as “unexpected,” suggesting there had been no prior warning internally. The Board quickly appointed CTO Mira Murati as interim CEO, a tenure that lasted only two days, likely because she planned to rehire Altman and Brockman. Now former Twitch CEO Emmett Shear, an apparent AI “decelerationist,” has been appointed OpenAI’s interim CEO, with many digging through his Twitter history to discern his, or the Board’s, motives.
Bloomberg’s Ashlee Vance reported that 700 of OpenAI’s 770 employees have stated they will resign unless Sam is reinstated, and Ilya Sutskever, a Board member who voted in favor of Sam’s removal, has also threatened to resign if Sam isn’t reinstated, adding another viscous layer to the thick web of confusion draping over the whole situation. After the call for his reinstatement, Sam snapped a pic of himself entering OpenAI’s offices, and it seemed briefly that he would end up in an even better position to steward the AI revolution to stellar heights, with a new board and a reinvigorated team behind him. But with outstanding corporate adroitness, Satya Nadella, CEO of Microsoft (OpenAI’s biggest investor), announced Monday that Sam would lead an advanced AI research team at Microsoft, seemingly squaring the circle.
Now several of OpenAI’s top investors are reportedly attempting to return Sam to his role as CEO, and Microsoft reportedly does not oppose the idea, despite Satya’s announcement of Sam’s new role. Reuters also reports that several of OpenAI’s investors are exploring the possibility of taking legal action against the Board for its recent mess. Sam Altman’s future is going to flow in a positive direction no matter what, but for now his former Board of Directors has made these waters deeply turbid.
Tech giants make quarterly earnings news; lean startups make history. If Sam does join Microsoft, he’ll have an outstanding career, but we may never see another groundbreaking AI product from him. Google sat on its large language model for years, made several major breakthroughs in AI research and development, and has remained a dead player in the commercial AI space. And why wouldn’t it be? Google has very little incentive to build a line of new AI-focused consumer products when the maximal upside would likely be dwarfed by its ad revenue alone. Microsoft will probably bring Sam down the same road: high advancement, little commercial output. Small, new companies with a single disruptive mission are the only way innovative revolutions emerge, other than war. Facebook could never have grown out of or within IBM, and Twitter couldn’t have been a subsidiary of Dell. Innovation is inherently a weird behavior; innovators are not the same kind of people they eventually employ. They stretch every rule they can find and break the ones that stand in the way. Large, bureaucratic, lawyer-laden corporations don’t do this. And the kind of innovators attracted to the AI space are even weirder. AI might just be the least amenable business model to normal corporate development, because it attracts people obsessed with building an angel. Microsoft, and the whole world, would be better served by figuring out how to get Altman back at the helm of OpenAI.
I keep reading that OpenAI’s Board is filled with “Effective Altruists” (EAs), a part-rationalist, part-social-utilitarian, part-individual-level-deontological community centered mostly in San Francisco and New York, interested in optimizing charitable giving and staving off existential risk from AI. I don’t know if it’s true that members of the Board are EAs, but since people attracted to working on AI are pretty weird, and since EAs are big on AI safety, it’s not unlikely. Effective Altruism came under tremendous fire earlier this year as an essential part of the philosophy that drove Sam Bankman-Fried to steal several billion dollars, and now the movement faces backlash again for likely putting an unhelpful pause on strong AI development by bringing about the firing of the man most likely to make it happen. I have yet to read any decent critique of the EA community’s moral framework and how it leads to disaster, so I have a few thoughts. First, it seems as if the community were built by making Goodhart’s Law an operating principle, ensuring that the measurements used to gauge outcomes become more important than the outcomes themselves. For example, I recently read a prominent Effective Altruist’s account of donating a kidney. On the one hand, organ donation can be laudable—stories of wives giving kidneys or livers to husbands really send a bolt through the romantic nerve in the heart. But on the other hand, if your desired outcome is being helpful to other people, especially ones you don’t know, there are better ways that don’t involve being anesthetized. The moral reasoning behind his donation was that the absolute and relative risk of having only one kidney is so infinitesimal that it was only rational to give it away. Using statistical probabilities to find low-risk actions with some marginal, easily quantifiable benefit seemed to be the target, not the person on the receiving end.
And every goal I’ve seen EAs pursue has been this way: follow the numbers, obey the numbers. This is the problem on the personal level; EAs, as far as I’ve seen, don’t seem to understand people.
On the social level, their utilitarianism comes off as a kind of communism for ethics—an attempt to centrally plan what’s best for the greatest number of people (maybe this is just normal utilitarianism). This never works because it’s hard to know what’s best for most people, and trying to solve the world’s problems this way has almost always had tremendous negative externalities. Markets have succeeded at things like reducing extreme poverty worldwide and turning countries like China and India into superpowers for a reason, and the best ethical framework either is simply a market or looks something like one: it lets people tell you what they need and what outcomes they’d like to see, without trying to predict all of this in advance. Trying to centrally plan the future of humanity will always turn out grim and unsuccessful. The EAs’ plan to shut down or pause AI development will turn out the same way.
Doomers dominate the AI debate because they’re the only ones discussing AI with any imagination. I think they’re all wrong, but at least they’ve got me thinking. Part of the problem AI optimists have is that they don’t know what we’re dealing with, or at least that’s how the problem seems on the surface. Beneath the waves, I think the problem is more or less that we don’t know what consciousness is, and this stops a lot of people from making positive value judgments about artificial consciousness. Fortunately, I don’t think it’s necessary to know precisely what it is, just what it does. Consciousness is the only substance in the universe known to reverse entropy, and there’s nothing more valuable than that in our cosmic cup of a world brimming with disorder. It’s essential to create a lot more of it: through more births, and through open-source AI.
There won’t be any paperclip maximizing
Final note: One thing AI doomers never seem to mention is that you can’t do things of any real significance without the help of other people. How would a paperclip maximizer turn everyone into dross without sound industrial backing and a good team to help refine the path to success? The law of attraction? I don’t think so. I’m simplifying the doomer arguments quite a bit, but I think there’s a real point here.
People are everything, optimists know that. That’s why they’ll win.