Understanding OpenAI's Recent Events: Reflections on Leadership
Chapter 1: The OpenAI Saga
This marks my fifth and final piece regarding the recent upheaval at OpenAI (you can find the previous four articles here, here, here, and here). This article serves as a more measured contemplation of the events that transpired from Friday's dismissal of CEO Sam Altman to his reinstatement on Tuesday. It aims to provide a defense of both OpenAI and Altman, though not for the reasons one might expect. Instead, it seeks to unravel the board's motivations and the underlying disconnect that led to the situation in the first place. Additionally, it offers some unsolicited advice about how OpenAI can enhance its operations moving forward—advice that, surprisingly, is less about recent incidents and more about the organization's foundational principles and early missteps.
The crisis has subsided. Order has been restored. Altman is back, along with Greg Brockman. The non-profit board has undergone significant changes. Both Altman and Brockman have given up their seats, as have three of the four members involved in the coup: Ilya Sutskever, Helen Toner, and Tasha McCauley. Adam D'Angelo remains on the board, which has welcomed new members Bret Taylor and Larry Summers (whether these are wise choices is left for you to judge). The board is set to expand to nine seats, including at least one for Microsoft.
In many respects, particularly in business and technology, it feels as though the past week’s events never occurred. OpenAI employees have continued to work diligently, and ChatGPT remains operational; users and clients can breathe a sigh of relief. However, there are still unresolved issues that warrant further discussion. Notably, the former board agreed to Altman's return only after he and Brockman relinquished their board seats. Furthermore, they mandated a more formal inquiry to clarify the reasons behind Altman's initial dismissal.
So, while normalcy has returned for employees, customers, and investors, the same cannot be said for OpenAI and its mission. The company's values have not shifted, but the recent turmoil has highlighted deeper fissures that had long been hidden beneath the surface, transforming previously minor disagreements into public scrutiny.
Section 1.1: The Distinctiveness of OpenAI
When I step back to assess the recent events, I see the same AI startup I've been observing for years, but now it feels more transparent (finally). It seems as though it has undergone a significant internal shake-up, revealing its raw and unrefined essence.
This situation has granted us an unusual glimpse into the typical power struggles and behind-the-scenes maneuvering that occur within prominent companies—both established and emerging. Such dynamics are commonplace, but we rarely gain direct insight. OpenAI operates under a public spotlight, despite its private status and growing secrecy regarding its scientific and engineering efforts. It's akin to a celebrity startup; we know its players, their objectives, and their motivations. The recent events have made this reality more evident than ever.
OpenAI's notoriety is not arbitrary; it stems from two specific realities. First, the company boldly claims to be working on what could be the most pivotal technology humanity has ever developed: AGI (artificial general intelligence), which its charter defines as "highly autonomous systems that outperform humans at most economically valuable work." The advent of AGI promises to redefine societal norms.
Second, regardless of whether OpenAI ultimately realizes this ambition, it has already made significant strides toward it (though not everyone concurs). The releases of GPT-2, GPT-3, ChatGPT, and GPT-4 have let people experience firsthand what AI can do, stirring imaginations about its future potential. For the first time, reality is competing with popular science fiction.
OpenAI's lofty aspirations and impressive milestones have positioned it firmly in the limelight, drawing both public interest and scrutiny. Its recent turmoil received heightened attention simply because OpenAI has branded itself as a unique entity, and to a notable extent, it has backed up that claim.
What transpired recently is not unusual; what’s unusual is OpenAI's self-perception.
Section 1.2: The Founders’ Original Sin
Some of the attention directed at OpenAI is inherently critical. But is the pursuit of AGI inherently problematic or worthy of condemnation? I argue it is not; building AGI is a far more complex endeavor than critiquing it. While hype can be detrimental, the absence of it can be even worse. This crisis has merely amplified the criticisms the company has drawn for years (and I am as guilty as anyone).
I believe the events surrounding Altman's firing and subsequent reinstatement would have been of little concern to the press or public opinion had it not been for one crucial factor: OpenAI's initial commitment was not merely to develop AGI, but to ensure it "benefits all of humanity."
This ambitious goal was not imposed by external forces; rather, OpenAI embraced it out of a deep-seated belief that AGI should be safe and advantageous for humanity. While this ideal is commendable, it has proven exceedingly challenging—even as we remain far from achieving AGI—leading to significant hurdles for OpenAI's mission. I doubt anyone outside the company genuinely believed this ideal could come to fruition. The OpenAI leadership exhibited a sort of naïveté. I do not subscribe to the notion that they thought such lofty standards would attract superior talent or create more opportunities. It was a mistake born from their most sincere idealism—the original sin of OpenAI's founders.
This foundational error has inadvertently led to a cascade of missteps over time. It compelled the founders to establish OpenAI as a non-profit, which resulted in the now-infamous board structure. Upon realizing they needed substantial funding that could only come from Big Tech, they found the non-profit model unsustainable. This led to a public reversal—facing warranted criticism—adopting a capped-profit framework and forging a partnership with Microsoft, ultimately jeopardizing their original mission (we later learned that Elon Musk’s withdrawal was a pivotal factor that drove Altman to seek investment from Satya Nadella). The grip of capital had finally constricted what began as an altruistic endeavor.
OpenAI also vowed to prioritize AI safety and alignment above all else. If the company had to be dismantled to ensure safety, the board would do so (critics of the board might dismiss this notion, but consider the implications of similar decisions regarding nuclear weapons, for instance). If anyone, including Altman, strayed from the safest path, the board retained the authority to dismiss them. If investors attempted to pressure the organization to pursue a high-growth, low-safety trajectory, the board would sever ties. This was crucial—at least according to Altman, just before his dismissal.
The founders collectively agreed that this was the essence of OpenAI: not merely an AGI company, but a safety-centric AGI organization, positioned as a counterweight to DeepMind, which had effectively become part of Google by then. Many people, both inside and outside the company, were energized by this resolute set of principles. Yet principles live in the realm of theory. As the saying goes, "no plan survives contact with the enemy." The real world, an omnipresent adversary of all idealists, has an uncanny ability to disrupt even the most meticulously crafted plans.
Chapter 2: The Need for Change
As time passed, possibly due to ChatGPT's remarkable and unforeseen success, or perhaps due to a shift in beliefs for undisclosed reasons, internal disagreements that had previously lain dormant surfaced at OpenAI. Perhaps the seeds of these differences were always there, waiting for the right conditions to sprout; Altman and Sutskever, for instance, come from markedly different backgrounds, suggesting a fundamental divergence in their approaches to AGI. Regardless, it appears that as reality evolved and their predictions adapted, those formerly minor disagreements became intolerable.
But such occurrences are common, happening everywhere and all the time. The reason this past week's events garnered so much attention is that OpenAI represented hope—a potential breakthrough emerging from an ostensibly altruistic premise that became increasingly challenging as they approached AGI. The founders set a bar so high that it resembled running uphill with their feet bound together. Had OpenAI begun as a typical for-profit entity—like most AI companies—would it have faced such public backlash over recent events? I think not.
What sets OpenAI apart—and makes it particularly vulnerable to criticism—is its self-imposed higher standard. We have evaluated it against this standard, which was inherently unachievable.
Section 2.1: Recommendations for OpenAI
What is problematic about OpenAI's principles? "A technology that benefits all of humanity" can be interpreted in two ways. First, I believe OpenAI always intended, from the outset, for AGI to benefit everyone. Perhaps not everyone equally (which is impossible in practice), but as a technology designed to be a net positive for the world (they believed this was feasible; I do not).
Second, the alternative interpretation is that when OpenAI claims to be “beneficial for all,” it implies that the benefits will materialize at an unpredictable, arbitrarily distant future point. This makes evaluation difficult and trivializes the concept. To some extent, all technologies can be deemed beneficial if given enough time. Consider the impact of agriculture, writing, fire, or the wheel—AGI will likely meet the same criteria. However, this is a mere semantic trick.
To avoid such misunderstandings and prevent future backlash, OpenAI should lower the pedestal on which it places itself and, like other companies, operate as a standard enterprise, even if it continues to pursue an extraordinary goal. Thus, here is some unsolicited advice for OpenAI:
- Move away from the narrative of being controlled by a non-profit: It's clear that financial interests drive the company, as is inevitable in a capitalist society. This is acceptable. What is problematic is the pretentiousness with which they project a moral high ground to construct an appealing narrative for the world.
- Abandon the "for all humanity" rhetoric: This casts OpenAI as a savior (a role that rarely ends well). Universal human values do not exist. Humanity is in constant conflict; our realities are so disparate that there is often no consensus even among ourselves. You are building AI systems by scraping people's data and relying on workers in vulnerable conditions to label and filter it, only for the resulting products to displace millions of jobs. The world is imperfect, but do not claim that AI will resolve all social and political issues (particularly those it may exacerbate) through a technological panacea.
- Stop portraying yourselves as the most important people alive: This framing fosters an almost cult-like atmosphere. The bubble in which you operate distances you from the rest of society. Perhaps you don't care, but this attitude shapes your relationship with the world. Do people genuinely want what you are building? Do you even care? These are crucial questions.
If OpenAI embraces these changes, it will be better off, as it would have been this past weekend. Substantial criticism will diminish, leaving only debates over methodology. Outcomes only become problematic or beneficial against the standards we establish. Google dismissed its leading AI ethics researchers a couple of years ago. Microsoft disbanded its responsible AI team earlier this year. Meta quietly did the same over the weekend. Yet it is OpenAI that faces the most scrutiny, simply because it professes to prioritize safety while Altman pushes for rapid growth. That asymmetry makes little sense.
Criticism arises when expectations fail to align with reality. Avoid this pitfall. OpenAI is an imperfect organization, like all others, with areas for improvement both internally and externally. It is navigating the challenges of an imperfect game, yet it is engaged in valuable work. Not everyone appreciates this aspect, but it deserves recognition and respect.
Section 2.2: The Importance of Transparency
AI is beneficial. AGI will be, too. Both can also pose risks, like any other technology. It is good to have diverse perspectives working on this, and the more ethically they operate, the better (this is undoubtedly an area where OpenAI and others could improve). From my viewpoint, neither OpenAI, Altman, nor the board acted inappropriately this weekend (at least, no more than usual), given the available information (the board provided no evidence of misconduct and explicitly stated that was not the reason for Altman's ouster).
Does the psychology of leadership influence the company, employee mindsets, and the public's perception? Certainly. Is Altman exceptionally ambitious? Persuasive? Power-hungry? Good for him, I suppose. He seems poised to achieve his desires, as he has in recent years. Will he prioritize growth and value over safety? Likely. Will he pursue other ventures (e.g., AI hardware, devices, energy, etc.)? That’s fine. However, let’s not forget he could have chosen alternative paths—some potentially more profitable and less beneficial to the world at large—that would have suited him just as well. Are Altman’s choices the best for a company purportedly developing technology that will impact us all? I cannot say.
The same reasoning applies to the board members behind the coup. They had their motivations, evident in the fact that they only agreed to Altman’s rehiring after he surrendered his board position and accepted the impending investigation. Without financial incentives, their actions seem driven by a desire to further the company’s impossible mission. Was the board's decision a bid for power or a measure to limit Altman's influence? I cannot determine.
Both Altman and the board acted out of self-interest, as we all do. Fortunately, it is in their best interest to operate in a way that broadly benefits society. The attempt to serve everyone is what led to the clash that ultimately benefited no one. It would be prudent for OpenAI to temper its ambitions (or at least their scope) while simultaneously enhancing its transparency. This weekend’s key takeaway is the need for more openness, clearer incentives, and especially greater honesty in communicating with the world. This is what I desire, and it is what they should have offered from the outset. No one would dare criticize that, regardless of profit motivations or slightly less grandiose objectives than making AGI the cornerstone of a post-scarcity world of abundance for all.
Some individuals have sided with the board (I anticipate more will do so following the investigation if we ever uncover what truly transpired). Others have aligned with Altman and Brockman (most OpenAI employees did). The outcome will largely depend on pre-existing beliefs about what is best for the world. The events of the past few days, along with Altman's actions (both known and unknown), are unlikely to fundamentally alter our perceptions of OpenAI—the organization and its business—compared to Friday.
What should shift our understanding of OpenAI is recognizing what it is at its core, not as a consequence of the recent crisis, but as something that has always been evident: the events merely illuminated its intrinsic inconsistencies and implausible aspirations.
To wrap up with respect to the outstanding issues, I predict they will be resolved swiftly and quietly. No new revelations will emerge, but rather clarifications of ongoing matters that have likely been in play for months, if not years. This is merely new information for outsiders. The board's decision was simply an error-correction mechanism functioning as intended, aligned with its mission.
I hope that the team at OpenAI will reflect on their identity and consider how they wish to be perceived by the world.
This article is a selection from The Algorithmic Bridge, an educational newsletter aimed at bridging the gap between AI, algorithms, and people.
The first video features OpenAI CEO Sam Altman discussing the future of AI, offering insights into the company's vision and challenges.
The second video presents a critical analysis of OpenAI's strategies, featuring perspectives on the company's direction and its leadership decisions.