
William Saunders, a former technical staff member at OpenAI, has raised concerns about the company’s trajectory, likening it to the sinking of the Titanic. Saunders, who worked at OpenAI for three years as part of the super-alignment team, shared his apprehensions on a recent podcast, stating, “I really didn’t want to end up working for the Titanic of AI, and so that’s why I resigned.”
During the podcast, Saunders elaborated on his doubts, repeatedly questioning whether OpenAI's direction was more akin to the Apollo program or the doomed Titanic.
“They’re on this trajectory to change the world, and yet when they release things, their priorities are more like a product company. And I think that is what is most unsettling,” he said.
OpenAI, the company behind ChatGPT, is striving to achieve artificial general intelligence (AGI), a form of AI capable of matching human-level performance across a wide range of tasks and of teaching itself. However, Saunders believes that the company’s leadership is making decisions reminiscent of “building the Titanic, prioritizing getting out newer, shinier products” rather than adopting a more cautious, risk-averse approach akin to the Apollo space program.
Saunders emphasized that the Apollo program was about carefully predicting and assessing risks. “Even when big problems happened, like Apollo 13, they had enough sort of like redundancy, and were able to adapt to the situation in order to bring everyone back safely,” he explained.
In an interview with Business Insider, Saunders expressed his concern that a “Titanic disaster” for AI could manifest in several ways, including an AI model launching a large-scale cyberattack, conducting mass persuasion campaigns, or aiding in the construction of advanced weapons.
“If in the future we build AI systems as smart or smarter than most humans, we will need techniques to be able to tell if these systems are hiding capabilities or motivations,” he warned.