Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety’

Nov. 17, 2023
OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing artificial intelligence safely enough, according to people with knowledge of the situation.

Such disagreements were top of mind for some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.

At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns.

THE TAKEAWAY

• At OpenAI, divisions persisted over AI ‘safety’

• Co-founder Ilya Sutskever led an AI safety team

• Sutskever took pointed questions from staff on Friday

“You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do.

When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”

While the six-member board that fired Altman didn’t explain the reasons for the move, other than saying Altman “was not consistently candid in his communications with the board,” safety was a big theme in the company’s internal damage control following the firing. (Altman didn’t participate in the vote to oust him, the company told staff.)

For instance, in a memo to employees on Friday, interim CEO Mira Murati, who has been overseeing many of the company’s teams, referenced “our mission and ability to develop safe and beneficial AGI together.” She said the company had three pillars: “maximally advancing our research plan, our safety and alignment work—particularly our ability to scientifically predict capabilities and risks, and sharing our technology with the world in ways that are beneficial to all.”

Altman co-founded OpenAI as a nonprofit in 2015 with Sutskever, Tesla CEO Elon Musk and others to act as a counterweight to Google and the other for-profit companies that had pioneered AI and recruited the world’s best AI researchers. The OpenAI co-founders were concerned that a for-profit motive would come at the expense of society’s safety, and that concern was a big part of their pitch in recruiting employees over the years.

But in 2019, when Altman became CEO of OpenAI, he helped form a for-profit entity governed by the OpenAI nonprofit so the company could raise money from outside investors like Microsoft and afford enough servers to train the best AI. His solution for limiting the power of for-profit greed: a theoretical cap on the profits the company could generate for its principals and investors.

Concerns about AI safety nonetheless divided the startup’s leaders. In late 2020, a group of employees split off from OpenAI to launch their own startup, Anthropic, because of differences over the company’s commercial strategy and the pace at which it released its technology. To ensure its own AI development wouldn’t be influenced by financial incentives, Anthropic formed an independent, five-person group that can hire and fire the company’s board.

For years, AI practitioners have raised concerns that more powerful models could be misused in the wrong hands, for instance to develop biological weapons, generate convincing deepfakes or hack into critical infrastructure. Some also believe that the systems could eventually become autonomous and go rogue at the expense of humans. Proponents of “AI alignment,” which refers to techniques to prevent those harms, have generally advocated for more guardrails on how the technology is used, including limiting the number of people who can access it.

As OpenAI has developed groundbreaking artificial intelligence that can automate writing and software coding and generate realistic images from scratch, some employees at the company have discussed the potential harms of such AI—especially if it were to learn how to improve itself.

The company this summer established a team, co-led by Sutskever and “alignment” researcher Jan Leike, to work on technical methods for preventing its AI systems from going rogue. In a blog post, OpenAI said it would dedicate a fifth of its computing resources to addressing threats from “superintelligence,” which Sutskever and Leike wrote “could lead to the disempowerment of humanity or even human extinction.”

Sutskever has frequently spoken about the dangers of AI. In the 2019 documentary “iHuman,” the researcher mused, “The future is going to be good for the AIs regardless; it would be nice if it would be good for humans as well.” More recently, in a July interview, Sutskever said he was most concerned about the dangers of powerful AGI years into the future, but he hoped for a future in which AI could help humans police the technology itself, perhaps an example of this “superintelligence alignment.”

Sutskever likely has many followers within the company. Former employees describe him as a well-respected, hands-on leader who is crucial to guiding the startup’s frontier tech.

The blog post announcing Altman’s firing on Friday concluded by stating that the board’s responsibility was to preserve the company’s charter, which says the board must avoid “enabling AI or AGI that harm humanity or unduly concentrate power” and must do “the research required to make AGI safe.” It also said the board’s “primary fiduciary duty is to humanity.”