Sam Altman steps down from safety committee as OpenAI faces scrutiny

Former researchers have accused Altman of prioritizing OpenAI’s corporate interests over genuine AI regulation.

By Storyboard18 | September 17, 2024, 9:59 am
OpenAI has announced that its CEO, Sam Altman, is stepping down from the Safety and Security Committee, an internal group established to oversee critical safety decisions for the company's projects. (Image source: Moneycontrol)

OpenAI has announced that its CEO, Sam Altman, is stepping down from the Safety and Security Committee, an internal group established to oversee critical safety decisions for the company’s projects. The committee will now function as an independent board oversight group, chaired by Carnegie Mellon professor Zico Kolter. Other members include Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony EVP Nicole Seligman, all of whom are also on OpenAI’s board of directors, as per reports.

The Safety and Security Committee, which conducted a safety review of OpenAI’s latest AI model, o1, will continue to function independently following Altman’s departure. The committee will receive regular updates from OpenAI’s safety and security teams and retains the authority to delay model releases until safety concerns are fully addressed, as per reports.

The committee will also continue to receive regular updates on technical assessments of current and future AI models, as well as ongoing post-release monitoring. The company is additionally implementing a new safety and security framework with specific success criteria for launching models.

Altman’s decision to step down from the committee follows concerns raised by five U.S. senators in a letter to him this summer. In addition, many OpenAI staff members who previously focused on AI safety have left the company, and former researchers have accused Altman of prioritizing OpenAI’s corporate interests over genuine AI regulation.

In an op-ed for The Economist, former OpenAI board members Helen Toner and Tasha McCauley expressed concerns about the company’s ability to hold itself accountable, citing the potential influence of profit incentives. With rumors of a new funding round that could value OpenAI at over $150 billion, the company’s profit motives may be further amplified. To secure this funding, OpenAI might abandon its hybrid nonprofit structure, potentially compromising its commitment to developing artificial general intelligence that benefits all of humanity.
