OpenAI Establishes Safety Committee Amid High-Profile Departures And AI Concerns

In a bid to quell concerns about the responsible development of artificial intelligence, OpenAI has formed a Safety and Security Committee within its board of directors. The panel will oversee the development of the company's AI models, including the successor to GPT-4, as the organization pursues artificial general intelligence (AGI).

Led by CEO Sam Altman and board members Bret Taylor, Adam D’Angelo and Nicole Seligman, the committee will evaluate existing safety practices and make recommendations to the board within 90 days. OpenAI may later publicly share adopted recommendations, consistent with safety and security considerations.

The establishment of the safety committee comes on the heels of high-profile departures from the company, including Chief Scientist Ilya Sutskever and Jan Leike, who led the Superalignment team focused on long-term AI risks. The Superalignment team was disbanded earlier this month, with some members reassigned to other groups.

As OpenAI begins training its next-generation AI model, described as bringing its systems to the "next level of capabilities on our path to AGI," the company faces both anticipation and scrutiny. OpenAI's establishment of the safety committee reflects its acknowledgment of the importance of addressing safety issues as AI technologies advance.

Although OpenAI takes pride in creating cutting-edge models that excel in both performance and safety, the company is encouraging a broader discussion on AI safety at this pivotal stage in the technology's evolution.