OpenAI has outlined a plan to prevent the worst-case scenarios that could arise from the highly potent artificial intelligence technology it’s currently developing.

The company behind the widely known ChatGPT chatbot introduced a 27-page document called the “Preparedness Framework” this week, detailing how it tracks, evaluates, and safeguards against the catastrophic risks posed by advanced AI models.

These risks range from AI models being used to cause massive cybersecurity disruptions to their helping create biological, chemical, or nuclear weapons.

As part of the checks and balances in the new framework, OpenAI said that while company leadership decides whether to release new AI models, the board of directors has the ultimate say, including the “right to reverse decisions” made by the leadership team.

However, before any such decision ever reaches the board, the company says a potentially risky AI model must first clear numerous safety checks.

A dedicated “preparedness” team will lead most of the multi-pronged efforts to monitor and mitigate potential risks from advanced AI models at OpenAI.

Massachusetts Institute of Technology professor Aleksander Madry, currently on leave from the university, is leading the startup’s preparedness team.

He’ll oversee a group of researchers tasked with evaluating and monitoring potential risks and synthesizing them into scorecards that categorize those risks as “low,” “medium,” “high,” or “critical.”

The preparedness framework stipulates that “only models with a post-mitigation score of ‘medium’ or below can be deployed,” and that “only models with a post-mitigation score of ‘high’ or below can be developed further.”
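To make that gating rule concrete, the short Python sketch below expresses it as a simple threshold check over the four scorecard levels. It is purely an illustration of the policy as stated in the framework, not code from OpenAI; the names and structure are invented for this example.

# Illustrative sketch only -- not OpenAI's implementation.
# Encodes the stated rule: deploy only at post-mitigation risk "medium" or below,
# and continue development only at "high" or below.

RISK_LEVELS = ["low", "medium", "high", "critical"]  # ordered from least to most severe

def can_deploy(post_mitigation_score: str) -> bool:
    """Deployment is allowed only if the post-mitigation score is 'medium' or below."""
    return RISK_LEVELS.index(post_mitigation_score) <= RISK_LEVELS.index("medium")

def can_develop_further(post_mitigation_score: str) -> bool:
    """Further development is allowed only if the post-mitigation score is 'high' or below."""
    return RISK_LEVELS.index(post_mitigation_score) <= RISK_LEVELS.index("high")

if __name__ == "__main__":
    for score in RISK_LEVELS:
        print(score, "-> deploy:", can_deploy(score), "| develop further:", can_develop_further(score))

Under this rule, a model scored “critical” after mitigations could neither be deployed nor developed further, while one scored “high” could continue in development but not be released.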

The company notes that this document is still in the “beta” stage and is expected to be regularly updated based on feedback.

The framework also sheds light on the unusual governance structure of the powerful AI startup, which overhauled its board after last month’s corporate turmoil, in which CEO Sam Altman was fired and then reinstated within just five days.

The highly publicized corporate drama raised questions about Altman’s power within the company he co-founded and about the limits of the board’s authority over him and his leadership team.

The current board, which OpenAI describes as “initial” and still being built out, consists of three wealthy white men tasked with the significant responsibility of ensuring that OpenAI’s cutting-edge technology fulfills its mission to benefit all of humanity.

The lack of diversity in this interim board has faced widespread criticism. Some critics have expressed concerns that relying on self-regulation within companies isn’t sufficient, urging policymakers to do more to ensure the safe development and deployment of AI tools.

These latest safety measures from OpenAI come as the tech industry, and society more broadly, has spent the past year debating whether advanced AI could pose apocalyptic risks.

Hundreds of top AI scientists and researchers, including OpenAI’s Altman and Google DeepMind CEO Demis Hassabis, signed a one-sentence open letter earlier this year stating that mitigating the “risk of extinction from AI” should be a global priority alongside other risks such as pandemics and nuclear war.

While the statement fueled widespread public concern, some industry observers later accused companies of using distant doomsday scenarios to divert attention from the current risks posed by AI tools.