OpenAI has released a safety guide called the "Preparedness Framework" on its official website, describing "a process for tracking, evaluating, forecasting, and protecting against catastrophic risks posed by increasingly powerful models."
OpenAI explains that research into the risks of frontier artificial intelligence falls far short of what is needed. To close this gap and systematize its safety thinking, the company is adopting an initial beta version of the Preparedness Framework.
OpenAI announced in a press release that a "Preparedness team" will work to ensure the safety of frontier artificial intelligence models. The team will continually evaluate AI systems across four risk categories: potential cybersecurity issues and chemical, nuclear, and biological threats. It will also work to mitigate any harm the technology may cause.
Specifically, OpenAI is monitoring what it calls "catastrophic" risks, defined in the guide as "any risk that could result in hundreds of billions of dollars in economic losses or cause serious injury or even death to many people."
Notably, under the safety guidelines, company leadership can decide whether to release new AI models based on these evaluations, but the board of directors has the power to overturn that decision.