Employees of OpenAI and Google DeepMind jointly voiced concerns that advanced AI poses enormous risks and that regulation is urgently needed. In response, OpenAI issued a statement today emphasizing its commitment to providing powerful and safe artificial intelligence systems.
OpenAI said that, given the importance of AI technology, it agrees with the substance of the open letter, and that serious discussion is crucial to better promoting the development of AI. The company added that it will continue to engage with governments, civil society, and other communities around the world to jointly foster a healthy AI environment.
OpenAI pointed to measures such as its anonymous integrity hotline and its Safety and Security Committee, which includes board members and company safety leaders, as effective means of overseeing AI.
OpenAI stated that it will not release new AI technology until the necessary safeguards are in place, and it reiterated its support for government regulation and its participation in voluntary commitments on artificial intelligence safety.
Regarding concerns about retaliation, a spokesperson confirmed that the company has terminated non-disparagement agreements with all former employees and removed such clauses from its standard resignation documents. (IT Home)
Earlier today, 13 former and current employees of OpenAI (ChatGPT), Anthropic (Claude), and DeepMind (Google), along with experts in the field of artificial intelligence, launched a petition for the "Right to Warn About AI". The petition advocates for stronger whistleblower protections to increase public awareness of the risks posed by advanced artificial intelligence systems. Former OpenAI employee William Saunders commented that when dealing with potentially dangerous new technologies, there should be ways to share information about risks with independent experts, governments, and the public.