OpenAI announced in a blog post that its Safety and Security Committee will become an independent Board oversight committee. The decision follows the committee's 90-day review of the company's processes and safeguards.
OpenAI's independent committee can block the release of AI models
The group, chaired by Zico Kolter and including Adam D'Angelo, Paul Nakasone, and Nicole Seligman, will be responsible for overseeing the release of AI models developed by OpenAI. Company leadership will brief the committee on safety evaluations for major model releases, and the committee will have the authority to delay a release until safety and security concerns are addressed. In addition, OpenAI's Board of Directors will receive periodic briefings on safety and security matters.
The committee's independence and the comparison with Meta's Oversight Board
Because the committee's members also serve on OpenAI's broader Board of Directors, the company has not made clear how independent the committee actually is, or how that independence is structured.
The approach is somewhat similar to Meta's Oversight Board, which reviews some of the company's content decisions and can issue rulings that are binding on Meta. Unlike OpenAI's arrangement, however, none of the Oversight Board's members sit on Meta's board of directors.
Opportunities for collaboration and transparency in AI
Following the Safety and Security Committee's review, OpenAI also highlighted the potential for greater industry collaboration and information sharing to advance security across the AI industry.
The company says it is committed to finding more ways to share and explain its safety work, and to creating more opportunities for independent testing of its systems. This increased transparency could help build public trust in the technology and promote informed public debate about the potential risks and benefits of these increasingly advanced systems.