Google DeepMind has announced the Frontier Safety Framework, a set of protocols for identifying future model capabilities that could cause severe harm and for putting the necessary mitigations in place. The framework will be updated as the technology evolves and is expected to be applied from early 2025.
Identifying and mitigating risks
Artificial intelligence models (generative and otherwise) are increasingly used across many sectors to solve complex problems, such as studying climate change and discovering new drugs. As capabilities grow, however, so do the risks. The technology could already be used for dangerous activities, including cyberattacks.
The Frontier Safety Framework was introduced to identify risks arising from the development of future AI models. It consists of three main components. The first is the classification of models against Critical Capability Levels, taking into account four domains: autonomy, biosecurity, cybersecurity, and machine learning research and development.
During model development and subsequent updates, Google DeepMind will run periodic evaluations to determine whether a model is approaching a Critical Capability Level. Based on the results, mitigations will be applied, chiefly security mitigations (preventing exfiltration of the model) and deployment mitigations (preventing abuse of critical capabilities).
If a model exceeds a Critical Capability Level before the applicable mitigations are in place, Google DeepMind will pause its development and deployment while additional mitigations are put in place. Full details can be read in the official document (PDF).
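The decision flow described above — evaluate capabilities per domain, prepare mitigations when a model approaches a Critical Capability Level, and pause if a level is exceeded before mitigations are ready — can be sketched as follows. This is only an illustrative outline: all names, scores, and thresholds are hypothetical and do not come from the official document.

```python
# Illustrative sketch of the Frontier Safety Framework decision flow.
# All identifiers, scores, and thresholds here are hypothetical.
from dataclasses import dataclass

# The four risk domains named in the framework.
DOMAINS = ("autonomy", "biosecurity", "cybersecurity", "ml_r_and_d")

@dataclass
class Evaluation:
    domain: str
    score: float             # hypothetical capability score in [0, 1]
    critical_level: float    # hypothetical Critical Capability Level threshold
    mitigations_ready: bool  # are security/deployment mitigations in place?

def decide(evals):
    """Return the action suggested by a set of capability evaluations."""
    for e in evals:
        if e.score >= e.critical_level and not e.mitigations_ready:
            # Critical level exceeded before mitigations exist: pause work.
            return f"pause development and deployment ({e.domain})"
        if e.score >= 0.8 * e.critical_level:
            # Approaching a critical level: prepare mitigations early.
            return f"prepare security and deployment mitigations ({e.domain})"
    return "continue development"

print(decide([Evaluation("cybersecurity", 0.9, 1.0, False)]))
```

The "approaching" margin (80% of the threshold here) is an arbitrary placeholder; the framework itself does not publish numeric scores, only the qualitative process of evaluating, mitigating, and pausing when necessary.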