Over 130 scientists have signed an agreement committing to the responsible development of artificial intelligence in the field of biology. The risk they point to is that the potential of the technology could be misused to create biological weapons.
Rules and controls are needed
The researchers highlight the importance of artificial intelligence in research, particularly in the design of new proteins. At present, the benefits of the technology far outweigh the possible risks. However, given the pace of progress, a new, proactive approach to risk management may be needed.
Designed proteins pose no risk until they are physically produced in a laboratory through DNA synthesis. The researchers therefore propose creating a list of suppliers that adhere to specific security standards. The main objective is to detect any potentially dangerous biomolecule before it is synthesized.
In testimony before the United States Congress, Anthropic co-founder and CEO Dario Amodei stated that generative artificial intelligence could enable the development of new biological weapons within two to three years. This threat was also underlined by another co-founder, Jack Clark, at a meeting of the United Nations.
Current second-generation chatbots are no more useful for this purpose than a search engine. But researchers are beginning to design AI systems to accelerate the development of medicines and vaccines, and these could be used to create biological weapons if the necessary security measures are not implemented. It remains unclear, however, how far such measures should go. A first step would be to control access to AI models. Unfortunately, profit often takes precedence over transparency.