According to Anthropic, the startup behind the AI chatbot Claude, "the window for proactive risk prevention is closing fast." Artificial-intelligence systems have made remarkable progress, becoming exceptional in record time across areas such as cybersecurity, software engineering, and scientific research.
But every advance has a price: while these developments open up many opportunities, they may also attract the interest of less noble, even destructive, actors.
Incredible progress for AI
In a blog post, Anthropic highlights the progress AI models have made in coding and cyber offense in a single year. "On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024)," the company writes.
"Internally, our Red Team has found that current models can already assist with a broad range of cyber-offense tasks, and we expect the next generation of models, which will be able to plan over long, multi-step tasks, to be even more effective."
The blog post also notes that AI systems have improved their scientific understanding by nearly 18% from June to September of this year alone, as measured by the GPQA benchmark. OpenAI's o1 model achieved 77.3% on the hardest section of the test; human experts scored 81.2%.
Anthropic's case: regulate AI to avoid catastrophic risks
Anthropic warns governments about risks in cybersecurity and in the CBRN domain (chemical, biological, radiological, and nuclear). According to the company, AI systems have in fact already gained skills that could let them carry out activities related to cybercrime and the proliferation of weapons of mass destruction with greater efficiency.
The startup, founded by the Amodei siblings (both formerly of OpenAI), has chosen to be proactive and supports targeted AI regulation, based on its "Responsible Scaling Policy" (RSP). The goal of this policy is to identify, assess, and mitigate catastrophic risks in a proportionate manner, according to the capability thresholds the models reach.
According to Anthropic, for this regulation to succeed it should encourage transparency and the adoption of safety and security best practices, while remaining simple and focused.
The Responsible Scaling Policy as a prototype for regulation
Anthropic calls for a collective effort to guard against AI's excesses. It is a genuine call to action, as its blog post makes clear: the company invites policymakers, the AI industry, security advocates, civil society, and lawmakers to work together over the next 18 months to develop an effective regulatory framework inspired by the principles of the Responsible Scaling Policy.
It also warns of the disastrous consequences of inaction, or of an overly slow response, by public authorities. The risk would be ending up in the "worst of both worlds", that is, with "poorly-designed, knee-jerk regulation" that hampers innovation without preventing the risks.