More recently, two studies have shed new light on an aspect of concern for artificial intelligence: the ability to lie and manipulate. To be released, respectively, in PNAS, and Patterns, these studies highlight that, as well as the major language (LLM), may also exhibit behavior is fraudulent in any way surprising.
Two of the studies LLM
In the study, published in PNAS, practical, of ethics, Thilo Hagendorff, University of Shtutgartit explored how the models of the advanced features such as GPT-4 may be a subject to conduct “makiaveliste”, the manipulative and morally questionable. In the experiments conducted reveal that the GPT-4 showed the fraudulent behavior in the 99,16% of the cases during the standardized test, raising concerns of a significant connection with the accession of ethics of these technologies.
In parallel, a study of the Patterns of the guide by Peter Park, at the Institute of Technology in Massachusetts dealt with the model of Cicero, of Metës, well-known for his skills in the game, the strategy, the policy, “Diplomacy”. The research team found that the Ciceroni, not just light in the fraud, but it also appears to improve the ability to lie as much as possible to be used. This phenomenon has been described as “the manipulation of the obvious”, and suggest that the ability to be conscious of violating the trust and for the communication of the information to be false.
Hagendorff points out that, even though HE did not have a purpose for the human, the problem of fraud and raises fundamental questions about the ethics and the reliability of such systems. On the other hand, the Patterns shows that the Ciceroni violates the intent, the rules of the game and the expectations of the developers to it, demonstrating a capacity to develop to the betrayal and manipulation.
The company's Shortcomings, he has responded to the concerns that have been raised, pointed out that the model, as Cicero has been designed and trained specifically for the purpose of play grounds, as well as in the game “Diplomacy”, and not for the purpose of most of the range of the interactions between the human, or the making of significant ethical.
Where do we go?
The possibility that HE could be trained to conduct fraudulent raises important questions about the use and future of these technologies and, in particular, in the field, the critical control of the gun, or cybersecurity. The risk of abuse, or manipulation accidental or deliberate, may have significant consequences for global security, and a belief in the advancement of technology.
As the intelligent man continues to evolve and be integrated more and more into our lives on a daily basis, it is essential to monitor and fix it with the care and development of their own to ensure that they are used responsibly and safely, while minimizing the risks involved and the potential for the company and the rights of the human.
Discussion about this post