As technology advances, a new trend has emerged: using artificial intelligence models to stand in for human respondents. The practice is increasingly common, and it raises important questions about the credibility and ethics of the results obtained, particularly around whether such a model is capable of anything like genuine understanding or self-awareness.
Progress in large language models has sparked a new debate in scientific research. William Agnew and his team at Carnegie Mellon University analyzed the effectiveness, and the ethical implications, of using these models in place of human participants.
An interesting example comes from a study in which GPT-3 was tested as a stand-in for human respondents in qualitative research on video games. The study's authors had the model answer questions that would normally be put to people, such as whether video games can be considered a form of art. Interestingly, the AI-generated answers were perceived as more authentic and human than the ones offered by actual people. Does that mean AI can already come across as more human than humans in certain areas?
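The study's exact prompts and model settings are not described here, so the snippet below is only a minimal sketch of what posing interview questions to a model can look like, using the OpenAI Python client. The system prompt, question text, and model name are illustrative assumptions, not the setup the study's authors actually used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative interview questions; not the actual instrument from the study.
questions = [
    "Do you consider video games a form of art? Why or why not?",
    "Describe a game that left a lasting impression on you.",
]

for question in questions:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; the study reportedly used GPT-3
        temperature=0.7,
        messages=[
            {"role": "system",
             "content": "You are taking part in a qualitative study about video games. "
                        "Answer in the first person, as an ordinary player would."},
            {"role": "user", "content": question},
        ],
    )
    print(reply.choices[0].message.content)
```

Each generated answer would then be treated like an interview transcript and analyzed by the researchers alongside, or instead of, responses from real people.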
Agnew and his co-authors identified four main benefits of using AI in this way, which they examine in detail:
– speeding up research
– reducing costs
– minimizing risks to participants
– increasing the diversity of the data
This last point is especially important, because AI can simulate the experiences of people who are reluctant to take part in real studies.
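One way to read "increasing data diversity" in practice is persona conditioning: asking the model to answer as members of groups that are hard to recruit. The sketch below is a guess at what that might look like, not a method taken from Agnew's paper; the personas and prompt wording are invented for illustration.

```python
# Hypothetical personas; a real study would draw these from its own sampling frame.
personas = [
    "a 68-year-old retired teacher who rarely plays video games",
    "a 19-year-old competitive esports player",
    "a parent of young children with strong opinions about screen time",
]

def persona_messages(persona: str, question: str) -> list[dict]:
    """Build a chat prompt that asks the model to answer as one simulated participant."""
    return [
        {"role": "system",
         "content": f"You are {persona}. Answer the interviewer's question in the first person, "
                    "staying within experiences plausible for this background."},
        {"role": "user", "content": question},
    ]

# Each message list would then be sent with the same kind of API call as in the earlier
# sketch, producing one synthetic "participant" answer per persona.
for p in personas:
    print(persona_messages(p, "Do you consider video games a form of art?"))
```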
However, the researchers conclude that these methods can compromise core values of research with human subjects, such as faithful representation and depth of understanding of the individuals being studied. The main concern is that, no matter how convincing their output, AI models have no true understanding of the human experience. Or at least they will not until artificial general intelligence arrives.