GPT-4o, the model behind the new ChatGPT, was the subject of research carried out and published by OpenAI concerning its level of risk. The model was released in May 2024 and allows users to work multimodally, handling text, audio, images, and video. GPT-4o is also distinct from its predecessors in its ability to recognize and interpret a user's emotions and facial expressions in order to tailor its answers. It was precisely these features that led OpenAI to task a team of experts with assessing the new model's level of risk.
OpenAI: ChatGPT-4o poses a medium level of risk
As with its previous releases, before making the model available OpenAI commissioned a group of experts to assess its risks. A few months later, OpenAI published the results of that research, which classified GPT-4o as a medium-risk model.
Among the problems evaluated were the ways ChatGPT-4o could be misused: to create unauthorized voice clones, to reproduce audio clips without crediting their authors, or to generate erotic and violent content. In addition, one characteristic that sparked considerable debate was the model's persuasive ability in text which, although limited, proved quite effective.
The experts' analysis scored the model's results across four categories: cybersecurity, biological threats, persuasion, and model autonomy. Three of the categories received a low risk rating, while persuasion was classified as medium risk, which determined the model's overall classification.
OpenAI, for its part, is seeking to maintain a transparent posture toward users. This would become especially relevant if the bill proposed by the California legislature passes, which would oblige companies to test their AI models before making them available to users, in order to ensure greater safety.