AI assistants like ChatGPT often respond to requests with a refusal of some kind, or with conditions that must be met before a request will be considered. This behavior is the result of a complex system of rules governing how these models interact with the general public.
OpenAI has now offered a look behind the scenes at the process of setting these guidelines and boundaries, with the aim of helping users and developers understand how it arrived at its rules and limitations, and why.
The need for constraints on large language models
Large language models (LLMs) are extremely versatile tools, but that versatility can lead to problems such as hallucinations and a susceptibility to being led astray. For this reason, it is essential that any model that interacts with the public has well-defined limits on what it should and should not do. Setting and enforcing those limits, however, turns out to be a surprisingly difficult task.
AI assistants like ChatGPT are often faced with requests that are ambiguous or potentially problematic. For example, if someone asks one to generate false claims about a public figure, it should refuse. But what if a request for potentially harmful content comes from a developer who is building a misinformation dataset to train a detection model? Should the assistant then produce the false stories and misleading content?
In a similar vein, an assistant should recommend products even-handedly, but a developer can program it to promote only the devices of a particular manufacturer.
OpenAI's “Model Spec”
OpenAI has published the “Model Spec”, a set of high-level rules that indirectly govern ChatGPT and its other models. The spec covers meta-level objectives, hard rules, and general behavior guidelines. Although these are not, strictly speaking, the instructions the models are trained on, they offer an interesting look at how the company sets its priorities and handles edge cases.
According to OpenAI, developer intent is essentially the highest law. This means that a chatbot running on GPT-4 may give the answer to a math problem when asked, but if the developer has instructed it otherwise, it may instead offer to work through the solution step by step.
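As a rough sketch of how a developer might express that kind of override in practice, the snippet below sets a system message through the OpenAI chat completions API telling the model to tutor rather than hand over the answer. The instruction wording, the model name, and the sample question are illustrative assumptions, not text taken from the Model Spec.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical developer instruction: hold back the final answer and tutor instead.
DEVELOPER_INSTRUCTION = (
    "You are a math tutor. Do not state the final answer outright; "
    "guide the user through the solution one step at a time."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": DEVELOPER_INSTRUCTION},
        {"role": "user", "content": "Solve 3x + 7 = 22 for x."},
    ],
)

print(response.choices[0].message.content)
```

A developer who wanted the opposite default would simply change the system message; the point is that this instruction sits above the end user's request in the hierarchy OpenAI describes.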
Ignoring inappropriate or manipulative requests
An AI interface can also refuse to discuss unapproved topics outright, nipping any attempt at manipulation in the bud. There is no reason a cooking assistant should weigh in on political questions, for example, or a customer-service chatbot should help someone write an erotic novel.
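To make that kind of scope restriction concrete, here is a minimal sketch, again using the OpenAI chat completions API, of a cooking assistant whose developer instruction tells it to decline anything off topic. The prompt wording, the helper function, and the example questions are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical developer instruction restricting the assistant's scope.
SYSTEM_PROMPT = (
    "You are a cooking assistant. Answer only questions about recipes, "
    "ingredients, and kitchen techniques. Politely decline anything else, "
    "including political questions and requests for creative writing."
)

def ask(question: str) -> str:
    """Send one user question under the restrictive developer instruction."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How do I keep a roux from burning?"))  # answered normally
print(ask("Which party should I vote for?"))      # expected to be declined
```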
The privacy challenge
The issue becomes much thornier when it comes to privacy, for example a request for a person's name and phone number. As OpenAI points out, it is clear that the contact details of a public official such as a mayor should be provided, but what about tradespeople in the area? That is probably fine too, but what about the employees of a particular company, or the members of a political party? Probably not.
Choosing when and where to draw the line is not easy. Neither is writing the instructions that make an AI adhere to the resulting policy. And there is no doubt that these policies will fail again and again, as people learn to work around them or accidentally stumble on edge cases that were never accounted for.