You have learnt about some of the parameters in the OpenAI API, such as temperature, max_tokens, stop, frequency_penalty and presence_penalty, while working with the Chat Completions API.
In this segment, we will cover some additional parameters available in the Chat Completions API and their use cases. You may refer to the notebook here for their implementations.
These additional parameters are given below:
- response_format: This parameter specifies the format in which the model should return its response. For example, the setting { "type": "json_object" } enables JSON mode, which guarantees that the message the model generates is valid JSON (see the first sketch after the note below).
- top_p: The top_p parameter controls nucleus sampling, an alternative to temperature in which the model considers only the tokens that make up the top_p probability mass. For a top_p value of 0.1, only the tokens making up the top 10% of the probability mass are considered (also shown in the first sketch below).
- seed: The seed parameter makes the model's output deterministic: repeated requests with the same seed value should return the same result. It accepts any integer, such as 123. Outputs stay consistent only if the other parameters, such as temperature and model, are also kept the same across requests (see the second sketch after the note below).
Note
The seed feature is in the beta phase and is currently available only in the gpt-4-1106-preview and gpt-3.5-turbo-1106 models.
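To make the first two parameters concrete, here is a minimal sketch using the openai Python package (v1 interface). It assumes an OPENAI_API_KEY environment variable is set; the model name is one of the two mentioned in the note above, and the prompt is purely illustrative. Note that JSON mode requires the word "JSON" to appear somewhere in the messages, or the API rejects the request.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    # JSON mode: the model is constrained to emit valid JSON.
    # The word "JSON" must appear in the messages, or the API raises an error.
    response_format={"type": "json_object"},
    # Nucleus sampling: consider only tokens in the top 10% probability mass.
    top_p=0.1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three primary colours."},
    ],
)

print(response.choices[0].message.content)  # e.g. {"colours": ["red", "blue", "yellow"]}
```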
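And a second sketch for the seed parameter, under the same assumptions (the suggest_name helper and its prompt are hypothetical): two identical requests with the same seed should return the same output. Keep in mind that determinism is best effort rather than guaranteed, which is one reason the feature is still in beta.

```python
from openai import OpenAI

client = OpenAI()

def suggest_name(seed: int) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        seed=seed,        # same seed ...
        temperature=0.7,  # ... and same parameters across requests
        messages=[{"role": "user", "content": "Suggest one name for a coffee shop."}],
    )
    return response.choices[0].message.content

first = suggest_name(seed=123)
second = suggest_name(seed=123)
print(first == second)  # usually True; determinism is best effort, not guaranteed
```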
- logprobs: This parameter returns the log probabilities of the output tokens. If it is set to true, the model returns the log probability of each output token in the content key of the output. The log probability of an output token indicates how likely that token is to occur in the sequence given the context. In simple terms, a logprob is log(p), where p is the probability of a token occurring at a specific position given the previous tokens in the context.
Higher log probabilities suggest a higher likelihood of the token occurring in that context. This allows users to gauge the model's confidence in its output or explore the alternative responses the model considered. A logprob can be any negative number or 0.0, where 0.0 corresponds to a probability of 100% (see the sketch after the note below).
Note
This option is available in all the models except the gpt-4-vision-preview model.
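As a sketch of how log probabilities can be read back (same assumptions as above, with an illustrative prompt), each output token in the response carries a logprob field, and math.exp recovers the underlying probability:

```python
import math

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    logprobs=True,
    max_tokens=5,
    messages=[{"role": "user", "content": "The capital of France is"}],
)

# choices[0].logprobs.content pairs each output token with its log probability.
for item in response.choices[0].logprobs.content:
    probability = math.exp(item.logprob)  # logprob is log(p), so exp recovers p
    print(f"{item.token!r}: logprob = {item.logprob:.4f}, p = {probability:.2%}")
```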
- top_logprobs: This is an integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true to use this parameter (see the sketch below).
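A final sketch, under the same assumptions: with logprobs set to true and top_logprobs set to 3, each position in the output also lists the three most likely alternative tokens the model considered.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    logprobs=True,
    top_logprobs=3,  # the 3 most likely tokens at each position
    max_tokens=1,
    messages=[{"role": "user", "content": "The sky is"}],
)

# Inspect the alternatives the model weighed for the first output token.
first_position = response.choices[0].logprobs.content[0]
for alternative in first_position.top_logprobs:
    print(f"{alternative.token!r}: {alternative.logprob:.4f}")
```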
You can check your understanding of these parameters with the questions below.