In the previous segments, you worked with the Chat Completions API to generate single-turn completions, where the API generates text from a single input prompt. A multi-turn conversation, by contrast, consists of alternating user and assistant messages in the form of an interactive chat. This allows developers to build compelling applications that engage users and automate conversations.
In the next video, your SME will discuss OpenAI’s Chat Completions API for multi-turn conversations.
As discussed in the previous segments, the Chat Completions API requires the following three main roles:
- System: This message sets the overall behaviour of the assistant.
- User: The user role represents the end user using the chatbot.
- Assistant: The assistant role represents the chatbot.
The ChatCompletion code requires assigning the three roles/messages as inputs to the API – the system message, the user message and the assistant message. The ChatCompletion code syntax is:
Example
```python
# Multi-turn conversations using the Chat Completions API
import openai

chat_response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an AI tutor that assists school students with math homework problems."},
        {"role": "user", "content": "Help me solve the equation 3x - 9 = 21."},
        {"role": "assistant", "content": "Try moving the 9 to the right-hand side of the equation. What do you get?"},
        {"role": "user", "content": "3x = 12"}
    ]
)

# Extract only the text of the reply
print(chat_response.choices[0].message.content)
```
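The single call above can be extended into a genuine back-and-forth by resending the full `messages` history on every call, appending each new user turn and each assistant reply. The sketch below shows this bookkeeping; `call_model` and `take_turn` are hypothetical helper names, and `call_model` is stubbed so the pattern can be shown without an API key.

```python
# A minimal sketch of multi-turn context handling: the whole history is
# resent on every call, with each user turn and assistant reply appended.

def call_model(messages):
    # In a real application this would be the same call as in the example:
    #   response = openai.chat.completions.create(
    #       model="gpt-3.5-turbo", messages=messages)
    #   return response.choices[0].message.content
    return "Good - now divide both sides of the equation by 3."  # stub reply

def take_turn(messages, user_text):
    """Append the user's message, call the model, and record its reply."""
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

messages = [
    {"role": "system",
     "content": "You are an AI tutor that assists school students "
                "with math homework problems."},
]
take_turn(messages, "Help me solve the equation 3x - 9 = 21.")
take_turn(messages, "3x = 30")

# The history now alternates system, user, assistant, user, assistant,
# so the model sees the whole exchange on every call.
```

Because the model itself is stateless, this append-and-resend loop is what makes the conversation feel continuous to the end user.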
As discussed in the video above, the model performs reasonably well as a tutor, but defining its role through a system message has limitations: the model may lose context and generate responses that go against its pre-defined role.
The SME then introduced a technique called few-shot prompting. It involves adding a few contextual examples to the prompt, enabling the GPT-3.5 model to better understand the input and generate a tailored response. By leveraging this additional context, the model can produce smarter responses that generalise across many different conversations. The examples help the model understand the context of the task and generate more relevant responses, although the number of examples needed varies with the task at hand. In the next video, your SME will illustrate this technique with some examples. An overview of few-shot prompting can be seen in the image below.
In the upcoming video, your SME will demonstrate how few-shot prompt examples and a loop can be used to refine the responses of the ‘Math Tutor’ example shown above.
As shown in the video above, the few-shot examples provided did indeed help steer the conversation. Few-shot prompting is an inexpensive and easy method to help the model generate outputs in the desired format. This technique will be covered in depth in the upcoming modules.
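The few-shot idea can be sketched as follows: a few hand-written user/assistant example pairs are placed in the messages list ahead of the student's real question, so the model imitates the demonstrated tutoring style (a guiding hint rather than the full answer). The example pairs and variable names below are illustrative assumptions, not taken from the lesson itself.

```python
# A sketch of few-shot prompting for the Math Tutor: example turns come first,
# then the student's actual question. The example pairs are invented for
# illustration; the lesson's own examples may differ.

few_shot_examples = [
    {"role": "user", "content": "Help me solve 2x + 4 = 10."},
    {"role": "assistant",
     "content": "Start by moving the 4 to the right-hand side. "
                "What does that give you?"},
    {"role": "user", "content": "Solve 5x = 20 for me."},
    {"role": "assistant",
     "content": "What number would you divide both sides by "
                "to leave x on its own?"},
]

messages = (
    [{"role": "system",
      "content": "You are an AI tutor that assists school students "
                 "with math homework problems."}]
    + few_shot_examples
    + [{"role": "user", "content": "Help me solve the equation 3x - 9 = 21."}]
)

# The assembled list is passed to the same call as before:
# chat_response = openai.chat.completions.create(
#     model="gpt-3.5-turbo", messages=messages)
```

Because the examples are ordinary messages, no model changes are needed; the desired hint-first behaviour is conveyed purely through the prompt.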
In the upcoming segments, you will learn how to evaluate the pricing aspect of OpenAI APIs.