Engineering a Good Prompt

In the previous session, you learned that prompts are essential for working with LLMs. So, what makes a good prompt?

In the upcoming video, Kshitij will answer this question by listing the essential features of a good prompt.

As mentioned in the video, clarity of instructions is crucial. Adding sufficient details to the prompt gives the LLM the necessary context to generate the desired output. Using concise and unambiguous language helps guide the model’s understanding of the user’s intentions.

As shown in the video, a good prompt ideally contains the following elements. Together, they serve as instructions to the model and ensure that the output aligns with the intended format:

  • The task to be performed, stated explicitly, to get the required output.
  • The role the language model should assume while performing the task.
  • The context in which the task must be done.
  • Guidelines on how to perform the task.
  • The desired format of the ideal response.

The more specific and explicit your instructions are (the task, the role of the language model, the context, the guidelines, and the desired output), the better the model’s ability to meet your expectations. Using this framework ensures consistent performance from the model. A prompt written with clear syntax and a well-defined framework outperforms a poorly written prompt by a significant margin.
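As a rough sketch, the five components above can be assembled into a single prompt programmatically. The helper function and all the component text below are invented for illustration; they are not part of any particular library:

```python
def build_prompt(task, role, context, guidelines, output_format):
    """Assemble the five components of a good prompt into one string."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Guidelines: {guidelines}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    task="Summarise the customer review below in one sentence.",
    role="You are an experienced customer-support analyst.",
    context="The review was left on an e-commerce site for a pair of headphones.",
    guidelines="Be neutral in tone and do not add information absent from the review.",
    output_format="A single plain-text sentence.",
)
print(prompt)
```

Keeping the components in named parameters like this makes it easy to vary one element (say, the role) while holding the rest of the prompt constant when experimenting.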

One way to define a clear and coherent prompt that ensures a consistent model output is to use separators and delimiters to differentiate the key identifiers in the prompt. Delimiters are special tokens that help the model parse information more accurately. When delimiters are incorporated, the model gains a clearer picture of the desired structure and format of the output, resulting in enhanced coherence and readability of the generated text. Some commonly used delimiters in the prompt engineering community include:

  • Triple quotes: """
  • Hashtags: ####
  • Triple back-ticks: ```
  • Triple dashes: ---
  • Angle brackets: < >
  • XML tags: <tag></tag>

You can use these delimiters to separate the model instructions (such as role, context, guidelines and output format) from the actual prompt to the model. In a typical prompt, you can use delimiters in the following manner:

  • Separating input and output instructions.
  • Demarcating the system message from the rest of the instructions.
  • Separating conditional statements.

They can also be useful in demarcating different sections within the prompt, allowing for:

  • More precise instructions.
  • Greater flexibility and control.
  • A better model understanding of the prompt’s context and desired output.
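As a minimal sketch of this idea, the snippet below uses XML tags (one of the delimiters listed above) to separate user-supplied text from the instructions. The variable names and the translation task are invented for illustration:

```python
instructions = (
    "You are a translation assistant. Translate the text enclosed in "
    "<text></text> tags from English to French. Return only the translation."
)
user_text = "The weather is lovely today."

# The XML tags mark exactly where the user-supplied text begins and ends,
# so the model cannot confuse it with the instructions themselves.
prompt = f"{instructions}\n\n<text>{user_text}</text>"
print(prompt)
```

Beyond clarity, this separation also makes it harder for text inside the delimiters to be misread as instructions, which is useful whenever the input comes from an external source.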

In the next video, we will explore the next technique to engineer better prompts.

As mentioned in the video, the second technique to engineer better prompts is to condition the prompt for optimal performance. For instruction-tuned LLMs, like GPT-3.5 and ChatGPT, it is essential to be clear and direct about the output requirements by providing a role. When a prompt specifies a particular role or expertise, such as ‘You’re a mathematics expert with a high IQ’ or ‘You’re a leading marketing and advertising expert’, it provides additional context and constraints for the language model to generate more relevant and accurate responses.

Here are a couple of reasons why role-based prompts are effective:

  • Narrowing down the domain: LLMs understand the statistical nature of languages but lack inherent knowledge or understanding of specific domains or concepts. By specifying an area of expertise, the prompt restricts the model’s response generation to that particular domain, focussing its attention on just the relevant information and filtering out irrelevant or off-topic responses.
  • Guiding response generation: Role-based prompts can direct the model’s thinking and reasoning towards the desired area of expertise. For example, the prompt may imply that the response should involve mathematical concepts, formulas or problem-solving strategies.
  • Setting user expectations: Role-based prompts help establish clear expectations for users regarding the type of response they can anticipate. In the above example, the model will provide answers from the perspective of a mathematics expert, possibly with advanced knowledge and reasoning abilities.
  • Leveraging general knowledge: While language models lack specific expertise, they have access to vast amounts of general knowledge. Role-based prompts can trigger the model to recall and utilise relevant information from its training data, which may include mathematical principles, theorems or problem-solving techniques.

However, it is important to note that while role-based prompts can improve the relevance and quality of responses in specific domains, they do not make the model an actual expert. The model’s answers are still based on patterns it learned from text and therefore, may not always be accurate or reliable.
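In chat-style APIs (such as OpenAI's), a role-based prompt is typically placed in the system message, keeping it separate from the user's actual question. The message contents below are invented for illustration:

```python
# A chat-style message list: the system message carries the role, and the
# user message carries the actual question.
messages = [
    {"role": "system",
     "content": "You are a mathematics expert. Show your reasoning step by step."},
    {"role": "user",
     "content": "What is the sum of the first 100 positive integers?"},
]

# The system message conditions every subsequent model response,
# so the role does not need to be repeated in each user turn.
print(messages[0]["content"])
```
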

Now that you have understood what makes a good prompt, let’s delve into the term ‘prompt engineering’, which has recently gained popularity. Prompt engineering can be defined as the art and science of designing better prompts to get better, accurate and coherent outputs. You will learn the various techniques used in prompt engineering in the upcoming segments.

Before we board the prompt engineering train, here are a couple of pointers to keep in mind:

  • Understanding how LLMs function along with their architectures and training processes — as you have learnt in the previous module — will enable you to craft the ideal prompt for your task.
  • While adding more details to a prompt helps guide the language model better, you also need to be wary of the context window, which is essentially the maximum sequence length an LLM can process. Recall the OpenAI models you learnt about previously and the maximum tokens associated with them. Longer prompts that use up the entire context window take longer to compute and don’t necessarily boost performance.
  • When working with a particular domain, you must develop a deep understanding of the domain to create prompts that align with the intended outcomes and objectives.
  • You should experiment with various parameters and configurations to refine prompts and optimise the model’s performance for specific tasks or domains.
  • The model’s output must be iteratively evaluated, and the prompt rephrased, to enhance the quality and relevance of the response.
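To make the context-window pointer concrete, here is a rough sketch of a budget check before sending a prompt. The four-characters-per-token figure is only a common rule of thumb for English text, and the window and output-reservation sizes are invented for illustration; for exact counts, use the model's own tokeniser:

```python
def rough_token_count(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, context_window: int = 4096,
                 reserved_for_output: int = 512) -> bool:
    """Check whether a prompt leaves room in the window for the response."""
    return rough_token_count(prompt) <= context_window - reserved_for_output

prompt = "Summarise the following article. " * 50
print(fits_context(prompt))
```

Reserving part of the window for the output matters because the model's response consumes tokens from the same budget as the prompt.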

Additional Readings

  • This resource provides more information about prompting basics: Basics of Prompting.
  • OpenAI has curated a list of best practices while using GPT in the following article: GPT: Best practices | OpenAI.
