In this session, we explored prompt engineering techniques commonly used to generate text completions from large language models (LLMs) such as ChatGPT.
In the first part of the session, we discussed the importance of prompts for generative models and examined the key features of a good prompt. A good prompt:
- Contains clear, concise instructions.
- Provides adequate detail about the task to be performed: the explicit role of the model, the context of the task, any guidelines, and the desired output format.
- Uses delimiters to demarcate the model's system instructions from the user input.
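The checklist above can be sketched as a small prompt-assembly helper. The support-assistant role, the summarization task, and the `###` delimiters here are illustrative assumptions, not a fixed convention:

```python
# A minimal sketch of a well-structured prompt. The role, task, and
# "###" delimiter choice are hypothetical examples.
def build_prompt(user_text: str) -> str:
    """Assemble a prompt with an explicit role, task context,
    guidelines, a desired output format, and delimited input."""
    return (
        "You are a customer-support assistant.\n"                  # explicit role
        "Task: summarize the customer message in one sentence.\n"  # task context
        "Guidelines: stay neutral and concise.\n"                  # guidelines
        "Output format: a single plain-text sentence.\n"           # desired output format
        f"Customer message:\n###\n{user_text}\n###"                # delimiters demarcate input
    )

prompt = build_prompt("My order arrived late and the box was damaged.")
print(prompt)
```

Keeping the instructions and the delimited input in clearly separated sections makes it harder for untrusted user text to be mistaken for instructions.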
We then explored examples of tasks where generative models excel, including developing marketing campaigns (generating ad copy for a specific target market) and generating code and applications, such as Flask applications and end-to-end ML models.
Lastly, we discussed various prompt engineering techniques such as:
- Few-shot prompting.
- Chain-of-thought prompting.
These techniques fall under the umbrella of in-context learning, which allows LLMs to produce highly specific outputs for tasks they have not been explicitly trained on, using only the instructions and examples supplied in the prompt.
Chain-of-thought prompting guides the model through a sequence of intermediate reasoning steps before it produces a final answer. Because this mirrors human step-by-step thought processes, it helps the model stay coherent and contextually appropriate on tasks that require arithmetic, logic, or other multi-step inference.
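A minimal zero-shot chain-of-thought sketch follows. The word problem and the "Let's think step by step" phrasing are illustrative; the resulting string could be sent to any LLM client:

```python
# Sketch of a zero-shot chain-of-thought prompt. The question is
# invented for illustration; the trailing cue asks the model to emit
# its intermediate reasoning before the final answer.
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step."  # elicits reasoning steps, then the answer
)
print(cot_prompt)
```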
Few-shot prompting conditions a language model by including a handful of worked examples of related text directly in the prompt, rather than fine-tuning it on a large dataset of labeled examples. The model infers the task format from these examples and generates output relevant to the task at hand, without requiring additional training data. We also looked at few-shot chain-of-thought prompting, which combines both paradigms: the worked examples in the prompt spell out their reasoning steps, and the model imitates that step-by-step style on the new input.
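Few-shot prompt assembly can be sketched as below. The sentiment-labeling task, the reviews, and the labels are invented for illustration:

```python
# Sketch of few-shot prompt assembly for a hypothetical sentiment task.
# The labeled demonstrations let the model infer the task format from
# context alone, with no weight updates or fine-tuning.
examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
]

def few_shot_prompt(examples, query):
    """Prepend labeled demonstrations, then leave the final label blank
    for the model to complete."""
    shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    return f"{shots}\nReview: {query}\nSentiment:"

prompt = few_shot_prompt(examples, "Shipping was fast and support was friendly.")
print(prompt)
```

For few-shot chain-of-thought, each demonstration would additionally include its reasoning before the label, so the model learns to reason out loud as well as to answer.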