Prompt Techniques for GPT
Getting the best output from ChatGPT and GPT-4 depends on building the right prompts. Here are some prompt engineering techniques worth knowing.
Plan it out / take it step by step
Since GPT is an autoregressive model that can only predict the next word (based on the previous words), it can fail to reason through or solve a problem in one go.
When this happens, try to get better performance by simply asking it to take the problem step by step and to list the steps before attempting a solution.
Once the steps are listed, prompt the AI to solve the problem again and see if you get better results.
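Here is a minimal sketch of that two-stage flow, assuming the pre-1.0 `openai` Python library and GPT-4 API access; the `ask` helper and the example problem are illustrative placeholders, not an official recipe:

```python
import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

def ask(messages):
    # Hypothetical helper: send the conversation so far, return the reply text.
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return response["choices"][0]["message"]["content"]

problem = "A tank fills at 12 L/min and drains at 5 L/min. How long to reach 210 L?"

# Stage 1: ask only for the plan, not the answer.
messages = [{"role": "user",
             "content": "List the steps needed to solve the following "
                        "problem, but do not solve it yet:\n" + problem}]
steps = ask(messages)

# Stage 2: keep the listed steps in context and ask for the solution.
messages += [
    {"role": "assistant", "content": steps},
    {"role": "user", "content": "Now work through those steps one at a time "
                                "and solve the problem."},
]
print(ask(messages))
```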
Reflect on its output and improve the prompt iteratively
If the output contains errors, respond and point out the issues. There is no need to provide solutions; simply explain why you are not happy with the output and ask the AI to reflect on the original prompt and rewrite it.
Feed the newly generated prompt back into the AI and you should be presented with a better output.
New frameworks are being developed to do this automatically.
Interestingly, the LLM can often determine whether its output was incorrect without you having to point out the issues. Once its previous output is included in the context of the next prompt, it is very good at determining whether that content is correct or not.
The same is true for humans: it is easier for us to read some text (or source code) and critique it (provide feedback, find errors) than it is to write that same text from scratch.
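The sketch below shows one way such a reflect-and-rewrite loop might look with the pre-1.0 `openai` Python library; the critique wording, the two rounds, and the example task are all assumptions chosen for illustration:

```python
import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

def ask(messages):
    # Hypothetical helper: send the conversation so far, return the reply text.
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return response["choices"][0]["message"]["content"]

messages = [{"role": "user",
             "content": "Write a Python function that parses ISO 8601 dates."}]
answer = ask(messages)

for _ in range(2):  # two reflection rounds is an arbitrary choice
    # Put the previous answer back into context, then ask for a critique
    # and a rewrite in a single follow-up turn.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Reflect on your answer above: list any "
                                    "errors or weaknesses, then rewrite it."},
    ]
    answer = ask(messages)

print(answer)
```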
Very long detailed prompts / system messages
Go into a lot of detail and provide as much context as possible before asking your question. Include instructions on how to handle specific scenarios.
You can instruct GPT to take on a persona when answering (such as a lawyer, a scientist, etc.).
If you are using GPT-4, the best way to do this is to use the System Message parameter. This way the rules and context you provide are fed to the model with each new prompt.
Try explicitly asking the AI to:
only return information from a certain document
abide by a set of rules (for example a specific naming convention when coding)
be factual and helpful
act as an expert in a certain field
There is no guarantee that the model will abide by your system message, but it should help to steer the model accordingly.
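For instance, a system message combining a persona with explicit rules might look like this sketch (again assuming the pre-1.0 `openai` library; the persona and rules shown are examples, not requirements):

```python
import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

# Illustrative system message: a persona plus explicit rules to follow.
system_message = (
    "You are an expert Python developer. "
    "Follow PEP 8 naming conventions in all code you write. "
    "Only use information from the documentation the user provides. "
    "If you are unsure of an answer, say so instead of guessing."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_message},  # resent with each prompt
        {"role": "user", "content": "Write a function that deduplicates a list."},
    ],
)
print(response["choices"][0]["message"]["content"])
```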
Use Tools & Plugins
If all else fails (and you have access to ChatGPT plugins), try prompting the model to use specific plugins when it doesn't know the answer.
For example, prompt ChatGPT to search the web if it doesn't have an answer, or to use the WolframAlpha plugin if it needs to calculate something. This can reduce hallucinations.
You will see ChatGPT iterating through different calls to these plugins to retrieve external data when necessary, which helps to ground its responses.
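A prompt along these lines (purely illustrative wording) can nudge ChatGPT towards its plugins instead of guessing:

```
What is the population of Reykjavik as a share of Iceland's population?
If you do not know the current figures, search the web for them, and use
the WolframAlpha plugin for the percentage calculation rather than
estimating it yourself.
```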
Read my article on using Vector Databases and Word Embeddings to provide LLMs with external context to support your prompts.