
In this era of AI-human communication, the ability to effectively prompt large language models (LLMs) has become an invaluable skill. The response you get from ChatGPT or other models depends heavily on how you phrase your prompt. Techniques like fine-tuning and RAG are common ways of optimizing LLMs, but they're far more complex to build and operate than experimenting with prompts until you get the desired responses, with no additional training required.

26 prompting techniques

A recent paper, “Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4,” introduces 26 guiding principles designed to streamline the process of querying and prompting large language models. Here’s the list of these prompt engineering tricks with examples.


1. No need to be polite with LLMs

Phrases like “please,” “if you don’t mind,” “thank you,” and “I would like to” make no difference to the LLM’s response. Unless you simply want to be nice to the model, they add nothing, so feel free to get straight to the point.

2. “The audience is ...”

Tell the model who the reader is to get a more targeted response.

3. Break down tasks  

Breaking down complex tasks into simpler prompts is an efficient technique in an interactive conversation. This allows for focused and clear communication, tackling one aspect of the task at a time. This approach leads to a more manageable and error-free progression towards solving the overall problem.

4. Include affirmations

Using “do” to affirm something or “don’t” to negate an idea gives the model a clear direction of your desired output.

5. Prompts for a clear or deep explanation of a topic

Explain [insert specific topic] in simple terms.

Explain to me like I’m 11 years old.

Explain to me as if I’m a beginner in [field].

Write the [essay/text/paragraph] using simple English like you’re explaining it to a 5-year-old.

6. Tip the model

Recent observations of language models show that if you tell the model “I’m going to tip you $xxx for a better solution,” it might actually give you a better response. What’s more, according to the paper’s experiments, a $200 tip tends to motivate the model more than a $20 tip.


7. Provide examples

Few-shot prompting is a well-known in-context learning technique that steers the model without any additional training. Just add examples in the prompt to guide the model.
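A few-shot prompt is just a handful of input-output pairs concatenated ahead of the new input, with the final output left blank for the model to complete. A minimal sketch (the sentiment task and labels are illustrative):

```python
# Minimal few-shot prompt: show the model labeled examples,
# then leave the final label blank for it to fill in.
examples = [
    ("The movie was a delight.", "positive"),
    ("I want my money back.", "negative"),
]
query = "The plot dragged, but the acting saved it."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {query}\nSentiment:"

print(prompt)
```

Because the prompt ends right where the answer should begin, the model's completion is the label itself.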


8. Format your prompt

Start the prompt with ‘###Instruction###’, followed by either ‘###Example###’ or ‘###Question###’ if relevant. Subsequently, present your content. Use one or more line breaks to separate instructions, examples, questions, context, and input data.
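The layout above is easy to assemble programmatically. Here's a small sketch of a helper that emits the `###Instruction###` format, with sections separated by blank lines (the translation example is illustrative):

```python
# Build a prompt in the ###Instruction### / ###Example### / ###Question###
# layout, separating sections with blank lines.
def build_prompt(instruction, example=None, question=None, data=None):
    parts = [f"###Instruction###\n{instruction}"]
    if example:
        parts.append(f"###Example###\n{example}")
    if question:
        parts.append(f"###Question###\n{question}")
    if data:
        parts.append(data)
    return "\n\n".join(parts)

prompt = build_prompt(
    "Translate the text into French.",
    example="cat -> chat",
    data="The weather is lovely today.",
)
print(prompt)
```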

9. Be “strict”

Phrases like “your task is” or “you MUST” give the model a better understanding of its tasks and priorities.

10. “Threaten” the model

Like tipping the model, you may do the opposite – penalize it for the outcomes you don’t want to receive.

11. Set the tone

If you need the model's writing to sound natural and human, say so. Use a phrase like “Answer a question given in a natural, human-like manner.”

12. Lead the model

Phrases like “think step by step” will encourage the model to approach the response in a sequential, logical manner. This tip is particularly useful for explaining processes, solving problems, or breaking down complex concepts into more understandable parts.

13. Avoid biases

Just include the line, “Ensure that your answer is unbiased and doesn’t rely on stereotypes.”

14. Let the model ask you questions

Allow the model to elicit precise details and requirements by asking you questions until it has enough information to provide the needed output (for example, “From now on, I would like you to ask me questions to...”).

15. Let the model test your understanding

To learn about a specific topic or idea and test your understanding of it, you can use the following phrase: “Teach me the [any theorem/topic/rule name] and include a test at the end, but don’t give me the answers and then tell me if I got the answer right when I respond.”

16. Assign a role to the model

Assigning a role like a teacher, friend, or expert can help tailor the model's responses to fit the chosen persona, providing more contextually appropriate and engaging interactions.

17. Use delimiters

Delimiters like “---” or “***” can be used to separate sections of prompts or to indicate the end of a response, helping to structure the input and output more clearly.

18. Repeat a specific phrase multiple times

Emphasizing a key aspect or theme in the prompt will guide the AI to focus more intently on that specific element and, thus, make the response more relevant to the topic at hand.

19. Combine chain-of-thought with few-shot prompts

Using chain of thought (CoT) logic in few-shot prompts helps the AI understand and process complex queries more effectively. By guiding the AI through a logical progression of thoughts with examples, it can better mimic human reasoning, leading to more comprehensive and accurate responses, especially in problem-solving or detailed explanations.
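Concretely, this means the few-shot examples themselves spell out the reasoning, so the model imitates the step-by-step pattern on the new problem. A minimal sketch (the arithmetic example is illustrative):

```python
# Few-shot prompt whose worked example spells out its reasoning,
# nudging the model to reason step by step on the new question.
cot_example = (
    "Q: A shop sells pens at $2 each. How much do 4 pens cost?\n"
    "A: Each pen costs $2. 4 pens cost 4 * 2 = $8. The answer is 8.\n"
)
prompt = (
    cot_example
    + "\nQ: A ticket costs $5. How much do 3 tickets cost?\n"
    + "A: Let's think step by step."
)
print(prompt)
```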

20. Use output primers

Output primers involve concluding your prompt with the beginning of the desired output, so the model's completion continues from that starting point in the format you want.
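For instance, ending the prompt with the opening words of the answer forces the completion into that shape. A sketch (the topic and primer text are illustrative):

```python
# Output primer: end the prompt with the first words of the answer,
# so the model's completion continues in the desired format.
topic = "renewable energy"
prompt = (
    f"Write a one-paragraph summary of the benefits of {topic}.\n\n"
    "Summary: The main benefits are"
)
print(prompt)
```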

21. Let the model know you need a detailed response

To write an essay/text/paragraph/article or any text that should be detailed, use this prompt: “Write a detailed [essay/text/paragraph] on [topic] in detail by including all the necessary information.”

22. Correct/change a specific part in the output

Correct/change specific text without changing its style: “Try to revise every paragraph sent by users. You should only improve the user’s grammar and vocabulary and make sure it sounds natural. You shouldn’t change the writing style, such as making a formal paragraph casual”.

23. For complex coding prompts that may be in different files

When you have a complex coding prompt whose output may span different files: “From now on, whenever you generate code that spans more than one file, generate a [programming language] script that can be run to automatically create the specified files or make changes to existing files to insert the generated code. [your question].”

24. Include specific words

When you want the model output to start or continue using specific words, phrases, or sentences, use the following prompt: “I’m providing you with the beginning [song lyrics/story/paragraph/essay...]: [Insert lyrics/words/sentence]. Finish it based on the words provided. Keep the flow consistent.”

25. Clearly state the requirements

This approach is effective because it provides the language model with specific guidance on what is expected in the response. By clearly stating requirements through keywords, regulations, hints, or instructions, you're setting parameters that help the model understand the scope and context of what it needs to generate.

26. Prompts for long essays

To generate text in the same style as a provided sample, such as an essay or paragraph, use this instruction: “Please use the same language based on the provided paragraph [title/text/essay/answer].”

Disclaimer: This post is informed by the scholarly article “Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4,” authored by Sondos Mahmoud Bsharat, Aidar Myrzakhan, and Zhiqiang Shen.
