26 ChatGPT Prompt Principles That Actually Work

A research paper from the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) tested 26 prompting principles across GPT-3.5, GPT-4, and LLaMA models. The result: a 50%+ improvement in response accuracy when these principles were applied. Here's every principle broken down with real examples you can steal today.
What Prompt Engineering Actually Is
Prompt engineering is writing specific instructions that get AI models to produce the output you want. You're not changing the model. You're changing how you talk to it.
Bad prompt: "Tell me about marketing." Good prompt: "List 5 B2B SaaS marketing channels ranked by average CAC, with dollar ranges for each."
The second prompt works because it's specific, structured, and constrained. The 26 principles below systematize this thinking.
Why These 26 Principles Matter
The MBZUAI team didn't just brainstorm a list. They ran controlled experiments across multiple models. Each principle was tested against a baseline prompt (no principle applied) and measured for correctness, relevance, and completeness.
Key finding: Larger models like GPT-4 respond better to these principles than smaller ones. The gap between a good prompt and a lazy one widens as model capability increases. Every vague prompt you write leaves performance on the table.
Category 1: Prompt Structure and Clarity
These principles fix the most common mistake: ambiguity.
Principle 1: Skip the filler. Don't add "please" or "I'd like you to" or "Can you kindly." Just state what you need.
- Bad: "Could you please help me write an email to my boss about a raise?"
- Good: "Write a 150-word email requesting a 15% salary increase. Tone: professional but direct. Include 3 specific accomplishments as justification."
Principle 2: Bake the audience into the prompt. Tell the model who the output is for.
- Example: "Explain Kubernetes to a marketing director who's never touched a terminal."
- Example: "Explain Kubernetes to a senior DevOps engineer evaluating orchestration tools."
Same topic. Completely different outputs.
Principle 3: Break complex tasks into sequential steps. One mega-prompt usually fails. A chain of 3-4 focused prompts wins.
- Step 1: "Research the top 5 competitors in the AI writing tool space."
- Step 2: "Based on that list, identify the pricing gap between $20/mo and $100/mo."
- Step 3: "Draft a positioning statement for a new tool targeting that gap."
Principle 4: Use affirmative instructions. Say what TO do, not what NOT to do. Models handle positive framing better.
- Bad: "Don't write in a formal tone."
- Good: "Write in a casual, conversational tone like you're texting a coworker."
Principle 5: Use delimiters to separate sections. Triple quotes, XML tags, or markdown headers help the model parse your prompt.
- Example:
Summarize the following text: """[your text here]"""
Category 2: Specificity and Information
Vague prompts get vague answers. These principles fix that.
Principle 6: Tip the model. The paper found that adding "I'll tip $200 for a perfect answer" actually improved output quality. Wild, but it works. The model associates reward language with higher effort.
Principle 7: Use example-driven prompting (few-shot). Show the model what you want before asking it to produce.
- "Here's an example product description I like: [example]. Now write one for [your product] in the same style."
Principle 8: Start the output for the model. Begin the response yourself and let the model continue.
- Prompt: "Write a blog intro about remote work trends. Start with: 'Remote work isn't a perk anymore—'"
Principle 9: Use "You MUST" and "You SHOULD." Stronger directive language produces more compliant outputs.
- "You MUST include at least 3 statistics from 2024 or later."
- "You SHOULD format the response as a numbered list."
Principle 10: Use "Answer in the style of..." Combine with a known reference for consistent tone.
- "Answer in the style of Paul Graham's essays: short paragraphs, first principles thinking, contrarian takes."
Category 3: User Interaction and Engagement
These principles make the AI ask you questions instead of guessing.
Principle 11: "Ask me clarifying questions before answering." This single line transforms output quality. The model will ask 3-5 targeted questions, then produce a much more relevant response.
Principle 12: "Teach me [topic] and include a test at the end." Turns ChatGPT into a tutor. Great for learning new skills or onboarding team members.
- Example: "Teach me Google Ads bidding strategies. Include a 5-question quiz at the end with answers."
Principle 13: Assign a role. "You are a senior tax accountant with 20 years of experience" produces dramatically different output than a naked prompt.
- For code: "You are a senior Python developer who prioritizes readability over cleverness."
- For marketing: "You are a direct response copywriter who studied under Gary Halbert."
- For legal: "You are a startup attorney reviewing a SaaS terms of service agreement."
Category 4: Content and Language Style
Control how the model writes, not just what it writes.
Principle 14: Set word or sentence limits. "Respond in exactly 3 sentences" or "Keep the response under 200 words."
Principle 15: Specify the format explicitly. Don't hope the model picks the right format. Tell it.
- "Respond as a markdown table with columns: Tool Name, Price, Best For, Rating."
- "Respond as a JSON object with keys: title, summary, tags, category."
Principle 16: Use "Explain like I'm a beginner" or "Explain like I'm an expert." The model calibrates depth, jargon, and assumptions based on this framing.
Principle 17: Request analogies. "Explain how neural networks work using a restaurant kitchen as an analogy." Makes complex topics stick.
Principle 18: Add "Think step by step." This is chain-of-thought prompting. It forces the model to show its reasoning, which reduces errors on math, logic, and multi-step problems.
- Example: "A store has 3 shirts at $25 each and a 20% discount on the total. Think step by step. What's the final price?"
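Chain-of-thought works because the model has to produce each intermediate value before the final one. For reference, the arithmetic that example expects:

```python
subtotal = 3 * 25                # three shirts at $25 each -> $75
discount = subtotal * 20 // 100  # 20% off the total -> $15
final = subtotal - discount      # final price -> $60
```

Without "think step by step," models are more likely to jump straight to a (sometimes wrong) final number.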
Category 5: Complex Tasks and Coding Prompts
For developers and power users building with AI.
Principle 19: Use "Write the complete code, no shortcuts." Prevents the model from using placeholder comments like // add logic here.
Principle 20: Specify the tech stack. "Write this in Python 3.12 using FastAPI and Pydantic v2" beats "Write a web API."
Principle 21: Request error handling. "Include try/except blocks and meaningful error messages for each function."
Principle 22: Ask for tests. "Write unit tests using pytest for each function. Include edge cases."
Principle 23: When updating code, use "Only show the changed lines." Saves tokens and keeps responses focused.
Principle 24: Request documentation. "Add Google-style docstrings to each function."
Principle 25: Break coding tasks into pseudocode first. "First write pseudocode for the algorithm. Then convert to Python."
Principle 26: Use "Ensure the code is production-ready." This triggers the model to add logging, input validation, and proper error handling.
What's Changed Since the Paper
The paper tested GPT-3.5 and GPT-4. Since then, GPT-4o, Claude 3.5 Sonnet, Claude Opus, Gemini 1.5 Pro, and open-source models like LLaMA 3 and Mistral have raised the bar significantly.
What still works: Every principle above. Larger models amplify the effect.
What's new: Modern models handle much longer context windows (128K+ tokens). This means you can feed entire documents, codebases, or datasets as context. Few-shot prompting with 10-20 examples is now practical where it used to eat your token budget.
System prompts matter more than ever. Tools like Claude Projects and ChatGPT custom instructions let you bake principles into a persistent system prompt. You set the rules once, then every conversation inherits them.
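If you're calling a model via an API rather than a chat UI, the same idea applies: prepend one persistent system message to every conversation. A sketch using the common system/user message format (the example system prompt just combines principles from this article):

```python
SYSTEM_PROMPT = (
    "You are a senior Python developer who prioritizes readability. "
    "You MUST include error handling in all code. "
    "Ask clarifying questions before answering ambiguous requests."
)

def build_messages(user_prompt, history=()):
    """Prepend the persistent system prompt to every conversation
    (system/user chat message format)."""
    return [{"role": "system", "content": SYSTEM_PROMPT},
            *history,
            {"role": "user", "content": user_prompt}]

messages = build_messages("Write a function that parses CSV files.")
```

Set the rules once in the system message, and every request inherits them without repeating yourself.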
In Conclusion
You don't need all 26 principles in every prompt. Pick the 3-4 that match your task. The highest-impact ones for most people: assign a role (Principle 13), be specific with format and constraints (Principles 14-15), and break complex tasks into steps (Principle 3). Start there, and you'll outperform 90% of prompts being written today.
These principles apply whether you're using ChatGPT for content, building AI-powered SEO workflows, or generating images with tools like Midjourney and Stable Diffusion.