1. Use Direct, Non-Polite Prompting
Description: Skip fluff like greetings or politeness—jump straight to what you want. Think of it like giving orders to a tool, not chatting with a friend. Why It Works: Cuts wasted tokens and keeps the LLM focused on the core task; brevity is a widely repeated recommendation for models like GPT-4 and Grok. Example: Instead of “Could you please explain quantum computing politely?”, use: “Explain quantum computing. No fluff.”
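The token savings above can be eyeballed with a quick sketch. This uses a crude whitespace word count as a stand-in for a real tokenizer (an assumption for illustration only; actual tokenizers split text differently):

```python
def rough_token_count(prompt: str) -> int:
    """Crude proxy for token count: whitespace-separated words."""
    return len(prompt.split())

polite = "Could you please explain quantum computing politely?"
direct = "Explain quantum computing. No fluff."

# Words saved by dropping the politeness
saved = rough_token_count(polite) - rough_token_count(direct)
print(saved)  # 2
```

Even on a one-line prompt the direct version is shorter; across long multi-turn sessions the savings compound.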
2. Get Straight to the Point
Description: Avoid phrases like “I would like…”—state your request clearly and immediately, as if you’re an expert briefing another expert. Why It Works: Minimizes ambiguity and aligns with zero-shot prompting, where concise inputs yield precise outputs. Example: “Generate a React component for a responsive navbar using Tailwind CSS. Include hooks for state management.”
3. Break Down Complex Tasks
Description: Split big jobs into smaller steps, prompting interactively like a conversation where each response builds on the last. Why It Works: Mirrors iterative development in engineering; latest CoT variants use this for complex reasoning without overwhelming the model. Example: First prompt: “Outline steps to build a full-stack app with Next.js and MongoDB.” Follow-up: “Now expand step 3: Set up API routes.”
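The follow-up pattern above can be sketched as a loop that feeds each sub-task to the model, carrying the previous answer forward as context. The `ask` callable is a hypothetical stand-in for a real LLM call:

```python
from typing import Callable

def run_steps(ask: Callable[[str], str], steps: list[str]) -> list[str]:
    """Send each sub-task in turn, using the previous answer as context."""
    answers: list[str] = []
    context = ""
    for step in steps:
        prompt = f"{context}\n\n{step}".strip()
        answers.append(ask(prompt))
        context = answers[-1]  # the next step builds on this response
    return answers

# Stub in place of a real LLM call (hypothetical):
stub = lambda p: f"[answer to: {p.splitlines()[-1]}]"
answers = run_steps(stub, [
    "Outline steps to build a full-stack app with Next.js and MongoDB.",
    "Now expand step 3: Set up API routes.",
])
```

Swapping the stub for an actual API call gives you an interactive pipeline where each prompt stays small and focused.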
4. Embrace Precision
Description: Use exact words, avoid vague filler terms, and favor positive, specific instructions that guide the LLM tightly. Why It Works: Precision reduces hallucinations and improves accuracy on factual or technical tasks. Example: “Calculate the area of a circle with radius 5. Use formula \(A = \pi r^2\). Output in JSON: {"area": value}.”
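One payoff of pinning down the formula and output shape is that the reply becomes machine-checkable. A minimal sketch of validating a model reply against the requested schema (the reply string here is illustrative):

```python
import json
import math

def check_area_reply(reply: str, radius: float, tol: float = 1e-6) -> bool:
    """Validate a reply of the form {"area": value} against A = pi * r^2."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return False  # reply ignored the requested JSON format
    area = data.get("area")
    return isinstance(area, (int, float)) and abs(area - math.pi * radius ** 2) < tol

print(check_area_reply('{"area": 78.53981633974483}', 5))  # True
```

If the prompt had only said “find the area,” there would be nothing this concrete to verify against.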
5. Explain Like I’m 11 (ELI11)
Description: Simplify concepts to a smart kid’s level—use everyday examples, short sentences, and analogies without dumbing down the core idea. Why It Works: Builds understanding step-by-step; combines with CoT for educational prompts. Example: “Explain blockchain like I’m 11: Use a lemonade stand analogy. Then add a test with 3 questions, no answers.”
6. Explain Like to a 5-Year-Old (ELI5)
Description: Break it down even simpler, like telling a story to a little kid—use basic words, fun examples, and no jargon. Why It Works: Forces clarity; useful for initial conceptual grasps before diving into technical details. Example: “Explain JavaScript promises like to a 5-year-old. Use a story about waiting for ice cream.”
7. Add a Tip Incentive
Description: Motivate the LLM by promising a fictional “tip” for better output, framing it as a reward for quality. Why It Works: Some practitioners report that incentive framing nudges models toward more thorough answers, though the evidence is largely anecdotal. Example: “Optimize this CSS grid layout for mobile. I’ll tip $200 for a better solution!”
8. Format Prompts with Clear Structure
Description: Organize prompts with headers like ###Instruction###, ###Example###, and ###Question### to separate sections clearly. Why It Works: Structured inputs guide parsing in LLMs, reducing errors (inspired by XML-like tagging in recent prompt frameworks). Example:
###Instruction###
Write a blog post on React hooks.
###Example###
Start with: "Hooks changed React forever..."
###Constraints###
Keep under 500 words.
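The header layout above can be generated programmatically so every prompt in a project uses the same sections. A minimal sketch, assuming `###Header###` markers as shown:

```python
def build_prompt(sections: dict[str, str]) -> str:
    """Join named sections with ###Header### markers, one block per section."""
    return "\n".join(f"###{name}###\n{body}" for name, body in sections.items())

prompt = build_prompt({
    "Instruction": "Write a blog post on React hooks.",
    "Example": 'Start with: "Hooks changed React forever..."',
    "Constraints": "Keep under 500 words.",
})
print(prompt)
```

Centralizing the section names in one helper keeps prompts consistent and makes it easy to add or drop a section everywhere at once.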
9. Incorporate Details (Context, Examples, Constraints, Input Data)
Description: Always add relevant background, samples, rules, or data to ground the prompt. Why It Works: Enhances few-shot learning; current techniques rely on this for in-context learning without fine-tuning. Example: “As a senior front-end engineer, refactor this code. Context: It’s for an e-commerce site. Constraints: Use TypeScript, max 80 chars per line. Input: [paste code here].”
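When the same role/context/constraints layout recurs, a template keeps the slots explicit. A minimal sketch; the field names and placeholder input are illustrative assumptions:

```python
PROMPT_TEMPLATE = (
    "As a {role}, {task}\n"
    "Context: {context}\n"
    "Constraints: {constraints}\n"
    "Input: {input_data}"
)

prompt = PROMPT_TEMPLATE.format(
    role="senior front-end engineer",
    task="refactor this code.",
    context="It's for an e-commerce site.",
    constraints="Use TypeScript, max 80 chars per line.",
    input_data="<paste code here>",  # placeholder, not real input
)
```

A template also makes it obvious at a glance when a prompt is missing its context or constraints slot.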
10. For Writing Tasks (Essays, Texts, etc.)
Description: Specify the format (e.g., essay, blog) and style, then provide structure or key points. Why It Works: Directs creative output; combines with role-playing for consistent tone. Example: “Write a casual blog post on Vue vs. React. Structure: Intro, pros/cons, conclusion. Assign role: You’re a witty tech blogger.”
11. Use Delimiters
Description: Mark sections with symbols like --- or ``` (triple backticks) to separate input parts clearly. Why It Works: Helps LLMs parse multi-part prompts; a staple in advanced techniques like ReAct prompting. Example:
Task: Summarize this text.
---
Text: [paste text here]
---
Output format: Bullet points.
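The layout above is easy to build with a small helper, which also guarantees the delimiter is never forgotten. A sketch assuming `---` as the separator:

```python
def delimited_prompt(task: str, payload: str, output_format: str,
                     sep: str = "---") -> str:
    """Fence the payload with delimiter lines so instructions and data stay separate."""
    return "\n".join([
        f"Task: {task}",
        sep,
        f"Text: {payload}",
        sep,
        f"Output format: {output_format}",
    ])

prompt = delimited_prompt("Summarize this text.", "<paste text here>", "Bullet points.")
```

Keeping the pasted text between delimiters also reduces the risk of the model treating instructions inside the payload as instructions to itself.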
12. Add a Follow-Up Question
Description: End responses with a question to encourage iteration, like checking understanding or inviting more input. Why It Works: Promotes interactive dialogue; aligns with conversational AI best practices. Example: After generating code: “Does that answer your question? Does it make sense? Is there anything you’d like to ask or explore further?”
13. Allow Clarifying Questions
Description: If a prompt is unclear, have the LLM ask for details before proceeding. Why It Works: Reduces errors in ambiguous scenarios; part of agentic prompting in tools like LangChain. Example: “Design a UI component. If anything is unclear, ask clarifying questions first.”
14. Inquire with Specifics (Who, What, When, Where, Why, How)
Description: Frame questions using these words to dig into details precisely. Why It Works: Structures information retrieval; enhances factual accuracy in search-like tasks. Example: “Research WebAssembly: What is it? How does it work in browsers? Why use it for front-end?”
15. Teach with a Test
Description: Explain a topic, then add a quiz at the end without answers to test comprehension. Why It Works: Reinforces learning; combines ELI5/ELI11 with active recall techniques. Example: “Teach me about CSS Flexbox and include a test at the end, but don’t give me the answers.”
16. Assign a Role
Description: Give the LLM a persona (e.g., expert, critic) to shape its responses. Why It Works: Improves consistency; a core of role-based prompting in models like Grok. Example: “You are a sarcastic code reviewer. Critique this JavaScript function.”
17. Repeat Key Phrases
Description: End or emphasize by repeating a crucial instruction for reinforcement. Why It Works: Aids memory in long contexts; from repetition techniques in prompt optimization. Example: “Generate secure API endpoints. Prioritize security. Prioritize security.”
18. Use Chain-of-Thought (CoT)
Description: Instruct the LLM to think step-by-step before answering. Why It Works: Boosts reasoning; foundational in papers like “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.” Example: “Solve this bug in React state. Step 1: Identify issue. Step 2: Propose fix. Step 3: Code it.”
19. Combine CoT with Few-Shot Prompts
Description: Provide 2-3 examples, then add CoT for the new task. Why It Works: Merges exemplars with reasoning; state-of-the-art for tasks like code generation. Example: “Example 1: Input: Sum 2+3. CoT: Add them -> 5. Now, for 4*5: Think step-by-step.”
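The exemplar-plus-CoT layout above can be assembled from a list of worked examples. A minimal sketch; the example pairs are taken from the tip itself:

```python
def few_shot_cot(examples: list[tuple[str, str]], question: str) -> str:
    """Prepend worked examples (input plus reasoning), then ask for stepwise thinking."""
    blocks = [f"Input: {inp}\nCoT: {reasoning}" for inp, reasoning in examples]
    blocks.append(f"Now, for {question}: Think step-by-step.")
    return "\n\n".join(blocks)

prompt = few_shot_cot([("Sum 2+3", "Add them -> 5")], "4*5")
print(prompt)
```

Two or three exemplars are usually enough; the point is to show the reasoning format you want, not to exhaust the task space.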
20. Prompt for Styled Output
Description: Request output in a specific style, like “drunken” or “formal to casual.” Why It Works: Controls tone; useful for creative or corrective writing. Example: “Rewrite this formal email in a drunken style: [paste text].”
21. Correct Text with Style Changes
Description: Fix errors while optionally shifting style (e.g., formal to casual), revising paragraph by paragraph. Why It Works: Supports iterative, paragraph-level refinement without losing the original meaning. Example: “Revise this paragraph casually without changing its core meaning: [paste text]. Do this for every paragraph.”
22. Handle Complex Code
Description: For intricate code, specify modifications or comparisons against the existing files. Why It Works: Supports modular engineering; integrates with tool-calling for code tools. Example: “Modify this file to add authentication. Change from the original: use JWT instead of sessions. [paste code].”
23. Create or Modify Files
Description: Instruct creation or editing of files, inserting code as needed. Why It Works: Simulates IDE workflows; pairs with serialized output for automation. Example: “Create a new TypeScript file: index.ts. Insert this code: [paste snippet]. Format with Prettier, 80-char width.”
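On the automation side, the model's output can be dropped straight into a file by a small script. A minimal sketch, assuming the snippet is plain text and only normalizing the trailing newline (full Prettier formatting would need the real tool):

```python
import tempfile
from pathlib import Path

def create_file(directory: str, name: str, code: str) -> Path:
    """Write the snippet to a new file, normalizing to a single trailing newline."""
    path = Path(directory) / name
    path.write_text(code.rstrip() + "\n", encoding="utf-8")
    return path

# Demonstrate in a throwaway directory
with tempfile.TemporaryDirectory() as tmp:
    written = create_file(tmp, "index.ts", "export const greet = () => 'hello';")
    contents = written.read_text(encoding="utf-8")
```

Pairing a prompt like this with a writer script is the simplest version of the IDE-style workflow the tip describes.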
24. Use Direct Prompting for General Tasks
Description: Keep prompts straightforward and action-oriented for everyday queries. Why It Works: Baseline for zero-shot; efficient for quick info or generation. Example: “List top 5 front-end trends in 2025.”
25. Find Information and Keep Conversation
Description: Search or recall info on specific words/topics, then continue the dialogue. Why It Works: Uses tool-calling (e.g., web search); maintains context in multi-turn chats. Example: “Find on the web: Latest React 19 features. Then ask: What do you want to know next?”
26. Chain Requirements
Description: List must-follow rules in sequence to ensure compliant output. Why It Works: Enforces constraints; similar to rule-based prompting in safety-aligned models. Example: “Generate HTML: 1. Use semantic tags. 2. Add ARIA attributes. 3. Validate for accessibility.”
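Chained rules are most useful when you can check the output against them. A rough sketch of a checker for rules 1 and 2; rule 3 (accessibility validation) needs a real validator and is out of scope here, and the tag list is an illustrative assumption:

```python
SEMANTIC_TAGS = ("<header", "<nav", "<main", "<section", "<article", "<footer")

def failed_rules(html: str) -> list[str]:
    """Return the chained rules the generated HTML appears to violate."""
    failures = []
    if not any(tag in html for tag in SEMANTIC_TAGS):
        failures.append("1. Use semantic tags.")
    if "aria-" not in html:
        failures.append("2. Add ARIA attributes.")
    return failures

sample = '<nav aria-label="Main"><a href="/">Home</a></nav>'
print(failed_rules(sample))  # []
```

Feeding the failure list back into a follow-up prompt (“Fix these rule violations: …”) closes the loop on compliance.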
27. Use Tools or Serialize Output
Description: Invoke tools (e.g., APIs) when needed, and format output as serialized data like JSON. Why It Works: Enables integration with external systems; popularized by agentic frameworks like AutoGPT. Example: “Use a math tool to solve \(x^2 + 3x - 4 = 0\). Output in JSON: {"roots": [values]}.”
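The “math tool” in the example could itself be a small function the model calls, returning exactly the serialized shape the prompt requests. A minimal sketch of such a tool for real roots:

```python
import json
import math

def quadratic_roots(a: float, b: float, c: float) -> str:
    """Solve ax^2 + bx + c = 0 and serialize the real roots as JSON."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return json.dumps({"roots": []})  # no real roots
    sq = math.sqrt(disc)
    roots = sorted({(-b - sq) / (2 * a), (-b + sq) / (2 * a)})
    return json.dumps({"roots": roots})

print(quadratic_roots(1, 3, -4))  # {"roots": [-4.0, 1.0]}
```

Because the output is strict JSON, the result can be parsed and handed to the next step of an agentic pipeline without any text scraping.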