Part 4 of 6

Prompt Engineering & Best Practices

⏱ 45-55 min read · ☆ Practical

Introduction

Prompt engineering is the art and science of communicating effectively with LLMs. The same model can produce dramatically different outputs depending on how you ask. Mastering prompt engineering is currently one of the highest-leverage skills for anyone working with AI.

Core Prompt Design Principles

  • Be Specific: Vague prompts yield vague results. Specify exactly what you want.
  • Provide Context: Give the model the background information it needs to respond appropriately.
  • Define the Format: Specify how you want the output structured (bullet points, JSON, essay, etc.).
  • Set the Role: Tell the model what role to assume (expert, teacher, critic, etc.).
  • Include Examples: Show what good output looks like when possible.
  • Constrain the Scope: Limit the response to prevent rambling or irrelevant content.

From Weak to Strong Prompts

Weak Prompt:
Tell me about machine learning.
Strong Prompt:
You are explaining machine learning to a business executive with no technical background.

Explain what machine learning is in 3-4 sentences, then provide:
1. Three real-world business applications
2. Two key limitations they should be aware of
3. One question they should ask vendors claiming to use ML

Keep the total response under 300 words and avoid technical jargon.

Why the Strong Prompt Works

It specifies: audience (business executive), format (structured sections), constraints (word limit, no jargon), and actionable output (vendor questions). The model knows exactly what success looks like.

Few-Shot Learning

Few-shot prompting provides examples of the desired input-output pattern. The model learns the pattern and applies it to new inputs.

Few-Shot Example:
Classify the sentiment of customer reviews as Positive, Negative, or Neutral.

Review: "This product exceeded my expectations! Will buy again."
Sentiment: Positive

Review: "It works as described but nothing special."
Sentiment: Neutral

Review: "Broke after two days. Complete waste of money."
Sentiment: Negative

Review: "The delivery was fast and the quality is decent for the price."
Sentiment:

Few-Shot Best Practices

  • Use 3-5 examples covering different cases
  • Include edge cases or tricky examples
  • Ensure examples are consistent in format
  • Order examples from simple to complex
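The few-shot classifier above can be assembled programmatically, which keeps the example format consistent by construction. A minimal sketch (the helper name and example set are illustrative, not from any particular SDK):

```python
# Build a few-shot sentiment-classification prompt from labeled examples.
# Keeping examples in a list guarantees a consistent format for every shot.
EXAMPLES = [
    ("This product exceeded my expectations! Will buy again.", "Positive"),
    ("It works as described but nothing special.", "Neutral"),
    ("Broke after two days. Complete waste of money.", "Negative"),
]

def build_few_shot_prompt(review: str) -> str:
    """Assemble instruction + formatted examples + the new input."""
    lines = [
        "Classify the sentiment of customer reviews as Positive, Negative, or Neutral.",
        "",
    ]
    for text, label in EXAMPLES:
        lines.append(f'Review: "{text}"')
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f'Review: "{review}"')
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "The delivery was fast and the quality is decent for the price."
)
```

The same builder makes it easy to swap in new examples or add edge cases without risking format drift.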

Advanced Prompting Techniques

Chain-of-Thought (CoT)

Ask the model to explain its reasoning step by step. This improves accuracy on complex reasoning tasks.

Add: "Let's think step by step" or "Show your reasoning"

Role Prompting

Assign a specific persona or expertise to the model. "You are a senior security analyst..."

Shapes tone, vocabulary, and perspective

Self-Consistency

Generate multiple responses and identify the most common answer. Improves reliability for reasoning tasks.

Useful for critical decisions
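Self-consistency can be sketched as a majority vote over several sampled completions. In the sketch below, `sample_answer` is a stand-in for a real model call made at a nonzero temperature:

```python
from collections import Counter
import itertools

def self_consistent_answer(sample_answer, question: str, n: int = 5) -> str:
    """Sample n independent answers and return the most common one.

    sample_answer stands in for a model call at temperature > 0,
    so repeated calls can disagree.
    """
    votes = Counter(sample_answer(question) for _ in range(n))
    answer, _count = votes.most_common(1)[0]
    return answer

# Stubbed "model": mostly answers "42", with one outlier.
_outputs = itertools.cycle(["42", "42", "41", "42", "42"])
majority = self_consistent_answer(lambda q: next(_outputs), "What is 6 * 7?")
```

The trade-off is cost: n samples mean n model calls, which is why this is reserved for decisions where reliability matters.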

Structured Output

Request specific formats like JSON, XML, or markdown tables. Easier to parse programmatically.

Include the exact schema expected
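Requesting and validating structured output might look like the sketch below. The model reply is canned here for illustration; in practice you would parse the real completion the same way:

```python
import json

# Prompt that includes the exact schema expected.
SCHEMA_PROMPT = """Extract the product name and rating from the review below.
Respond with ONLY a JSON object matching this exact schema:
{"product": "<string>", "rating": <integer 1-5>}

Review: "The UltraWidget is great, easy 5 stars."
"""

# Canned reply standing in for a real model completion.
model_reply = '{"product": "UltraWidget", "rating": 5}'

data = json.loads(model_reply)             # fails loudly if the model drifted from JSON
assert set(data) == {"product", "rating"}  # validate the expected keys
assert isinstance(data["rating"], int)     # validate the expected types
```

Parsing with `json.loads` and asserting on the schema turns silent formatting drift into an immediate, catchable error.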

Chain-of-Thought Example:
A company has 200 employees. 60% work in the office, the rest work remotely.
If 25% of office workers and 40% of remote workers attended a training,
how many employees attended?

Think through this step by step:
1. First calculate the number of office workers
2. Then calculate the number of remote workers
3. Calculate training attendees from each group
4. Sum the totals
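Because prompting alone does not guarantee correct arithmetic, it is worth re-checking the model's answer in code. A generic checker mirroring the four steps above (the values in the demo call are illustrative, not tied to any particular example):

```python
def training_attendees(total: int, office_frac: float,
                       office_rate: float, remote_rate: float) -> float:
    """Mirror the step-by-step decomposition of the attendance problem."""
    office = total * office_frac                            # step 1: office workers
    remote = total - office                                 # step 2: remote workers
    attended = office * office_rate + remote * remote_rate  # steps 3-4: attendees per group, summed
    return attended

# Illustrative values: 100 employees, half in office,
# 30% office attendance, 10% remote attendance.
result = training_attendees(100, 0.5, 0.3, 0.1)
```

Running the same decomposition the prompt asks for makes it easy to verify whether the model's step-by-step answer actually adds up.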

System Prompts and Instructions

Many LLM APIs support a "system" message that sets overall behavior for the conversation. This is separate from user messages and provides persistent instructions.

System Prompt Example:
You are a helpful legal assistant specializing in contract review.

Guidelines:
- Always clarify that you are providing general information, not legal advice
- When identifying risks, categorize them as High, Medium, or Low
- If asked about jurisdictions outside your training, acknowledge limitations
- Format responses with clear headings and bullet points
- Ask clarifying questions before providing analysis on ambiguous terms
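In an OpenAI-style chat API, the system prompt travels as a separate message with role "system". The sketch below only builds the request payload; actually sending it requires a real client and API key:

```python
# Message list in the common chat-completion shape: the system message sets
# persistent behavior, user messages carry the per-turn request.
SYSTEM_PROMPT = (
    "You are a helpful legal assistant specializing in contract review. "
    "Always clarify that you provide general information, not legal advice."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Review the indemnification clause below for risks."},
]

# With a real client this would be sent along the lines of:
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
```

Because the system message persists across the conversation, its instructions apply to every user turn without being repeated.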

System Prompt Security

System prompts can potentially be extracted by clever user queries. Don't put secrets, passwords, or sensitive business logic in system prompts. Assume they may be visible to users.

Common Prompt Patterns

The Critic Pattern

Ask the model to generate, then critique its own output:

First, draft a marketing email for our new product.
Then, critique the email identifying weaknesses.
Finally, rewrite addressing the critiques.
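The same draft-critique-rewrite flow can also be run as three separate model calls, feeding each output into the next. Here `call_model` is a stand-in stub, not a real SDK call:

```python
def critic_pipeline(call_model, task: str) -> str:
    """Draft -> critique -> rewrite, each step a separate model call."""
    draft = call_model(f"First, {task}")
    critique = call_model(f"Critique this, identifying weaknesses:\n{draft}")
    final = call_model(
        "Rewrite this draft, addressing the critiques.\n"
        f"Draft:\n{draft}\nCritiques:\n{critique}"
    )
    return final

# Stub that just echoes the start of each prompt so the flow is visible.
final = critic_pipeline(
    lambda p: f"[model output for: {p[:30]}...]",
    "draft a marketing email for our new product.",
)
```

Splitting the steps into separate calls gives each stage the model's full attention and lets you log or inspect the intermediate critique.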

The Persona Debate

Have the model argue multiple perspectives:

Evaluate this business decision from three perspectives:
1. CFO focused on costs
2. CTO focused on technical capability
3. CISO focused on security risks
Then synthesize a balanced recommendation.

The Template Pattern

Provide a template to fill:

Complete this analysis template for the given dataset:

## Summary
[2-3 sentences]

## Key Findings
- [Finding 1]
- [Finding 2]
- [Finding 3]

## Recommendations
[Prioritized list]

Iterative Prompt Development

Effective prompt engineering is iterative. Start simple and refine based on results.

  1. Start Simple: Begin with a basic prompt to see baseline behavior
  2. Identify Failures: Note where outputs don't meet expectations
  3. Add Constraints: Address failure modes with specific instructions
  4. Add Examples: Include few-shot examples for persistent issues
  5. Test Edge Cases: Verify behavior on unusual inputs
  6. Document: Record what works for reproducibility

Version Control for Prompts

Treat prompts like code. Version control them, document changes, and test before deploying to production. A small prompt change can significantly alter model behavior.
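Treating prompts like code can be as simple as keeping versioned, named templates in source control and pinning production code to an exact version. A minimal registry sketch (the structure and prompt names are illustrative):

```python
# Minimal prompt registry: versioned templates live in source control,
# and callers request an exact version so changes are deliberate.
PROMPTS = {
    ("summarize", "v1"): "Summarize the text below in 3 bullet points:\n{text}",
    ("summarize", "v2"): (
        "Summarize the text below in 3 bullet points "
        "of at most 15 words each:\n{text}"
    ),
}

def get_prompt(name: str, version: str, **kwargs) -> str:
    """Fetch a pinned prompt version and fill its placeholders."""
    return PROMPTS[(name, version)].format(**kwargs)

p = get_prompt("summarize", "v2", text="Quarterly revenue rose 8%.")
```

Because every change produces a new version key, diffs show exactly what changed, and a regression can be rolled back by pinning the previous version.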

Common Pitfalls

  • Over-prompting: Too many instructions can confuse the model
  • Conflicting instructions: Ensure all constraints are compatible
  • Assuming memory: Each API call is stateless; the model only sees what you include in the current context
  • Trusting without verification: Always validate outputs for accuracy
  • Ignoring temperature: Use low temperature for factual tasks and higher for creative ones

Key Takeaways

  • Be specific about what you want: audience, format, constraints, and success criteria
  • Few-shot examples teach patterns more effectively than lengthy instructions
  • Chain-of-thought prompting improves reasoning on complex tasks
  • System prompts set persistent behavior but aren't truly private
  • Iterative refinement is essential - start simple and add constraints as needed
  • Treat prompts like code: version control, test, and document
  • Always verify outputs - prompting doesn't guarantee accuracy