1. What Phase 2 Actually Is
Few-shot prompting shifts your approach from telling the model what to do to showing it what you want. You provide two to five examples of the input-output pattern you need, then let the model complete the next instance.
The mindset shift? You’re no longer writing instructions. You’re demonstrating a pattern. The model learns the structure, format, and logic from your examples, then replicates that pattern for new inputs.
This phase assumes you’ve mastered zero-shot clarity. If your instructions are ambiguous, adding examples won’t fix the underlying problem. Research from Brown et al. at OpenAI on GPT-3’s few-shot capabilities showed that pattern quality matters more than pattern quantity: three clean examples outperform ten inconsistent ones.
2. Core Goal of This Phase
Force adherence to specific formats (like JSON) and styles through demonstrated examples.
You’re done guessing whether the model will follow your formatting instructions. You show it the exact structure three times, and it replicates that structure for the fourth instance. This matters when you need consistent outputs for downstream processing, API integrations, or data pipelines.
Few-shot prompting solves the “it works sometimes” problem. Zero-shot might give you JSON 60% of the time; few-shot with proper examples can push consistency above 95%.
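In pipelines, the demonstration block is usually assembled programmatically from stored example pairs rather than pasted by hand. A minimal Python sketch of that assembly step, assuming a simple list of (input, output) string pairs (the `build_few_shot_prompt` helper and `EXAMPLES` names are illustrative, not any vendor’s API):

```python
# Illustrative example pairs: email text in, extracted JSON out.
EXAMPLES = [
    ("Meeting on Tuesday at 3pm about Q4 planning",
     '{"action": "meeting", "deadline": "Tuesday 3pm", "topic": "Q4 planning"}'),
    ("Can you send the budget report by Friday EOD?",
     '{"action": "send report", "deadline": "Friday EOD", "topic": "budget report"}'),
]

def build_few_shot_prompt(examples, new_input, instruction):
    """Render instruction + demonstrated pairs + the new input to complete."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f'Input: "{inp}"')
        lines.append(f"Output: {out}")
        lines.append("")  # blank line between demonstrations
    lines.append(f'Input: "{new_input}"')
    lines.append("Output:")  # trailing cue: the model completes this slot
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    EXAMPLES,
    "Review the security audit findings before tomorrow's board meeting",
    "Extract information from emails into JSON with keys: action, deadline, topic.",
)
print(prompt)
```

Keeping the examples in data rather than hard-coding the prompt string makes it trivial to swap, reorder, or A/B test demonstrations without touching the template.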
3. Key Skills You Must Master
4. Practical Examples
Example 1: Structured Data Extraction
Weak few-shot attempt:
Extract the key info from this email and format it.
Email 1: "Meeting on Tuesday at 3pm about Q4 planning"
Extracted: Tuesday 3pm Q4
Email 2: "Can you send the budget report by Friday EOD?"
Extracted: Budget report Friday
Email 3: "Lunch with Sarah next Monday at noon to discuss the merger"
Output:
Improved few-shot prompt:
Extract information from emails into JSON with keys: action, deadline, topic.
Input: "Meeting on Tuesday at 3pm about Q4 planning"
Output: {"action": "meeting", "deadline": "Tuesday 3pm", "topic": "Q4 planning"}
Input: "Can you send the budget report by Friday EOD?"
Output: {"action": "send report", "deadline": "Friday EOD", "topic": "budget report"}
Input: "Lunch with Sarah next Monday at noon to discuss the merger"
Output: {"action": "lunch meeting", "deadline": "Monday noon", "topic": "merger discussion"}
Input: "Review the security audit findings before tomorrow's board meeting"
Output:
Why it works: Consistent JSON structure across all examples. Same keys in the same order. Same data types. The model learns the exact schema and replicates it. The weak version has no consistent format, so the model guesses at structure.
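Even at 95%+ consistency, downstream code should still validate the returned structure before trusting it. A minimal validation sketch, assuming the schema demonstrated above (the `parse_extraction` name and `REQUIRED_KEYS` constant are our own):

```python
import json

REQUIRED_KEYS = {"action", "deadline", "topic"}

def parse_extraction(raw: str) -> dict:
    """Parse the model's reply and enforce the demonstrated schema."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    if set(data) != REQUIRED_KEYS:
        raise ValueError(f"unexpected keys: {sorted(set(data) ^ REQUIRED_KEYS)}")
    if not all(isinstance(v, str) for v in data.values()):
        raise ValueError("all values must be strings, as demonstrated")
    return data

reply = '{"action": "review audit", "deadline": "tomorrow", "topic": "security audit findings"}'
record = parse_extraction(reply)
print(record["action"])
```

A failed parse is your signal to retry the call or tighten the examples, rather than letting malformed data flow into the pipeline.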
Example 2: Persona-Driven Analysis
Weak persona:
Act as an expert and analyze this code for security issues.
Improved persona:
You are a security engineer specializing in OWASP Top 10 vulnerabilities and secure coding practices.
Analyze code for security issues following this structure:
- Vulnerability type (reference OWASP category)
- Severity (Critical/High/Medium/Low based on CVSS)
- Affected code section
- Remediation steps with code examples
Use technical precision appropriate for senior developers.
Why it works: Defines expertise domain (OWASP, secure coding), output structure, severity framework (CVSS scoring), and audience level. The weak version just says “expert” without specifying expertise type or output requirements.
Example 3: Context-Augmented Response
Without context (hallucination risk):
What is our company's remote work policy?
With context:
Use only the information in the context below to answer the question.
If the answer is not in the context, state "Information not available in provided policy."
Context:
"""
Tech Jacks Solutions Remote Work Policy (Effective January 2025)
- Employees may work remotely up to 3 days per week
- Core hours: 10am-3pm local time for team collaboration
- VPN required for all remote connections
- Quarterly in-person meetings mandatory
"""
Question: What is our company's remote work policy?
Why it works: Provides the actual policy text, prevents hallucination with explicit instructions, includes the policy effective date for verification. Without context, the model might generate plausible-sounding but completely fabricated policy details.
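The context-wrapping pattern above is easy to templatize so every grounded question gets the same delimiters and refusal instruction. A minimal sketch, assuming plain-string context (the `build_grounded_prompt` helper is our own name):

```python
def build_grounded_prompt(context: str, question: str) -> str:
    """Wrap retrieved text in triple-quote delimiters with an explicit
    refusal instruction to discourage fabricated answers."""
    return (
        "Use only the information in the context below to answer the question.\n"
        "If the answer is not in the context, state "
        '"Information not available in provided policy."\n\n'
        f'Context:\n"""\n{context.strip()}\n"""\n\n'
        f"Question: {question}"
    )

policy_text = """Tech Jacks Solutions Remote Work Policy (Effective January 2025)
- Employees may work remotely up to 3 days per week
- VPN required for all remote connections"""

print(build_grounded_prompt(policy_text, "What is our company's remote work policy?"))
```

Centralizing the template also gives you one place to tune the refusal wording if you still see fabricated answers.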
5. Common Mistakes at This Phase
Inconsistent example formats. Showing three different output structures teaches the model that format doesn’t matter. Pick one schema and use it for every example.
Too many examples. Five examples usually work better than ten. Beyond five, you’re wasting tokens without improving results. Research on few-shot learning shows diminishing returns after 5-7 examples.
Vague personas. “Act as an expert” tells the model nothing. “Act as a penetration tester analyzing web applications using OWASP testing methodology” specifies the expertise lens.
Context dumping without structure. Pasting ten pages of documentation with no guidance wastes tokens. Extract the relevant sections, use clear delimiters, and explicitly instruct the model to use only that context.
Conflicting instructions and examples. Your instruction says “use active voice” but your examples are written in passive voice. The model follows the examples, not the instructions.
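The first mistake, inconsistent example formats, is mechanical enough to catch automatically before a prompt ever ships. A minimal lint sketch over (input, output) pairs with JSON outputs (the `lint_examples` name and pair format are our assumptions):

```python
import json

def lint_examples(examples):
    """Flag few-shot examples whose outputs disagree on JSON schema."""
    problems = []
    schemas = []
    for i, (_, out) in enumerate(examples):
        try:
            schemas.append(tuple(json.loads(out).keys()))  # key order matters too
        except ValueError:
            problems.append(f"example {i}: output is not valid JSON")
            schemas.append(None)
    reference = next((s for s in schemas if s), None)
    for i, s in enumerate(schemas):
        if s and s != reference:
            problems.append(f"example {i}: keys {list(s)} differ from {list(reference)}")
    return problems
```

Running a check like this in your test suite turns “the model ignored my format” debugging sessions into a lint failure you fix before deployment.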
6. How to Know You Are Ready for the Next Phase
If you’re still getting inconsistent formats, struggling to control response style, or seeing hallucinations despite providing context, stay in Phase 2. The next phase (reasoning strategies) assumes the model can already follow patterns reliably. Adding chain-of-thought prompting on top of inconsistent pattern-following just creates verbose inconsistency.
Few-shot prompting transforms unreliable outputs into predictable components. Every data pipeline, every API integration, every automated workflow depends on consistent structure. Master pattern demonstration before attempting complex reasoning chains.
Sources
Research Papers
- Brown, T., et al. (2020). Language Models are Few-Shot Learners. OpenAI. https://arxiv.org/abs/2005.14165
- Lewis, P., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Meta AI Research. https://arxiv.org/abs/2005.11401
- Wang, X., et al. (2023). Role-Play with Large Language Models. Stanford University. https://arxiv.org/abs/2305.14688
Official Documentation
- OpenAI. Prompt Engineering – Provide Examples. https://platform.openai.com/docs/guides/prompt-engineering/tactic-provide-examples
Standards & Frameworks
- FIRST. Common Vulnerability Scoring System (CVSS). https://www.first.org/cvss/
- OWASP Foundation. OWASP Top 10. https://owasp.org/www-project-top-ten/