PROMPTING GUIDE — UPDATED MARCH 2026

The Prompting Guide

From Search Bar to Strategic Partner

Stop Googling. Start engineering. A practical guide to getting dramatically better results from AI.

Everyone has access to the same AI. Not everyone gets the same results.
The difference isn't the tool — it's how you talk to it.
Prompting is the highest-leverage skill you can develop right now.

This playbook will take you from typing questions into a search bar to engineering conversations with a strategic partner. Every technique is research-backed, battle-tested, and designed for people who use AI to get real work done.

Google answers questions. AI completes tasks. The shift is from 'finding information' to 'producing work.' AI is built to turn raw input into finished output.

The Fundamental Difference

Google is a lookup tool. Ask it a question, get links to answers. AI is a production tool. Give it a task, get finished work.

Google Search | Basic AI Prompt | Engineered Prompt
"best project management tools 2026" | "What are the best project management tools?" | "You are a senior ops consultant. Compare the top 5 project management tools for a 12-person startup. Evaluate on: pricing, integrations, learning curve, and remote team features. Output as a comparison table with a final recommendation."
1.1 The Mindset Shift

AI is a capable new hire on their first day. Brilliant but lacking YOUR context. Your job isn't to ask better questions. Your job is to give better briefs.

Why This Matters

Most people squander AI by treating it like a search engine. "Best email templates." "How do I write a proposal?" Those queries get generic answers. Instead, load the prompt with specificity: role, constraints, examples, output format. The more relevant context you give, the better the work.

1.2 Exercise: Rewrite Your Last Searches

Take your last 3 Google searches. For each one, rewrite it as an AI prompt. Start with: "You are a [specific expert]. I need you to [task]. Here's the context: [what I'm doing, who it's for, what success looks like]. Output as [format]."


Ready to learn the core framework? Continue to Section 2.

ROLE tells AI who to be. TASK tells AI what to do. OUTPUT tells AI how to deliver. This is the minimum. Everything else is enhancement.

The RTO Framework

RTO stands for Role-Task-Output. It's not revolutionary, but it's reliable. Every prompt you write should include these three elements:

ROLE: You are a [specific expert] with experience in [domain].
TASK: [Specific action verb] + [clear scope] + [constraints].
OUTPUT: Format as [structure]. Include [requirements]. Keep to [length].

Why It Works

When you specify a role, you activate a whole pattern of knowledge in the model. The AI was trained on text from countless perspectives. By saying "You are a senior recruiter," you steer it toward that specific expertise. Then you tell it exactly what to do. Then you tell it how to package the answer. Done.
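
If you build prompts in code, RTO maps directly onto a template function. A minimal Python sketch, assuming nothing beyond the framework itself (the helper name and string layout are my own):

```python
# A minimal sketch of RTO as a reusable template.
# rto_prompt() is an illustrative helper, not a library function.
def rto_prompt(role: str, task: str, output: str) -> str:
    return (
        f"ROLE: You are {role}.\n"
        f"TASK: {task}\n"
        f"OUTPUT: {output}"
    )

prompt = rto_prompt(
    role="a senior recruiter at a Fortune 500 tech company",
    task=("Review my resume. Identify the 3 weakest bullet points "
          "and rewrite each using the STAR method."),
    output="Format as a before/after table.",
)
print(prompt)
```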

Good vs Bad Examples

Bad: Vague and Generic

"Help me with my resume"

Missing: role, specificity, output format. This will generate okay advice but nothing tailored to your situation.

Good: Specific and Actionable

"You are a senior recruiter at a Fortune 500 tech company. Review my resume. Identify the 3 weakest bullet points and rewrite each using the STAR method. Format as a before/after table."

2.1 Breaking Down the Good Example

ROLE: Senior recruiter at Fortune 500 tech — this activates expertise-specific patterns

TASK: Review resume, identify 3 weakest points, rewrite using STAR — this is concrete and bounded

OUTPUT: Before/after table — this controls the format and makes comparison easy

2.2 Exercise: Write 3 Prompts Using RTO

Pick 3 tasks you'll do this week. For each one, write a complete RTO prompt. Don't overthink it. Just fill in the three blanks. Get comfortable with the template.


Once you've mastered RTO, the frameworks in Section 3 will make much more sense.

RTO is simple and effective for most tasks. But there are other frameworks designed for specific challenges. Here are the ones that actually move the needle:

Framework | Structure | Best For
RTO | Role → Task → Output | Quick tasks, daily use
RISEN | Role → Instructions → Steps → End goal → Narrowing | Complex multi-step work
COSTAR | Context → Objective → Style → Tone → Audience → Response | Marketing & content
RISE | Role → Input → Steps → Expectations | Executive briefings
CRAFT | Context → Role → Action → Format → Tone | Strategic communications
Chain-of-Thought | "Think step by step" | Analysis & reasoning
Tree-of-Thoughts | Explore 3 approaches, compare, pick best | Strategic decisions
Few-Shot | Show 3-5 examples of desired output | Consistent formatting

RISE & CRAFT for Executive Work

If you're briefing AI for board-level or investor-facing work, RISE (Role → Input → Steps → Expectations) and CRAFT (Context → Role → Action → Format → Tone) add explicit expectation-setting. Both come from executive prompting playbooks and force you to define what "good" looks like before the AI starts working.

RISEN for Complex Multi-Step Work

If your task has multiple stages, RISEN gives structure. Role, specific instructions, numbered steps, the end goal, and narrowing criteria to refine the output.

ROLE: You are a product strategist with 8 years at Series A-C startups.
INSTRUCTIONS:
- Use only public information available before March 2026
- Ask clarifying questions if data is missing
- Prioritize actionable insights over theoretical frameworks
STEPS:
1. Analyze the competitive landscape
2. Identify 5 key positioning opportunities
3. Evaluate feasibility of each
4. Recommend the top 2 with reasoning
END GOAL: A one-page positioning brief that we can share with our board
NARROWING: Focus on markets where we have existing traction. Exclude enterprise-only plays.

COSTAR for Marketing & Content

When writing for an audience, COSTAR forces you to think through context (the situation), objective (what you want to happen), style (voice), tone (emotional temperature), and audience (who reads this). The output is tighter because you've removed ambiguity.

Don't memorize all these. Master RTO first. Then add one framework at a time based on what you actually need. COSTAR is gold for content. RISEN is gold for strategy. Most of your work will use RTO.

Chain-of-Thought for Analysis

Sometimes the best thing to add to a prompt is: "Think step by step." This slows down the AI's reasoning and reduces errors. Try it on any task involving judgment, math, or logic.
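
In code, this is a one-line wrapper you can bolt onto any prompt. A sketch (the function is illustrative):

```python
# Sketch: append a chain-of-thought trigger to any task prompt.
def with_cot(task_prompt: str) -> str:
    return task_prompt + "\n\nThink step by step before giving your final answer."

print(with_cot("Estimate the break-even point for a $49/month SaaS plan."))
```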

3.1 Exercise: Same Task, Different Frameworks

Pick a task you need to do. Write it using RTO. Then rewrite it using COSTAR. Run both. Compare the outputs. You'll see why framework choice matters.

Frameworks give you structure. Persona engineering fills in the details.

Why persona engineering works: AI models were trained on text from many perspectives. Specifying a detailed role activates the relevant patterns in the model's weights.

The Enhancement Ladder

Good personas aren't vague. They're specific. Here's how to build them:

4.1 Basic Persona

"You are a marketer"

This is too generic. Marketers vary wildly. You'll get generic output.

4.2 Better Persona

"You are a B2B SaaS marketing director"

Better. Now the model knows the domain (B2B SaaS) and the level (director). Output will be more relevant.

4.3 Best Persona

"You are a B2B SaaS marketing director with 10 years experience at companies scaling from $2M to $20M ARR. You specialize in content-led growth and have a bias toward data-driven decisions over hunches."

Now you're cooking. The model has a clear picture. Experience range, specialization, decision-making style. Output will be specific and sharp.

Persona Stacking

You can combine personas. This is powerful for creative work or strategic analysis:

"You combine the analytical rigor of a McKinsey consultant with the creative instincts of an award-winning copywriter."

Now the output has both structure and flair. Persona stacking works best with 2-3 combinations. More than that and you'll confuse the model.
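
If you reuse personas across projects, it can help to build them from parts. A sketch, with field names I've made up for illustration:

```python
# Sketch: compose a detailed persona, with optional stacking.
def persona(title, years, domain, specialty, stack_with=None):
    p = (f"You are a {title} with {years} years of experience in {domain}. "
         f"You specialize in {specialty}.")
    if stack_with:
        # Persona stacking: blend in a second expert's instincts.
        p += f" You combine this with {stack_with}."
    return p

print(persona("B2B SaaS marketing director", 10,
              "companies scaling from $2M to $20M ARR",
              "content-led growth",
              stack_with="the creative instincts of an award-winning copywriter"))
```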

Anti-Pattern: The Useless Persona

Avoid: "You are a helpful assistant." That's the default. It adds zero information. Instead, go specific: "You are a developer with 8 years building infrastructure at Stripe." Now you have signal.

4.4 Exercise: Build 3 Expert Personas

Think of 3 experts you wish you had access to. For each, write a detailed persona: title, years of experience, domain, specialization, decision-making style. These become templates you reuse.


Good personas matter. But they only work if the task is equally well-defined. Section 5 covers task precision.

A great persona with a vague task still produces vague output. You need to define the task clearly.

The Specificity Spectrum

5.1 Vague

"Help me with my business strategy"

5.2 Better

"Create a go-to-market strategy for our new product"

5.3 Specific

"Create a 90-day go-to-market strategy for our new feature (AI-powered contract analysis). Our ICP is in-house legal teams at mid-market companies (500-5000 employees). We have no case studies yet. Budget: $50k. Constraints: no enterprise sales team. Output: a phased timeline with specific milestones and success metrics for each phase."

Task Enhancement Checklist

  • Action verb is specific (analyze, compare, draft, evaluate — not "help with")
  • Scope is bounded (which data? what timeframe? how many items?)
  • Constraints are stated (word count, audience, tone, budget)
  • Success criteria defined (what does "good" look like?)
  • Edge cases addressed (what if data is missing? what to do then?)

The word "help" is a prompt killer. Replace it with a specific verb: analyze, compare, draft, evaluate, summarize, prioritize, rewrite, extract, design, build, audit, propose.

Good vs Bad Task Specification

Bad: Too Vague

"Help me improve our onboarding"

Good: Specific

"Audit our current 7-day onboarding for enterprise software. Identify the top 3 drop-off points. For each, propose a specific intervention. Format as a prioritized list with estimated effort to implement."

5.4 Exercise: Add Precision to an Old Prompt

Find a prompt you've used before. Rewrite it with: specific action verb, clear scope boundaries, stated constraints, and success criteria. Run it. See the difference.


Good persona + good task = good foundation. Section 6 covers controlling the output format itself.

Even with a good role and task, the AI still has choices about format. Your job is to remove those choices.

Output Control Techniques

Structure

"Format as a table with columns for..." or "Structure this as an executive summary (1 paragraph) followed by detailed findings (3-4 paragraphs)."

Length

"Keep to 3 paragraphs" or "Maximum 200 words" or "One page, single-spaced."

Tone

"Write for a C-suite audience (assume no technical background)" or "Use casual, conversational tone as if explaining to a friend."

Anchoring

Provide the first row or first section of the output and let the AI complete it. This is surprisingly powerful: the model mirrors your structure and style.

Output Requirements Template

OUTPUT REQUIREMENTS:
- Format: [table / bullets / paragraphs / JSON / markdown]
- Length: [word count / paragraph count / page count]
- Tone: [formal / casual / technical / executive]
- Must include: [specific elements]
- Must exclude: [things to avoid]

Anchoring Example

Instead of asking for a comparison table from scratch, provide the header:

Tool | Pricing | Best For | Limitation
-----|---------|----------|------------
[AI fills this in]

The AI will follow your exact format because you've shown the structure.
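
Assembled in code, an anchored prompt might look like this (a sketch; the column names come from the example above, the task wording is illustrative):

```python
# Sketch: anchor a comparison-table prompt with the header and separator
# row so the model mirrors the structure exactly.
anchor = (
    "Tool | Pricing | Best For | Limitation\n"
    "-----|---------|----------|------------\n"
)

prompt = (
    "Compare the top 3 project management tools for a 12-person startup. "
    "Complete this table, one row per tool:\n\n" + anchor
)
print(prompt)
```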

6.1 Exercise: Write a Prompt with Full Output Specs

Take a task you need done. Write a prompt that specifies: format, length, tone, what to include, what to exclude, and optionally provide an anchor (the first row/section). Run it. Compare to a prompt without these specs.

Now you know what to do and how to shape the output. Section 7 covers what NOT to do.

Good prompting isn't just about what you add. It's about what you remove or prevent.

The Seven Don'ts

7.1 Don't Use "Help Me With"

Replace with specific verbs: analyze, compare, draft, evaluate, summarize. "Help me with my budget" becomes "Create a zero-based budget for Q2 with line items for [departments]."

7.2 Don't Dump Your Entire Brain

Be selective with context. More isn't always better. Give the AI what it needs to know, not everything you know. Relevant information beats volume.

7.3 Don't Ask Multiple Unrelated Questions

One prompt = one clear objective. If you need multiple things, send multiple prompts. The AI's attention gets scattered otherwise.

7.4 Don't Forget to Specify the Audience

Always include: "This is for [audience]." CEOs think differently than individual contributors. Customers think differently than investors. Make it explicit.

7.5 Don't Accept the First Output

Iteration is where quality lives. The first output is usually 70% there. Refine it. Tell the AI what to change. Push back. That's when you get 90%+.

7.6 Don't Copy-Paste AI Output Without Review

AI is a tool, not a replacement for judgment. Read what it produces. Fact-check claims. Adjust tone. Make it yours. This takes 10% extra time and prevents 90% of problems.

7.7 Don't Skip Examples When Format Matters

If consistent formatting is important, show the AI an example. One good example beats 100 words of instruction.

Guardrail Techniques

Negative Instructions

"Do NOT include disclaimers or caveats" or "Do NOT mention competitors."

Boundary Setting

"Only use information from the attached document. Do not draw on external knowledge."

Quality Gates

"Before finalizing, verify your reasoning by checking your work against the original source."

Hallucination Prevention

"If you're unsure about a fact, say so explicitly. Accuracy matters more than sounding confident."

The best prompts include both what to do AND what not to do. Negative instructions are just as important as positive ones.

You now know the fundamentals. Section 8 covers advanced psychology — ways to measurably improve output quality through language.

These aren't magic. They're pattern activation. Google DeepMind found that "take a deep breath" improved accuracy by 9%. Other studies show monetary framing ("This is worth $200 to me") improved output quality up to 45%. These work because AI models were trained on human text. Text written under high stakes tends to be higher quality. The model mirrors that pattern.

The Top Psychological Triggers

Trigger | What It Does | Research | Example
Stakes Framing | Activates higher-effort patterns | EmotionPrompt +8-115% | "This presentation goes to our board of directors tomorrow"
Monetary Framing | Signals high-value task | Bsharat et al. +45% | "I'll tip $200 for a thorough analysis"
Expert Identity | Activates domain expertise | Bsharat et al. +60% | "You are a world-class strategist with 20 years of experience"
Step-by-Step Breathing | Slows reasoning, reduces errors | DeepMind +9% | "Take a deep breath and work through this step by step"
Career Importance | Triggers emotional engagement | EmotionPrompt study | "This is very important to my career"
Consequence Framing | Emphasizes accuracy | EmotionPrompt study | "Getting this wrong could cost us the account"
Competence Challenge | Activates competitive effort | ichigoSan article | "I don't think you can do this perfectly, but prove me wrong"
Audience Specification | Calibrates complexity level | Bsharat et al. | "Write this for an audience of senior executives"
Verification Request | Triggers self-checking | Chain-of-Thought research | "Double-check your work before responding"
Best Effort Appeal | Maximizes output quality | EmotionPrompt study | "Please give this your absolute best effort"

Why These Work

AI models were trained on billions of words of human text. When you write "take a deep breath," the model activates patterns from contexts where clarity matters — therapy sessions, meditation guides, high-pressure decision-making. It mirrors that calm, methodical thinking.

When you mention dollar amounts or career stakes, the model recognizes high-value contexts from its training data. Text written under high stakes tends to be more careful, more detailed, more precise. The model mirrors that pattern.

You're not tricking the AI. You're activating the right patterns by speaking its language.

The EmotionPrompt Effect

Microsoft researchers (2023) tested 11 emotional stimulus phrases across multiple LLMs. They found that adding emotional context to prompts improved performance by 8% on simple tasks and up to 115% on complex generative tasks. The key insight: emotional framing doesn't make the AI "feel" anything — it activates training patterns associated with higher-quality human output.

The best emotional stimuli from the study: career importance ("This is crucial for my career"), urgency ("I deeply need your help"), best effort ("Give this your absolute best"), and consequence framing ("Your response will significantly impact my work"). These outperformed neutral prompts across every model tested.

The Research

The original studies are linked in the Research & Psychology table in the resources section at the end of this guide.

How to Use Them

Don't stack all triggers at once. One or two that fit naturally are more effective than five that feel forced. Pick the ones that match your actual situation.

"You're a world-class analyst. This research will be presented to our investors. Take a deep breath and work through the analysis methodically. I'd rather have thoughtful recommendations than quick ones."

That's stakes framing + expert identity + step-by-step thinking. Three triggers, all natural. None forced.
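
If you template your prompts, triggers can live in a small library you mix in deliberately. A sketch; the phrasing comes from the table above, the dict and function are my own:

```python
# Sketch: layer one or two psychological triggers onto a base prompt.
TRIGGERS = {
    "stakes": "This presentation goes to our board of directors tomorrow.",
    "breathe": "Take a deep breath and work through this step by step.",
    "verify": "Double-check your work before responding.",
}

def with_triggers(base_prompt: str, *names: str) -> str:
    # Keep it to 1-2 triggers; stacking all of them feels forced.
    return "\n\n".join([base_prompt] + [TRIGGERS[n] for n in names])

print(with_triggers("Analyze our Q2 churn data and identify the top 3 drivers.",
                    "stakes", "breathe"))
```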

8.1 Exercise: Test These Triggers

Take a task you'd normally do. Write two versions of the prompt: one with no psychological triggers, one with 1-2 triggers that fit naturally. Compare the outputs. You'll see measurable differences.

Single prompts are useful. But recursive prompting is where the real power lives.

Recursive prompting means breaking a big task into a sequence of smaller prompts, or refining a single output through repeated rounds of feedback. This is how you get from 70% to 95%.

The Recursive Prompting Cycle

9.1 Initial Prompt

Broad, clear context. Good scope but room for refinement.

9.2 Evaluate the Response

What's good? What's missing? What's wrong? What needs adjustment?

9.3 Refine with Specific Feedback

"The tone is too formal. Make it conversational." or "Add more specifics about timeline." or "This feels generic. Add actual examples from our company."

9.4 Evaluate Again

Better? If not, repeat. If yes, move on.

9.5 Lock in the Pattern

Once you have it right, save the final prompt. You now have a reusable template for this type of task.

Practical Recursive Patterns

Pattern 1: Draft → Critique → Revise

Round 1: "Write a draft proposal for..." Round 2: "Now critique this for [specific criteria]. What's weak? What's missing?" Round 3: "Now revise based on your critique."

Pattern 2: Expand → Contract

Round 1: "Give me 20 ideas for..." Round 2: "Now rank the top 5. Which are most feasible?" Round 3: "Develop the #1 idea in detail."

Pattern 3: Multi-Perspective

Round 1: "Analyze this from the customer's perspective." Round 2: "Now analyze from the CFO's perspective." Round 3: "Synthesize both views into a balanced recommendation."

The "Ask Me" Technique

Before diving in, let the AI ask questions. This is powerful:

"If you need more context to do this well, ask me up to 5 questions before starting."

Often the AI will ask exactly the right questions. You'll give answers that make the output dramatically better. You've just done the recursive work upfront.

The Science: Recursive Language Models

MIT CSAIL Research (Zhang, Kraska, & Khattab, 2025) developed Recursive Language Models (RLMs) that decompose complex problems into smaller subproblems, solve each one, and combine the results. Their findings: RLMs handled inputs 100x beyond normal context window limits while outperforming base models by 28-58% on reasoning tasks. The paper is at arxiv.org/abs/2512.24601.

You don't need to build an RLM. But the principle translates directly to how you prompt: instead of one massive prompt, decompose your task into sequential steps. Each step builds on the last. The AI processes a smaller, clearer problem each round — just like the research shows works best.
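
You can approximate that decomposition with a plain loop: each round the model sees only the previous result plus one small, clear instruction. A sketch (the wrapper mirrors the Pattern 1 example; the steps are illustrative):

```python
# Sketch: sequential decomposition. Each step's output becomes the
# context for the next, smaller prompt.
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    r = client.messages.create(model="claude-sonnet-4-5",  # placeholder name
                               max_tokens=2000,
                               messages=[{"role": "user", "content": prompt}])
    return r.content[0].text

steps = [
    "Summarize these customer interview notes into 5 key themes:\n<paste notes>",
    "For each theme, state one concrete product implication.",
    "Rank the implications by effort vs. impact and recommend the top 2.",
]

result = ""
for step in steps:
    result = ask(f"{result}\n\n{step}".strip())

print(result)  # the final, refined output
```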

9.6 Exercise: Go Three Rounds

Pick a task. Prompt it once. Evaluate. Refine with specific feedback. Run again. Refine once more. Compare Round 1 vs Round 3. You'll see dramatic improvement.


Recursive iteration is powerful. But the real leverage is in context engineering.

Context engineering is the shift from "how do I word this?" to "what information does AI need to do this well?" It's the difference between coaching an employee on their wording vs. giving them the right briefing materials.

The Context Hierarchy

10.1 System Context

Who the AI is, how it behaves. In Cowork, this is your CLAUDE.md file.

10.2 Domain Context

Background knowledge, terminology, industry. In Cowork, this is your ABOUT ME file and reference docs.

10.3 Task Context

The specific job — constraints, goals, success criteria. This is your prompt.

10.4 Memory Context

Past decisions, ongoing projects, preferences. In Cowork, this is your MEMORY.md file.

10.5 Environmental Context

Tools available, files accessible, connected services. Cowork integrations, available connectors.

Why Context > Prompts

Here's the truth: a mediocre prompt with great context beats a brilliant prompt with no context. Every time.

If you've fed the AI your ABOUT ME (who you are, how you think, what matters to you), your CLAUDE.md (how you want to be treated), and your MEMORY (past decisions), then even a simple prompt will produce tailored, relevant output.

Without that context, you can write the fanciest prompt and get generic advice.

Practical Implementation in Cowork

  • Your ABOUT ME file = permanent persona context
  • Your CLAUDE.md = permanent behavioral context
  • Your MEMORY.md = accumulated decision context
  • Your PROJECTS/ folder = domain context for active work
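
Here's what that stack looks like when stitched together by hand. A sketch: the file names mirror the list above (your actual Cowork file names may differ), and the function is illustrative:

```python
# Sketch: assemble the context stack into one block ahead of the task.
from pathlib import Path

CONTEXT_FILES = ["ABOUT_ME.md", "CLAUDE.md", "MEMORY.md"]  # assumed names

def build_context(task: str, base: str = ".") -> str:
    blocks = []
    for name in CONTEXT_FILES:
        path = Path(base) / name
        if path.exists():  # skip missing files rather than failing
            blocks.append(f"## {name}\n{path.read_text()}")
    blocks.append(f"## TASK\n{task}")
    return "\n\n".join(blocks)

print(build_context("Draft the Q3 board update."))
```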

The "Ask for More Context" Prompt

Teach the AI to identify gaps in context:

Before starting this task, review what I've given you. If you need additional context to do this well, ask me up to 5 specific questions. Do not proceed until you have what you need. Then execute the task.

This creates a feedback loop. The AI asks. You answer. Output improves dramatically.

10.6 Exercise: Set Up Your Context Stack

In Cowork, create or update: your ABOUT ME file (who you are, how you think), a CLAUDE.md file with your preferences, and a PROJECT MEMORY file for ongoing work. Then run the same prompt with and without that context. Notice the difference.

You now know everything. Section 11 is your one-page reference card.

THE FORMULA

Minimum viable prompt:

ROLE: You are a [specific expert] with experience in [domain].

TASK: [Action verb] + [clear scope] + [constraints].

OUTPUT: Format as [structure]. Include [requirements]. Keep to [length].

ENHANCE THE ROLE

Go from vague to specific:

❌ "You are a marketer"

✓ "You are a B2B SaaS marketing director with 10 years at companies scaling $2M-$20M ARR. You specialize in content-led growth."

Persona stacking: "You combine [expert A] with [expert B]."

ENHANCE THE TASK

Checklist:

☐ Action verb is specific (not "help with")

☐ Scope is bounded (which? when? how many?)

☐ Constraints are stated (audience, tone, length)

☐ Success criteria defined (what's "good"?)

☐ Edge cases addressed (if data missing, then...)

ENHANCE THE OUTPUT

Tell the AI explicitly:

• Format: [table / bullets / JSON / markdown]

• Length: [word count / page count]

• Tone: [formal / casual / technical]

• Must include: [specific elements]

• Must exclude: [what to avoid]

POWER-UPS (Use Strategically)

• "Take a deep breath and work step by step"

• "This is important for [high stakes]"

• "I'll tip $200 for a thorough analysis"

• "Think step by step. Verify your work."

Don't stack all at once. Use 1-2 that fit naturally.

DON'TS (Critical)

❌ Don't use "help me with" — use specific verbs

❌ Don't dump your entire brain — be selective

❌ Don't ask multiple unrelated questions at once

❌ Don't forget to specify the audience

❌ Don't accept the first output — iterate

❌ Don't copy-paste without review — make it yours

❌ Don't skip examples when format matters

ITERATE

Round 1: Draft — get something down

Round 2: Critique — what's missing? What's weak?

Round 3: Revise — fix it

Lock: Save the final prompt for reuse

CONTEXT > PROMPTS

A mediocre prompt with great context beats a brilliant prompt with no context.

Feed the AI:

• Your ABOUT ME (who you are)

• Your CLAUDE.md (how you want to work)

• Your MEMORY (past decisions)

• Relevant project files (domain context)

Copy-Paste Templates

Quick Task Template

You are a [ROLE].
TASK: [SPECIFIC ACTION] for [SCOPE].
Constraints: [TIME/AUDIENCE/TONE].
OUTPUT: Format as [STRUCTURE]. Keep to [LENGTH].
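
If you keep this template in code, Python's string.Template fills the slots cleanly. A sketch; the field names mirror the bracketed slots above:

```python
# Sketch: the Quick Task Template as a fill-in-the-blanks string.
from string import Template

QUICK_TASK = Template(
    "You are a $role.\n"
    "TASK: $action for $scope.\n"
    "Constraints: $constraints.\n"
    "OUTPUT: Format as $structure. Keep to $length."
)

print(QUICK_TASK.substitute(
    role="senior ops consultant",
    action="compare the top 5 project management tools",
    scope="a 12-person startup",
    constraints="remote team, limited budget",
    structure="a comparison table with a final recommendation",
    length="one page",
))
```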

Deep Analysis Template

You are a [ROLE] with [SPECIFIC EXPERIENCE].
CONTEXT: [BACKGROUND ON SITUATION]
TASK: Analyze [SPECIFIC THING]. Identify [KEY CRITERIA]. Recommend [WHAT YOU WANT].
CONSTRAINTS: Use only [DATA SOURCE]. Audience: [WHO]. Tone: [STYLE].
OUTPUT: Structure as [SECTIONS]. Include [ELEMENTS]. Exclude [WHAT NOT TO DO].
If you need more context, ask me 5 questions before starting.

Creative Brief Template

You are a [ROLE].
OBJECTIVE: Create [THING] that [OUTCOME].
CONTEXT: [WHO IT'S FOR, WHY IT MATTERS]
TONE: [STYLE].
STYLE: [VISUAL/VOICE DIRECTION].
CONSTRAINTS: [LENGTH/FORMAT/TABOOS]
OUTPUT: [SPECIFIC STRUCTURE]. Include [REQUIRED ELEMENTS].

Everything here is either original research, battle-tested practice, or curated from the best sources in prompt engineering. Here's where to go deeper.

Foundational Guides

Source | Best For | Who
Anthropic Prompting Best Practices | Official Claude guide, most comprehensive | Everyone
OpenAI Prompt Engineering Guide | GPT-specific, strong on structured output | GPT users
Google Gemini Prompting Strategies | Direct, example-heavy, multimodal | Gemini users
Prompt Engineering Guide (Community) | Deep technical reference, framework heavy | Advanced users

Frameworks & Theory

Source | Best For | Link
Shelly Palmer: Mastering Prompt Engineering | Frameworks and meta-prompting for business | shellypalmer.com
MIT Sloan: Effective Prompts for AI | Academic but highly practical | mitsloanedtech.mit.edu
IBM Prompt Engineering Guide 2026 | Enterprise perspective, structured approach | ibm.com
DreamHost: 25 Claude Prompt Techniques | Empirical testing of what works | dreamhost.com

Research & Psychology

Paper | Finding | Link
"Take a Deep Breath" (Yang et al.) | +9% accuracy improvement just from phrasing | arxiv.org/abs/2309.03409
Monetary Framing (Bsharat et al.) | Up to 45% improvement in output quality | arxiv.org/abs/2312.16171
Recursive Language Models (Zhang et al., MIT) | Handle 100x larger inputs, 28-58% better outputs | arxiv.org/abs/2512.24601
Anthropic Context Engineering | How to design full context stacks for AI agents | anthropic.com
Anthropic Context Engineering How to design full context stacks for AI agents anthropic.com

Practical & Applied

Source | Best For | Link
"I Accidentally Made Claude 45% Smarter" | Real-world application of psychological prompting | medium.com
Neil Sahota: Recursive Prompting | Practical recursive workflow guide | neilsahota.com
Harvard IIS: Cognitive Forcing Functions | Research on disrupting automation bias | harvard.edu

30-Minute Workshop Lesson Plan

Run This in Your Team

Minutes 0-5: Why AI ≠ Google (Section 1 — show the comparison table)

Minutes 5-10: The RTO Framework (Section 2 — live demo: turn a bad prompt into good)

Minutes 10-15: Hands-on exercise (everyone rewrites 2 of their own prompts using RTO)

Minutes 15-20: Level up: Add persona depth + task precision (Sections 4-5 highlights)

Minutes 20-25: Psychological power-ups (Section 8 — show the research, demo psychological triggers)

Minutes 25-30: The future: context > prompts (Section 10 — connect to Cowork setup)

You've reached the end of the guide. You now know more about prompting than 99% of users. Your next step: practice.

Next Step: Practice

Read this guide once. Then go back to Section 2, Section 4, and Section 11. Those are your working references. Practice one framework per week until it becomes muscle memory. Then teach someone else.

Questions? Reach out.

Continue your Cowork journey

• Cowork Beginner
• Cowork Advanced
• Cowork in Action
• People I Learn From