From Search Bar to Strategic Partner
Stop Googling. Start engineering. A practical guide to getting dramatically better results from AI.
Everyone has access to the same AI. Not everyone gets the same results.
The difference isn't the tool — it's how you talk to it.
Prompting is the highest-leverage skill you can develop right now.
This playbook will take you from typing questions into a search bar to engineering conversations with a strategic partner. Every technique is research-backed, battle-tested, and designed for people who use AI to get real work done.
Google is a lookup tool. Ask it a question, get links to answers. AI is a production tool. Give it a task, get finished work.
| Google Search | Basic AI Prompt | Engineered Prompt |
|---|---|---|
| "best project management tools 2026" | "What are the best project management tools?" | "You are a senior ops consultant. Compare the top 5 project management tools for a 12-person startup. Evaluate on: pricing, integrations, learning curve, and remote team features. Output as a comparison table with a final recommendation." |
AI is a capable new hire on their first day. Brilliant but lacking YOUR context. Your job isn't to ask better questions. Your job is to give better briefs.
Most people waste AI because they search-engine it. "Best email templates." "How do I write a proposal?" These get generic answers. Instead, load the prompt with specificity: role, constraints, examples, output format. The more context you give, the better the work.
Take your last 3 Google searches. For each one, rewrite it as an AI prompt. Start with: "You are a [specific expert]. I need you to [task]. Here's the context: [what I'm doing, who it's for, what success looks like]. Output as [format]."
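If you end up writing this rewrite over and over, the template can be captured as a small helper. A minimal sketch in Python; the function name and all field values are illustrative, not part of any library:

```python
def rewrite_as_prompt(expert: str, task: str, context: str, output_format: str) -> str:
    """Turn a search-style query into an engineered prompt using the template above."""
    return (
        f"You are a {expert}. "
        f"I need you to {task}. "
        f"Here's the context: {context}. "
        f"Output as {output_format}."
    )

prompt = rewrite_as_prompt(
    expert="senior ops consultant",
    task="compare the top 5 project management tools for a 12-person startup",
    context="remote team, limited budget, success means a tool adopted within 30 days",
    output_format="a comparison table with a final recommendation",
)
print(prompt)
```

The point isn't the code, it's the habit: every search you run through this template forces you to name the expert, the context, and the format before you hit enter.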
Get feedback here, then paste your improved prompt into Claude to see the difference.
Ready to learn the core framework? Continue to Section 2.
RTO stands for Role-Task-Output. It's not revolutionary, but it's reliable. Every prompt you write should include these three elements:
When you specify a role, you activate a whole pattern of knowledge in the AI model. The AI was trained on text from countless perspectives. By saying "You are a senior recruiter," you're lighting up that specific region of the model. Then you tell it exactly what to do. Then you tell it how to package the answer. Done.
"Help me with my resume"
Missing: role, specificity, output format. This will generate okay advice but nothing tailored to your situation.
"You are a senior recruiter at a Fortune 500 tech company. Review my resume. Identify the 3 weakest bullet points and rewrite each using the STAR method. Format as a before/after table."
ROLE: Senior recruiter at Fortune 500 tech — this activates expertise-specific patterns
TASK: Review resume, identify 3 weakest points, rewrite using STAR — this is concrete and bounded
OUTPUT: Before/after table — this controls the format and makes comparison easy
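For tasks you repeat, the three RTO blanks can be wired into one reusable function. A minimal sketch; the helper name is illustrative:

```python
def rto_prompt(role: str, task: str, output: str) -> str:
    """Assemble a Role-Task-Output prompt from its three parts."""
    return f"You are {role}. {task} {output}"

resume_review = rto_prompt(
    role="a senior recruiter at a Fortune 500 tech company",
    task=("Review my resume. Identify the 3 weakest bullet points "
          "and rewrite each using the STAR method."),
    output="Format as a before/after table.",
)
```

Keyword arguments double as the checklist: if you can't fill in one of the three parameters, the prompt isn't ready.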
Pick 3 tasks you'll do this week. For each one, write a complete RTO prompt. Don't overthink it. Just fill in the three blanks. Get comfortable with the template.
Once you've mastered RTO, the frameworks in Section 3 will make much more sense.
RTO is simple and effective for most tasks. But there are other frameworks designed for specific challenges. Here are the ones that actually move the needle:
| Framework | Structure | Best For |
|---|---|---|
| RTO | Role → Task → Output | Quick tasks, daily use |
| RISEN | Role → Instructions → Steps → End goal → Narrowing | Complex multi-step work |
| COSTAR | Context → Objective → Style → Tone → Audience → Response | Marketing & content |
| RISE | Role → Input → Steps → Expectations | Executive briefings |
| CRAFT | Context → Role → Action → Format → Tone | Strategic communications |
| Chain-of-Thought | "Think step by step" | Analysis & reasoning |
| Tree-of-Thoughts | Explore 3 approaches, compare, pick best | Strategic decisions |
| Few-Shot | Show 3-5 examples of desired output | Consistent formatting |
If you're briefing AI for board-level or investor-facing work, RISE (Role → Input → Steps → Expectations) and CRAFT (Context → Role → Action → Format → Tone) add explicit expectation-setting. Both come from executive prompting playbooks and force you to define what "good" looks like before the AI starts working.
If your task has multiple stages, RISEN gives structure. Role, specific instructions, numbered steps, the end goal, and narrowing criteria to refine the output.
When writing for an audience, COSTAR forces you to think through context (the situation), objective (what you want to happen), style (voice), tone (emotional temperature), and audience (who reads this). The output is tighter because you've removed ambiguity.
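COSTAR's six fields map naturally onto a keyword-only helper, which makes skipping a field a visible error rather than a silent omission. A sketch; all field contents are illustrative:

```python
def costar_prompt(*, context: str, objective: str, style: str,
                  tone: str, audience: str, response: str) -> str:
    """Assemble a COSTAR prompt; keyword-only arguments make each field explicit."""
    return (
        f"Context: {context}\n"
        f"Objective: {objective}\n"
        f"Style: {style}\n"
        f"Tone: {tone}\n"
        f"Audience: {audience}\n"
        f"Response: {response}"
    )

launch_email = costar_prompt(
    context="We are launching an AI contract-analysis feature next month.",
    objective="Get existing customers to join the beta.",
    style="Concise, benefit-led marketing copy.",
    tone="Confident but not hypey.",
    audience="In-house legal teams at mid-market companies.",
    response="A 150-word announcement email with one call to action.",
)
```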
Sometimes the best thing to add to a prompt is: "Think step by step." This slows down the AI's reasoning and reduces errors. Try it on any task involving judgment, math, or logic.
Pick a task you need to do. Write it using RTO. Then rewrite it using COSTAR. Run both. Compare the outputs. You'll see why framework choice matters.
Frameworks give you structure. Persona engineering fills in the details.
Good personas aren't vague. They're specific. Here's how to build them:
"You are a marketer"
This is too generic. Marketers vary wildly. You'll get generic output.
"You are a B2B SaaS marketing director"
Better. Now the model knows the domain (B2B SaaS) and the level (director). Output will be more relevant.
"You are a B2B SaaS marketing director with 10 years' experience at companies scaling from $2M to $20M ARR. You specialize in content-led growth and have a bias toward data-driven decisions over hunches."
Now you're cooking. The model has a clear picture. Experience range, specialization, decision-making style. Output will be specific and sharp.
You can combine personas. This is powerful for creative work or strategic analysis. For example: "You combine the structured rigor of a management consultant with the storytelling instincts of a screenwriter."
Now the output has both structure and flair. Persona stacking works best with 2-3 combinations. More than that and you'll confuse the model.
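If you stack personas often, the 2-3 rule can be enforced in a small helper. A sketch; the function name and phrasing are illustrative:

```python
def stack_personas(*personas: str) -> str:
    """Combine 2-3 personas into one stacked role description.

    Raises AssertionError outside the 2-3 range, because more
    personas tend to confuse the model rather than enrich it.
    """
    assert 2 <= len(personas) <= 3, "stacking works best with 2-3 personas"
    head, *rest = personas
    return (f"You combine the instincts of {head} with those of "
            + " and ".join(rest) + ".")

role = stack_personas(
    "a management consultant who structures problems rigorously",
    "a brand copywriter who makes every sentence land",
)
```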
Avoid: "You are a helpful assistant." That's the default. It adds zero information. Instead, go specific: "You are a developer with 8 years building infrastructure at Stripe." Now you have signal.
Think of 3 experts you wish you had access to. For each, write a detailed persona: title, years of experience, domain, specialization, decision-making style. These become templates you reuse.
Good personas matter. But they only work if the task is equally well-defined. Section 5 covers task precision.
A great persona with a vague task still produces vague output. You need to define the task clearly.
"Help me with my business strategy"
"Create a go-to-market strategy for our new product"
"Create a 90-day go-to-market strategy for our new feature (AI-powered contract analysis). Our ICP is in-house legal teams at mid-market companies (500-5000 employees). We have no case studies yet. Budget: $50k. Constraints: no enterprise sales team. Output: a phased timeline with specific milestones and success metrics for each phase."
"Help me improve our onboarding"
"Audit our current 7-day onboarding for enterprise software. Identify the top 3 drop-off points. For each, propose a specific intervention. Format as a prioritized list with estimated effort to implement."
Find a prompt you've used before. Rewrite it with: specific action verb, clear scope boundaries, stated constraints, and success criteria. Run it. See the difference.
Good persona + good task = good foundation. Section 6 covers controlling the output format itself.
Even with a good role and task, the AI still has choices about format. Your job is to remove those choices.
"Format as a table with columns for..." or "Structure this as an executive summary (1 paragraph) followed by detailed findings (3-4 paragraphs)."
"Keep to 3 paragraphs" or "Maximum 200 words" or "One page, single-spaced."
"Write for a C-suite audience (assume no technical background)" or "Use casual, conversational tone as if explaining to a friend."
Provide the first row or first section of the output and let the AI complete it. This is surprisingly powerful. The model mirrors your structure and style.
Instead of asking for a comparison table from scratch, provide the header:

| Tool | Price | Integrations | Verdict |
|---|---|---|---|
The AI will follow your exact format because you've shown the structure.
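Programmatically, anchoring just means appending the skeleton you want mirrored to the end of the prompt. A sketch; the helper name and header columns are illustrative:

```python
def anchored_prompt(task: str, anchor: str) -> str:
    """Append a structural anchor so the model completes your format
    instead of inventing its own."""
    return f"{task}\n\nComplete the following:\n\n{anchor}"

header = "| Tool | Price | Integrations | Verdict |\n|---|---|---|---|"
prompt = anchored_prompt(
    "Compare the top 3 project management tools for a 12-person startup.",
    header,
)
```

Because the anchor is the last thing the model reads, it's the strongest signal about what comes next.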
Take a task you need done. Write a prompt that specifies: format, length, tone, what to include, what to exclude, and optionally provide an anchor (the first row/section). Run it. Compare to a prompt without these specs.
Now you know what to do and how to shape the output. Section 7 covers what NOT to do.
Good prompting isn't just about what you add. It's about what you remove or prevent.
Replace with specific verbs: analyze, compare, draft, evaluate, summarize. "Help me with my budget" becomes "Create a zero-based budget for Q2 with line items for [departments]."
Be selective with context. More isn't always better. Give the AI what it needs to know, not everything you know. Relevant information beats volume.
One prompt = one clear objective. If you need multiple things, send multiple prompts. The AI's attention gets scattered otherwise.
Always include: "This is for [audience]." CEOs think differently than individual contributors. Customers think differently than investors. Make it explicit.
Iteration is where quality lives. The first output is usually 70% there. Refine it. Tell the AI what to change. Push back. That's when you get 90%+.
AI is a tool, not a replacement for judgment. Read what it produces. Fact-check claims. Adjust tone. Make it yours. This takes 10% extra time and prevents 90% of problems.
If consistent formatting is important, show the AI an example. One good example beats 100 words of instruction.
"Do NOT include disclaimers or caveats" or "Do NOT mention competitors."
"Only use information from the attached document. Do not draw on external knowledge."
"Before finalizing, verify your reasoning by checking your work against the original source."
"If you're unsure about a fact, say so explicitly. Accuracy matters more than sounding confident."
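Guardrails like these are easy to standardize: keep them in one list and append them to every prompt that needs them. A sketch; the constant and function names are illustrative:

```python
GUARDRAILS = [
    "Do NOT include disclaimers or caveats.",
    "Only use information from the attached document. Do not draw on external knowledge.",
    "If you're unsure about a fact, say so explicitly. Accuracy matters more than sounding confident.",
]

def with_guardrails(prompt: str, guardrails: list[str] = GUARDRAILS) -> str:
    """Append explicit guardrails so the model knows what not to do."""
    return prompt + "\n\nRules:\n" + "\n".join(f"- {g}" for g in guardrails)
```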
You now know the fundamentals. Section 8 covers advanced psychology — ways to measurably improve output quality through language.
| Trigger | What It Does | Research | Example |
|---|---|---|---|
| Stakes Framing | Activates higher-effort patterns | EmotionPrompt +8-115% | "This presentation goes to our board of directors tomorrow" |
| Monetary Framing | Signals high-value task | Bsharat et al. +45% | "I'll tip $200 for a thorough analysis" |
| Expert Identity | Activates domain expertise | Bsharat et al. +60% | "You are a world-class strategist with 20 years of experience" |
| Step-by-Step Breathing | Slows reasoning, reduces errors | DeepMind +9% | "Take a deep breath and work through this step by step" |
| Career Importance | Triggers emotional engagement | EmotionPrompt study | "This is very important to my career" |
| Consequence Framing | Emphasizes accuracy | EmotionPrompt study | "Getting this wrong could cost us the account" |
| Competence Challenge | Activates competitive effort | ichigoSan article | "I don't think you can do this perfectly, but prove me wrong" |
| Audience Specification | Calibrates complexity level | Bsharat et al. | "Write this for an audience of senior executives" |
| Verification Request | Triggers self-checking | Chain-of-Thought research | "Double-check your work before responding" |
| Best Effort Appeal | Maximizes output quality | EmotionPrompt study | "Please give this your absolute best effort" |
AI models were trained on billions of words of human text. When you write "take a deep breath," the model activates patterns from contexts where clarity matters — therapy sessions, meditation guides, high-pressure decision-making. It mirrors that calm, methodical thinking.
When you mention dollar amounts or career stakes, the model recognizes high-value contexts from its training data. Text written under high stakes tends to be more careful, more detailed, more precise. The model mirrors that pattern.
You're not tricking the AI. You're activating the right patterns by speaking its language.
Microsoft researchers (2023) tested 11 emotional stimulus phrases across multiple LLMs. They found that adding emotional context to prompts improved performance by 8% on simple tasks and up to 115% on complex generative tasks. The key insight: emotional framing doesn't make the AI "feel" anything — it activates training patterns associated with higher-quality human output.
Don't stack all triggers at once. One or two that fit naturally are more effective than five that feel forced. Pick the ones that match your actual situation.
For example: "This analysis goes to our board of directors tomorrow. You are a world-class strategist with 20 years of experience. Take a deep breath and work through this step by step." That's stakes framing + expert identity + step-by-step thinking. Three triggers, all natural. None forced.
Take a task you'd normally do. Write two versions of the prompt: one with no psychological triggers, one with 1-2 triggers that fit naturally. Compare the outputs. You'll see measurable differences.
Single prompts are useful. But recursive prompting is where the real power lives.
Recursive prompting is where you break a big task into multiple smaller prompts, or repeatedly refine a single prompt through multiple rounds of feedback. This is how you get from 70% to 95%.
Prompt: Broad, clear context. Good scope but room for refinement.
Evaluate: What's good? What's missing? What's wrong? What needs adjustment?
Refine: "The tone is too formal. Make it conversational." or "Add more specifics about timeline." or "This feels generic. Add actual examples from our company."
Check: Better? If not, repeat. If yes, move on.
Lock: Once you have it right, save the final prompt. You now have a reusable template for this type of task.
Round 1: "Write a draft proposal for..." Round 2: "Now critique this for [specific criteria]. What's weak? What's missing?" Round 3: "Now revise based on your critique."
Round 1: "Give me 20 ideas for..." Round 2: "Now rank the top 5. Which are most feasible?" Round 3: "Develop the #1 idea in detail."
Round 1: "Analyze this from the customer's perspective." Round 2: "Now analyze from the CFO's perspective." Round 3: "Synthesize both views into a balanced recommendation."
Before diving in, let the AI ask questions. This is powerful: "Before you start, ask me any clarifying questions that would improve your output."
Often the AI will ask exactly the right questions. You'll give answers that make the output dramatically better. You've just done the recursive work upfront.
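The refine loop is simple enough to sketch in code. Here `call_llm` is a hypothetical placeholder for whatever model call you use; the stub below just echoes so the loop runs without a live model:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your model call (e.g., an API client).
    This stub echoes the prompt so the loop below is runnable as-is."""
    return f"[model response to: {prompt[:40]}...]"

def refine(task: str, critiques: list[str]) -> str:
    """Draft once, then feed each round of specific feedback back in."""
    draft = call_llm(task)
    for critique in critiques:
        draft = call_llm(
            f"Here is the current draft:\n{draft}\n\nRevise it: {critique}"
        )
    return draft

final = refine(
    "Write a draft proposal for migrating our analytics stack.",
    ["The tone is too formal. Make it conversational.",
     "Add more specifics about timeline."],
)
```

Each pass sends the previous draft back with one specific critique, which is exactly the Round 1 / Round 2 / Round 3 pattern above.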
You don't need to build an RLM (Recursive Language Model). But the principle translates directly to how you prompt: instead of one massive prompt, decompose your task into sequential steps. Each step builds on the last. The AI processes a smaller, clearer problem each round — just like the research shows works best.
Pick a task. Prompt it once. Evaluate. Refine with specific feedback. Run again. Refine once more. Compare Round 1 vs Round 3. You'll see dramatic improvement.
Recursive iteration is powerful. But the real leverage is in context engineering.
• Identity: Who the AI is, how it behaves. In Cowork, this is your CLAUDE.md file.
• Domain: Background knowledge, terminology, industry. In Cowork, this is your ABOUT ME file and reference docs.
• Task: The specific job — constraints, goals, success criteria. This is your prompt.
• Memory: Past decisions, ongoing projects, preferences. In Cowork, this is your MEMORY.md file.
• Environment: Tools available, files accessible, connected services. Cowork integrations, available connectors.
Here's the truth: a mediocre prompt with great context beats a brilliant prompt with no context. Every time.
If you've fed the AI your ABOUT ME (who you are, how you think, what matters to you), your CLAUDE.md (how you want to be treated), and your MEMORY (past decisions), then even a simple prompt will produce tailored, relevant output.
Without that context, you can write the fanciest prompt and get generic advice.
Teach the AI to identify gaps in context: "Before you answer, tell me what additional context from me would most improve your output."
This creates a feedback loop. The AI asks. You answer. Output improves dramatically.
In Cowork, create or update: your ABOUT ME file (who you are, how you think), a CLAUDE.md file with your preferences, and a PROJECT MEMORY file for ongoing work. Then run the same prompt with and without that context. Notice the difference.
You now know everything. Section 11 is your one-page reference card.
Minimum viable prompt:
ROLE: You are a [specific expert] with experience in [domain].
TASK: [Action verb] + [clear scope] + [constraints].
OUTPUT: Format as [structure]. Include [requirements]. Keep to [length].
Go from vague to specific:
❌ "You are a marketer"
✓ "You are a B2B SaaS marketing director with 10 years at companies scaling $2M-$20M ARR. You specialize in content-led growth."
Persona stacking: "You combine [expert A] with [expert B]."
Checklist:
☐ Action verb is specific (not "help with")
☐ Scope is bounded (which? when? how many?)
☐ Constraints are stated (audience, tone, length)
☐ Success criteria defined (what's "good"?)
☐ Edge cases addressed (if data missing, then...)
Tell the AI explicitly:
• Format: [table / bullets / JSON / markdown]
• Length: [word count / page count]
• Tone: [formal / casual / technical]
• Must include: [specific elements]
• Must exclude: [what to avoid]
• "Take a deep breath and work step by step"
• "This is important for [high stakes]"
• "I'll tip $200 for a thorough analysis"
• "Think step by step. Verify your work."
Don't stack all at once. Use 1-2 that fit naturally.
❌ Don't use "help me with" — use specific verbs
❌ Don't dump your entire brain — be selective
❌ Don't ask multiple unrelated questions at once
❌ Don't forget to specify the audience
❌ Don't accept the first output — iterate
❌ Don't copy-paste without review — make it yours
❌ Don't skip examples when format matters
Round 1: Draft — get something down
Round 2: Critique — what's missing? What's weak?
Round 3: Revise — fix it
Lock: Save the final prompt for reuse
A mediocre prompt with great context beats a brilliant prompt with no context.
Feed the AI:
• Your ABOUT ME (who you are)
• Your CLAUDE.md (how you want to work)
• Your MEMORY (past decisions)
• Relevant project files (domain context)
Everything here is either original research, battle-tested practice, or curated from the best sources in prompt engineering. Here's where to go deeper.
| Source | Best For | Who |
|---|---|---|
| Anthropic Prompting Best Practices | Official Claude guide, most comprehensive | Everyone |
| OpenAI Prompt Engineering Guide | GPT-specific, strong on structured output | GPT users |
| Google Gemini Prompting Strategies | Direct, example-heavy, multimodal | Gemini users |
| Prompt Engineering Guide (Community) | Deep technical reference, framework heavy | Advanced users |
| Source | Best For | Link |
|---|---|---|
| Shelly Palmer: Mastering Prompt Engineering | Frameworks and meta-prompting for business | shellypalmer.com |
| MIT Sloan: Effective Prompts for AI | Academic but highly practical | mitsloanedtech.mit.edu |
| IBM Prompt Engineering Guide 2026 | Enterprise perspective, structured approach | ibm.com |
| DreamHost: 25 Claude Prompt Techniques | Empirical testing of what works | dreamhost.com |
| Paper | Finding | Link |
|---|---|---|
| "Take a Deep Breath" (Yang et al.) | +9% accuracy improvement just from phrasing | arxiv.org/abs/2309.03409 |
| Monetary Framing (Bsharat et al.) | Up to 45% improvement in output quality | arxiv.org/abs/2312.16171 |
| Recursive Language Models (Zhang et al., MIT) | Handle 100x larger inputs, 28-58% better outputs | arxiv.org/abs/2512.24601 |
| Anthropic Context Engineering | How to design full context stacks for AI agents | anthropic.com |
| Source | Best For | Link |
|---|---|---|
| "I Accidentally Made Claude 45% Smarter" | Real-world application of psychological prompting | medium.com |
| Neil Sahota: Recursive Prompting | Practical recursive workflow guide | neilsahota.com |
| Harvard IIS: Cognitive Forcing Functions | Research on disrupting automation bias | harvard.edu |
Minutes 0-5: Why AI ≠ Google (Section 1 — show the comparison table)
Minutes 5-10: The RTO Framework (Section 2 — live demo: turn a bad prompt into good)
Minutes 10-15: Hands-on exercise (everyone rewrites 2 of their own prompts using RTO)
Minutes 15-20: Level up: Add persona depth + task precision (Sections 4-5 highlights)
Minutes 20-25: Psychological power-ups (Section 8 — show the research, demo psychological triggers)
Minutes 25-30: The future: context > prompts (Section 10 — connect to Cowork setup)
You've reached the end of the guide. You now know more about prompting than 99% of users. Your next step: practice.
Read this guide once. Then go back to Section 2, Section 4, and Section 11. Those are your working references. Practice one framework per week until it becomes muscle memory. Then teach someone else.
Questions? Reach out.
Continue your Cowork journey