The Mentor's Mirror
There's a well-known technique in programming called rubber duck debugging. You explain your problem to a rubber duck — out loud, step by step — and the act of explaining often reveals the solution. The duck doesn't do anything. The explaining does the work.
Mentoring someone in AI does the same thing, but better, because the duck talks back.
When you teach someone how to use AI — a cousin learning to build websites, a friend figuring out how to organize files, a colleague trying to understand what agents can do — you're forced to articulate things you've been doing on instinct. Why did you phrase that prompt that way? Why did you choose this tool over that one? What's your mental model of what the AI is good at? You don't know, exactly, until someone asks. And then you have to figure it out in real time, out loud, and the figuring out is the learning.
This is the mentor's secret: teaching makes the teacher better, not just the student. Every question a beginner asks is a question you haven't explicitly answered for yourself. "Why did the AI do that?" forces you to build a mental model of the system's behavior. "How do I know when to trust it?" forces you to articulate your own trust heuristics. "What should I try next?" forces you to examine your own decision-making process.
And it does something else: it shows you how people understand AI. Not how you understand AI — you're already past the beginner stage and you've forgotten what it looked like. But a mentee shows you the gaps in real time. They're surprised by things you take for granted. They're confused by things you think are obvious. They attempt approaches you'd never consider, and sometimes those approaches work better than yours. The beginner's perspective is data you can't get any other way.
One person heard "fix your papercuts" and went home and used an AI tool to rename his credit card PDFs — read each one, extract the bank name, rename them consistently. He'd tolerated that friction for years. The lesson transferred, but watching how it transferred — what clicked, what needed repeating, what metaphor made it land — that's information the mentor gets for free. It's a feedback loop: you teach a pattern, you watch it propagate, you learn how the pattern actually works by seeing someone else apply it.
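The renaming papercut above is a small, concrete workflow: read each statement, extract the bank name, build a consistent filename. A minimal sketch of that logic might look like the following — note that the real version used an AI tool to read the PDFs, so the text-extraction step is omitted here, and the bank list and naming scheme are assumptions for illustration.

```python
from pathlib import Path

# Assumed list of banks to recognize; the real tool inferred these from the PDFs.
KNOWN_BANKS = ["chase", "citi", "amex"]

def bank_from_text(text: str):
    """Return the first known bank name found in the statement text, else None."""
    lowered = text.lower()
    for bank in KNOWN_BANKS:
        if bank in lowered:
            return bank
    return None

def rename_statement(pdf_path: Path, text: str) -> Path:
    """Build a consistent filename like 'chase-stmt_2021.pdf' (hypothetical scheme)."""
    bank = bank_from_text(text) or "unknown"
    return pdf_path.with_name(f"{bank}-{pdf_path.name}")
```

The point isn't the script itself — it's that a years-old friction turned out to be a few lines of deterministic glue around one fuzzy step (reading the PDF), which is exactly the kind of step AI handles well.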
Alex — who had no programming background — went from zero to building websites with AI within days of a few mentoring sessions. He built a site for a local business and a character engine for his D&D campaign. But the mirror moment wasn't watching him succeed. It was hearing him describe a problem I hadn't thought of, using a framework I'd taught him, and arriving at a question I couldn't answer. A character engine became a lesson in structure. A broken script became a lesson in verification. And every time he hit a wall I hadn't anticipated, I had to figure out — out loud — whether my advice was a principle or just a habit.
That's what the mirror shows you: a pattern you taught became a door someone else walked through, and now they're in territory you haven't mapped. You discover which parts of your workflow are principles and which parts are muscle memory that doesn't transfer. You stop saying "I just do it this way" and start saying "here's the pattern, and here's when it breaks."
The heuristic for anyone in the AI space: find someone to mentor. Not because they need you — they'll figure it out eventually with or without you. Because you need the mirror. The act of explaining your intuitions is the fastest way to turn them into transferable knowledge, and transferable knowledge is what separates someone who uses AI from someone who understands it.