I once had a teacher who let us use a "cheating sheet" on tests. One index card, both sides, write whatever you want. The constraint was the size — it had to be small.
The trick was: making the sheet was the studying. You had to decide what mattered. You had to compress a semester into a few square inches. By the time you sat down for the test, you didn't need the card. The act of writing it had already done the work.
I never stopped making cheating sheets. I just stopped calling them that.
I'm confused about something. I write it down. The writing is the understanding. The artifact — the file, the doc, the note — isn't the point. It's the residue of having thought it through.
This is the same move every time, across every domain:
I didn't understand my own symptoms. So I tracked them — inflammation, medication, sleep, exercise, cheese — in a Google Doc. "What is this thing that is triggering these episodes of pain?" The doc didn't cure anything. Writing it down made the patterns visible. Cheese was the answer.
I didn't know tile, electrical, plumbing, or countertops. So I wrote texts to contractors full of questions, kept a running doc of every quote and measurement, and had ChatGPT conversations about floor leveling. I documented my way through a remodel.
I didn't know bridge. I told an AI agent what confused me. Each confusion became a feature. "What's a trick?" became the trick-tracking UI. "Why 1NT?" became the bid explanation panel. I documented my way into knowing bridge — and ended up with a working game.
I didn't know Go well enough. So I wrote a cheat sheet — types, data structures, syntax. I documented my way into a language.
I didn't know how to explain what I'd learned about AI. I wrote 48 short observations. Each one was me working out what I thought by writing it down. The book is the cheating sheet for two years of building.
The pattern is ancient. The cheating sheet predates AI. It predates computers. It's an index card in your pocket.
What changed is that now the documentation talks back.
The personal journal: you write "I'm feeling confused today," and the entry sits there. Maybe you notice the pattern months later. Maybe you don't.
The slush pile: you say "make a note about translation being free," and within minutes it's expanded, connected to three other ideas, and sitting on a page where the next agent can read it.
The cheating sheet: you write gh repo create reponame --private so you don't have to look it up again. Now put that file where an agent can read it, and "how do I do that rclone thing again?" stops being a question you ask the sheet. It's a question the sheet answers for you.
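A concrete sketch of that move, with a hypothetical folder and file name (the `gh` and `rclone` lines are just the examples from the text):

```shell
# A minimal sketch: turn the index card into a file an agent can read.
# The notes/ directory and cheatsheet.md name are hypothetical.
mkdir -p notes
cat > notes/cheatsheet.md <<'EOF'
# Commands I always forget
gh repo create reponame --private    # new private GitHub repo
rclone copy ~/photos remote:backup   # that rclone thing, again
EOF
# Point an agent at notes/ and the sheet starts answering for you.
```

Nothing fancier than an index card in a directory; the only new ingredient is that something with hands can now read it.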
Scale that up past a single file and the whole corpus starts participating: guides, transcripts, dev logs, and slush notes all improving the next run. That's when the flywheel starts turning.
ChatGPT in a browser is a head in a box — smart, but no hands. It can think with you, but you do all the work. You copy-paste its output and act on it yourself.
An agent with file access is an octopus — it can read, write, search, build, and fix. You say "fix this" and it fixes it. The intelligence has reach.
The gap between those two is smaller than you'd think:
• A folder
• Some files
• An agent that can read and write them
That's it. No database, no cloud platform, no deployment pipeline. Just a directory and an LLM with permission to act. From that, you get the chatbot, the flywheel, the daily briefing, the wall of data — all of it is just folders and files with an agent that has permission to touch them.
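To show how little machinery that really is, here's a toy version in Python. The folder name is hypothetical, and the keyword match at the end is a stand-in for a real model call, not an actual LLM:

```python
from pathlib import Path

# The whole "platform": one directory of plain files. The name is hypothetical.
FOLDER = Path("notes")
FOLDER.mkdir(exist_ok=True)
(FOLDER / "cheatsheet.md").write_text("gh repo create reponame --private\n")

def read_corpus() -> str:
    """Everything the agent knows: the concatenated files in the folder."""
    return "\n".join(
        f"## {p.name}\n{p.read_text()}" for p in sorted(FOLDER.glob("*.md"))
    )

def answer(question: str) -> str:
    """Stand-in for the agent: a real one would hand read_corpus() to a model."""
    words = question.lower().split()
    return "\n".join(
        line for line in read_corpus().splitlines()
        if any(w in line.lower() for w in words)
    )

print(answer("repo"))
```

Swap the keyword filter for an LLM with permission to write back into the folder and you have the octopus: same directory, same files, intelligence with reach.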
Every guide on this site is a cheating sheet for a specific kind of confusion. The board game guide is for "I want to build a game but I don't know how." The chatbot guide is for "I have messy knowledge and I want to make it useful." The wall of data is for "my life is scattered across 40 services and I can't find anything."
You read them or you paste them into an agent. Either way, they work the same way the index card worked: the constraint is the size. The act of compressing the problem into a guide is what makes the guide work. And now the guide can act on what it knows.
This is why the site can function as a curriculum. The mentoring move is not "learn more about AI." It's "cross from head in a box to octopus in a box" — from reading answers to making things with an agent that has hands.
I'm a simple soul. I document my way out of confusion. Everything on this site is a version of that.