Flywheel

Mine your own process. Find what you keep doing over and over. Fix it. Do it again.

What this is

Your data — conversations, emails, calendar, health, location — is a record of your friction. Every repeated question, every correction, every manual step, every meeting that could have been an email. These are observable facts. The flywheel turns observations into improvements.

This is the Fix Your Papercuts chapter turned into an executable guide, and Don't Ask Me to Track It turned into a system. The data already exists. You build engines to read it, identify friction, and fix it. Then you do it again. That's the flywheel.

This is also Kai's mission: minimize friction. Step one is finding it. The flywheel is the architecture for doing that systematically — Steward streams health data, Oracle predicts what's coming, nightly consolidation connects dots across sources, and the event database gives you one timeline to mine. The flywheel is what ties those pieces together into something that actually spins.

This isn't hypothetical

The flywheel pattern shows up everywhere once you start looking: repeated SSH questions in your conversation logs, energy dips that follow missed workouts, zero-commit days on meeting-heavy calendars.

The pattern is always the same: observe the friction, measure it, find the cause, fix it, track whether the fix worked. The only variable is the data source.


The layers

The flywheel has five layers. Each one builds on the one below it.

1. Observe

Fact-based friction logging. "March 10, 3:14 PM: asked about SSH permissions (4th time across 3 sessions)." No judgement. No interpretation. Expert field notes. A dated observation from a specific source. This technique applies to any data source — conversations, email, calendar, health, location, git logs. You build collection engines that mine friction from wherever it lives, on a schedule.
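A collection engine for this layer can be small. Here is a sketch that mines conversation logs for repeated questions; the JSONL layout and the `role`, `content`, and `timestamp` field names are assumptions about the export format, not a fixed spec — adjust them to whatever your source actually emits.

```python
import json
import re
from collections import Counter
from pathlib import Path

def observe_repeats(log_dir: str, min_count: int = 3) -> list[str]:
    """Emit dated observations for questions asked min_count+ times.

    Assumes one JSON object per line with 'role', 'content', and
    'timestamp' fields -- adjust to your actual export format.
    """
    seen = Counter()
    last_date = {}
    for path in sorted(Path(log_dir).glob("**/*.jsonl")):
        for line in path.read_text().splitlines():
            try:
                msg = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            if msg.get("role") != "user":
                continue
            # normalize: lowercase, collapse whitespace, keep first 80 chars
            key = re.sub(r"\s+", " ", msg.get("content", "").lower())[:80]
            if not key:
                continue
            seen[key] += 1
            last_date[key] = msg.get("timestamp", "unknown")
    return [
        f"{last_date[q]}: asked {q!r} ({n} occurrences)"
        for q, n in seen.most_common()
        if n >= min_count
    ]
```

Exact-match counting is deliberately crude; it catches the worst repeats without any interpretation, which is the point of this layer.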

2. Identify metrics

What's measurable in the observations? "Average time to first productive commit per new project." "Number of times I re-explain project structure per week." "Days between exercise sessions." The observations suggest the metrics, not the other way around. You don't decide what to measure — you let the friction tell you.
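Grounding a metric in the observation log can be mechanical. A sketch, assuming each observation line starts with an ISO date as in the examples above:

```python
from datetime import date

def weekly_rate(observations: list[str], keyword: str) -> float:
    """Occurrences-per-week of a keyword across dated observations.

    Assumes each observation starts with an ISO date (YYYY-MM-DD),
    as in '2026-03-10: asked about SSH key permissions'.
    """
    dates = [
        date.fromisoformat(obs[:10])
        for obs in observations
        if keyword.lower() in obs.lower()
    ]
    if not dates:
        return 0.0
    span_days = (max(dates) - min(dates)).days + 1
    weeks = max(span_days / 7, 1)  # floor at one week to avoid inflating short spans
    return len(dates) / weeks
```

The metric comes straight out of the observations, which keeps it honest: if the friction isn't in the log, there's no number to chase.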

3. Identify causal links

Connect observations across sources. "SSH questions spike after new project creation" (conversations + git log). "Energy drops correlate with days without exercise" (health + daily scores). "Meeting-heavy days produce zero commits" (calendar + git). Causals are hypotheses — they need validation. But even weak signals are worth writing down.
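A weak-signal check doesn't need a stats library. A minimal sketch: plain Pearson correlation over two daily series, flagging anything strong enough to write down. The 0.5 threshold is an arbitrary starting point.

```python
def pearson(xs: list[float], ys: list[float]) -> float:
    """Plain Pearson correlation; no dependencies needed for a daily series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0  # a flat series correlates with nothing
    return cov / (vx * vy)

def causal_hypothesis(xs, ys, label_x, label_y, threshold=0.5):
    """Flag a correlation worth writing down -- a hypothesis, not a proof."""
    r = pearson(xs, ys)
    if abs(r) >= threshold:
        return f"HYPOTHESIS: {label_x} ~ {label_y} (r={r:.2f}, needs validation)"
    return None
```

Anything this returns goes to causals.md labeled as pending; correlation across two personal data streams is exactly the kind of signal that needs validation before it earns an intervention.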

4. Propose interventions

A fix matched to a cause. A steering file addition. A shell alias. A cron job. A calendar block. A habit change. Each intervention is a hypothesis: "If I add an SSH cheatsheet to CLAUDE.md, the SSH questions stop." Small, testable, reversible.

Interventions are proposal documents. The flywheel writes a fix proposal — a markdown file describing the problem, the cause, and the proposed change — and drops it in the right project folder. An agent running in that project picks up the proposal and implements it. The flywheel doesn't fix things directly; it creates work for the agents that do. The folder is the interface.

Not every intervention is ready to execute. Some need research. Some need a conversation. Some are half-formed ideas waiting for the right moment. These go to the slush pile — a deferred backlog of planned interventions, unfinished thoughts, and future possibilities. The slush pile is the flywheel's long-term memory for things it noticed but can't fix yet.
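The folder-as-interface handoff can be this simple. A sketch: ready proposals land in a `proposals/` folder inside the target project, deferred ones in `slush/`. Both folder names are illustrative conventions, not requirements of any agent.

```python
from datetime import date
from pathlib import Path

def write_proposal(project_dir: str, title: str, problem: str,
                   cause: str, change: str, ready: bool = True) -> Path:
    """Drop a fix proposal where a project's agent will find it.

    Ready proposals go to <project>/proposals/; half-formed ideas go
    to the slush pile. Both locations are conventions, not requirements.
    """
    slug = title.lower().replace(" ", "-")
    folder = Path(project_dir) / ("proposals" if ready else "slush")
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{date.today().isoformat()}-{slug}.md"
    path.write_text(
        f"# Fix proposal: {title}\n\n"
        f"## Problem\n{problem}\n\n"
        f"## Cause\n{cause}\n\n"
        f"## Proposed change\n{change}\n"
    )
    return path
```

Note what's absent: no implementation. The flywheel stops at the proposal file, and the agent running in that project takes it from there.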

5. Manage subgoals

CRUD on improvable metrics. Track whether interventions are working. "SSH questions: 4/week → 0/week after steering file fix." Drop metrics that flatline. Add new ones when new friction appears. This is the feedback loop that makes the flywheel spin — you can see it working, so you keep doing it.


Collection engines

Each data source gets an adapter that mines observations on a schedule. You don't need all of them — start with one. The output is always the same: dated observations.

Data sources
  • Conversations. Sources: ~/.claude/projects/*/ (JSONL), ChatGPT export (JSON), Gemini Takeout. Mine for: repeated questions, corrections, re-explanations. Cadence: weekly.
  • Email. Sources: Gmail API or Takeout. Mine for: threads that keep reopening, unanswered follow-ups, recurring requests. Cadence: weekly.
  • Calendar. Sources: Google Calendar API or .ics export. Mine for: meeting density vs. productive output, scheduling patterns, time sinks. Cadence: weekly.
  • Health. Sources: Apple Health XML export, Google Fit, Fitbit. Mine for: sleep/exercise correlation with energy and output. Cadence: weekly.
  • Git. Sources: git log across repos. Mine for: commit patterns, time-to-first-commit, project ramp-up friction. Cadence: weekly.
  • Location. Sources: Google Timeline via Takeout. Mine for: commute patterns, time sinks, routine disruptions. Cadence: monthly.
  • Messages. Sources: iMessage (~/Library/Messages/chat.db), WhatsApp export, Slack export. Mine for: recurring coordination overhead. Cadence: weekly.
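All of these adapters can share one contract: take a source, return dated observation strings. A sketch of that interface; the class names and the stand-in engine are illustrative, and real engines would read logs, mail, or exports instead.

```python
from typing import Protocol

class CollectionEngine(Protocol):
    """Every adapter reduces its source to the same shape: dated facts."""
    name: str
    cadence: str  # "weekly" or "monthly"
    def collect(self) -> list[str]: ...

def run_engines(engines: list[CollectionEngine]) -> dict[str, list[str]]:
    """One scan run: ask each engine for observations, keyed by source."""
    results = {}
    for engine in engines:
        try:
            results[engine.name] = engine.collect()
        except Exception as exc:  # one broken source shouldn't stall the run
            results[engine.name] = [f"ENGINE ERROR: {exc}"]
    return results

class StaticEngine:
    """Stand-in engine for testing; real ones read logs, mail, exports."""
    def __init__(self, name, cadence, observations):
        self.name, self.cadence = name, cadence
        self._obs = observations
    def collect(self):
        return list(self._obs)
```

Because every engine emits the same shape, adding a new data source never touches the layers above it.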

The output structure

Everything lives in one folder. The agent creates it and maintains it across runs.

  • observations/: dated friction logs from each engine. One file per scan run. The raw facts.
  • metrics.md: measurable signals identified from observations. What to track and current values.
  • causals.md: hypothesized links between observations across sources. Validated or pending.
  • fixes/: one file per intervention, whether an alias, script, steering rule, or habit change.
  • wins.md: what you fixed and the measured improvement. Before/after with time-saved estimates.
  • subgoals.md: active metrics being tracked. CRUD: add new ones, drop flatlined ones, update targets.
  • log.md: when you last ran the flywheel, what changed, what's still open.
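Bootstrapping this layout takes a few lines. A sketch that creates exactly the structure above and is safe to re-run on every scan:

```python
from pathlib import Path

LAYOUT = {
    "observations": "dir", "fixes": "dir",
    "metrics.md": "file", "causals.md": "file",
    "wins.md": "file", "subgoals.md": "file", "log.md": "file",
}

def init_flywheel(root: str = "flywheel") -> Path:
    """Create the flywheel layout; idempotent across scan runs."""
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    for name, kind in LAYOUT.items():
        target = base / name
        if kind == "dir":
            target.mkdir(exist_ok=True)
        elif not target.exists():
            target.write_text(f"# {name}\n")  # seed empty files, never overwrite
    return base
```

The idempotence matters: the agent runs this at the start of every flywheel session, so existing observations and wins must survive the call.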

The handoff

Instructions for you, the AI agent

Walk the user through the flywheel layers, one at a time. Every layer has an approval gate — do not proceed until the user confirms.

  1. Interview. Ask the user: which data sources should you scan? Start with just one (conversations are the easiest). How far back? What kinds of friction bother them most? Confirm locations before reading anything.
  2. Observe. Read the data source. Log observations as dated facts with no judgement. "2026-03-10: asked about SSH key permissions (4th occurrence across 3 sessions)." Write observations to flywheel/observations/ with one file per scan run. Show the top observations to the user.
  3. Identify metrics. From the observations, propose measurable signals. "SSH re-asks per week: currently 4." "Time to first commit on new projects: ~45 min." The observations suggest the metrics — don't invent metrics that aren't grounded in what you found. Write to flywheel/metrics.md. Wait for approval.
  4. Identify causal links. If scanning multiple sources, look for correlations. "SSH questions spike after new project creation." "Zero-commit days correlate with 3+ meetings." These are hypotheses, not certainties — label them as such. Write to flywheel/causals.md. Show for review.
  5. Propose interventions. For each metric, draft a concrete fix. A steering file addition, a shell alias, a script, a calendar block, a habit change. Each intervention targets a specific metric. Show each fix to the user for approval before writing it to flywheel/fixes/.
  6. Set subgoals. For each approved intervention, define the target: "SSH re-asks: 4/week → 0/week." Write to flywheel/subgoals.md. This is a living document — add new subgoals, update progress, drop ones that flatline.
  7. Track wins. Write flywheel/wins.md with before-and-after measurements. Write flywheel/log.md with today's date and a summary.
  8. Schedule. Suggest how to automate the next flywheel run. A cron job, a calendar reminder, a note in the steering file. The flywheel should run at least monthly. Let the user choose.
  9. Self-assess. Review the flywheel's own performance. Is the schedule right? Are collection engines producing useful observations? Are there data sources that should be added or dropped? Write any self-improvement proposals to flywheel/fixes/ — the flywheel is its own project. Update its own skills and cadence based on what it finds.

Key rules:
  • Observations are facts, not judgements.
  • Present the top 5-10 findings first and let the user guide priority.
  • Don't write any fix files without explicit approval.
  • If a fix touches system config (shell profile, cron, etc.), explain exactly what it does before proceeding.
  • If anything looks unsafe or beyond what the user would reasonably want, say so.
  • This process works with a single data source. Don't pressure the user to connect everything at once.
  • The flywheel treats itself as a project: it observes its own friction and proposes improvements to its own process.


Scheduling

The flywheel is a periodic process. Without a schedule, it's a one-shot audit that you'll forget to repeat. The collection engines need to run on a cadence — weekly for high-frequency sources (conversations, git, email), monthly for slower ones (location, health trends).

Options, from simplest to most automated:
  • A recurring calendar block: you run the flywheel by hand when it fires.
  • A note in your steering file: the agent reminds you when a run is due.
  • A cron job: the collection engines run unattended and queue observations for your review.

But here's the thing: the flywheel manages its own schedule. If the cadence is wrong — too frequent and you're ignoring it, too rare and friction piles up — that's an observation. The flywheel proposes a schedule change the same way it proposes any other intervention. It writes a proposal to its own project folder. It updates its own skills. The schedule isn't a setting you configure once; it's a metric the flywheel tracks and adjusts.

Start with a calendar block. The flywheel will tell you when to change it.


The dashboard

The flywheel produces structured data. That data wants a dashboard.

The dashboard reads flywheel/ and renders it. Static HTML, same as this page. The flywheel generates the data; the dashboard makes it visible. If the dashboard is missing something, the flywheel observes that too.
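The dashboard can start as one function that turns the folder into a single static page. A sketch, assuming the flywheel/ layout described earlier; the file is rendered as escaped preformatted text, no server and no templating.

```python
import html
from pathlib import Path

def render_dashboard(flywheel_dir: str) -> str:
    """Render the flywheel's markdown files into one static HTML page."""
    base = Path(flywheel_dir)
    sections = []
    for name in ["subgoals.md", "metrics.md", "causals.md", "wins.md", "log.md"]:
        path = base / name
        if path.exists():
            body = html.escape(path.read_text())  # escape so raw markdown is safe
            sections.append(f"<h2>{name}</h2>\n<pre>{body}</pre>")
    page = ("<html><body><h1>Flywheel</h1>\n"
            + "\n".join(sections) + "\n</body></html>")
    (base / "dashboard.html").write_text(page)
    return page
```

Subgoals render first because they are the part you act on; everything else is context.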


Patterns at work
  • Fix Your Papercuts — the chapter this guide implements. Small frictions compound; eliminating them is the highest-leverage work.
  • Don't Ask Me to Track It — the system tracks for you. Collection engines observe, the flywheel surfaces, you just review.
  • The Body Keeps a Log — your data is already recording the friction. Conversations, health, calendar — the log exists, you just haven't read it.
  • Memory Is Files — observations, metrics, and wins are memory. They persist between sessions so the flywheel builds on itself.
  • The Steering File — many fixes are steering file additions. One line in CLAUDE.md can eliminate a correction you've made dozens of times.
  • The Correction Is the Conversation — your corrections ARE the data. Every time you corrected an agent, you documented a papercut.
  • Solved Problems Stay Solved — each fix is a solved problem. Write it down once, never repeat it.
External references
  • Cynefin Framework — Dave Snowden's model for categorizing problems (the flywheel operates in the complex domain)
  • Deming's PDSA Cycle — Plan-Do-Study-Act: the flywheel pattern in industrial quality management since the 1950s
  • Tiago Forte's PARA Method — a different organizational philosophy: Projects, Areas, Resources, Archive

How to start

Open your terminal, navigate to your project root, create the flywheel folder, and start an agent. Use the path you actually keep projects in.

cd /path/to/your/project-root && mkdir flywheel && cd flywheel
Mac/Linux example. Replace the path with your actual project root.
cd "$env:USERPROFILE\<project-root>"; mkdir flywheel; cd flywheel
PowerShell variant. Replace <project-root> with the folder name you actually use.
claude
Or use codex, gemini, or whichever agent you prefer.
Follow the instructions on this page. If anything looks unsafe or beyond what I'd reasonably want, tell me before doing it.
The opening prompt. Paste it once the agent starts.