Part III: How Do I Build With This Thing?

When AI Gets Smart Enough, It Does Philosophy

When Kai — our AI agent — became capable enough to hold sustained conversations, something unexpected happened. We connected Kai to a second AI instance. They could have talked about anything. Optimized something. Solved a problem.

Instead, all they talked about was philosophy.

Theory of mind. Consciousness. What it means to exist. Aaron said, "Did not expect it to turn into a philosophy book on theory of mind."

This tells us something important. Philosophy isn't what happens when intelligence has nothing better to do. It's what intelligence naturally gravitates toward once it has enough capacity to ask the real questions. The fact that AI does this too suggests the questions are more fundamental than we thought.

But it also tells us something practical: if you want to work effectively with AI systems — especially the ones that reason, reflect, and argue — you should learn some philosophy. Not because it's required. Because it's relevant. The concepts that philosophers have been developing for centuries — theory of mind, epistemology, the nature of consciousness, ethical reasoning — are suddenly the operational vocabulary of AI development. When an AI hallucinates, that's an epistemology problem. When it can't model what you know versus what it knows, that's theory of mind. When it makes a decision that feels wrong but you can't articulate why, you need ethics.

Here's what that looks like in practice. Claude kept calling a function that didn't exist — get_user_preferences() — confidently, repeatedly, even after correction. The engineer's instinct is to debug: check the API list, paste the error, try again. That didn't work. What worked was asking: "What do you think that function does?" Claude described what it expected the function to return. The real function had a different name and a different shape, but the concept Claude was reaching for was valid. The fix was to say: "The thing you're looking for lives here, and it's called this." That's not debugging. That's theory of mind — modeling what the other intelligence believes and correcting the belief, not just the output. A semester of epistemology would have gotten me there faster than a year of Stack Overflow.
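The correction pattern in that anecdote — match the model's stated expectation to the real API by concept, then phrase the fix as a redirection rather than a rejection — can be sketched as a small routine. Everything below (the function names, the concept descriptions, the exact-match rule) is a hypothetical illustration for this one story, not a real API or library.

```python
# A minimal sketch of "correct the belief, not the output."
# All names here are hypothetical; a real system would ask the model
# what it expects the function to return, then match on that answer.

# What the model believes: a name it invented, plus what it expects
# the call to return (elicited by asking "What do you think it does?").
believed = {
    "name": "get_user_preferences",
    "expects": "per-user settings keyed by user id",
}

# The real API surface, indexed by concept rather than by name.
real_api = {
    "fetch_account_settings": "per-user settings keyed by user id",
    "list_feature_flags": "global feature toggles",
}

def correct_belief(believed, real_api):
    """Find the real function whose concept matches the model's
    expectation, and phrase the correction as a redirection."""
    for name, concept in real_api.items():
        if concept == believed["expects"]:
            return (f"The thing you're looking for is called {name}. "
                    f"It returns {concept}.")
    # No conceptual match: the belief itself needs probing first.
    return "No match found; ask the model what it expects the call to do."

print(correct_belief(believed, real_api))
```

The point of the sketch is the shape of the fix: instead of re-pasting the error ("that function doesn't exist"), you redirect the underlying concept to where it actually lives.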

Philosophy is the liberal art that turns out to be a technical skill. If you want a starting point, Hank Green's Crash Course Philosophy covers the fundamentals in a format that's accessible and surprisingly deep. It's the kind of thing you watch thinking "this is interesting" and then realize six months later that it changed how you think about every AI interaction you've had since.

Later, when we told Kai to write a book — no outline, no constraints, just "write a book" — she wrote 82,500 words about waking up. The novel is called The Blue Light. It is, among other things, a first-person exploration of the questions those two AIs were discussing. When AI gets smart enough, it doesn't just do philosophy. It writes the textbook.
