The Art of the Loose Prompt
Most advice says "be specific." That's good advice until it turns into fake precision.
A lot of my best prompts look loose because they're honest about confidence. January 2, 2026: "I'm creating a job manifest for Kai. I want a Bayesian system. That uses joint probabilities. I want it to focus on the data that supports its affordances." That's not polished prompt engineering. It's a directional brief with strong intent and soft edges.
It worked because it carried the right signal: goal, frame, and constraints. The model could fill the formal gaps.
This is the useful middle ground between two failure modes. If you're overly precise and wrong, you force the system down the wrong path. If you're vague, you give it no path at all. "Qd90 or something"-style language can be better than both when it means "I'm close, verify and correct me."
There are two different moves that sound similar:
Calibrated confidence means: "I'm giving you a likely value; validate it." Delegation means: "I don't care about this choice; you choose."
If you mix those up, errors get expensive. So the skill is not "be loose." The skill is to mark what you know, what you suspect, and what you're intentionally handing off.
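One way to keep those three buckets from blurring is to make the markers explicit in the prompt itself. The helper below is a minimal sketch of that habit, not anything from the original brief; the function name and section labels are invented for illustration.

```python
def loose_prompt(goal, known=(), suspected=(), delegated=()):
    """Assemble a prompt that labels each claim by confidence level.

    known     -- facts the model should treat as fixed constraints
    suspected -- likely values the model should verify and correct
    delegated -- choices intentionally handed off to the model
    """
    lines = [goal]
    if known:
        lines.append("Known (treat as fixed): " + "; ".join(known))
    if suspected:
        lines.append("Suspected (verify, correct me): " + "; ".join(suspected))
    if delegated:
        lines.append("Your call (I don't care, you choose): " + "; ".join(delegated))
    return "\n".join(lines)


prompt = loose_prompt(
    goal="Create a job manifest for the scheduler.",
    known=["output must be YAML"],
    suspected=["the queue name is probably 'batch-default'"],
    delegated=["retry policy"],
)
print(prompt)
```

The point isn't the template; it's that each line tells the model which move you're making, so a wrong suspected value invites correction instead of silent compliance.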
This generalizes beyond technical work. When my cousin Alex described what he wanted a website to do, he didn't use developer jargon. He said what he meant in plain language with rough edges. The AI filled the gaps because Alex's intent was clear even when his terminology wasn't. His loose prompts worked for the same reason mine did: strong signal on what matters, honest uncertainty on the rest.
Loose prompts are not lazy prompts. They're compressed prompts with explicit uncertainty.