

Human Authority Over Augmentation Systems

The vault exists to augment its owner’s reasoning, not to replace it. This commitment has structural consequences: every mechanism that delegates work to an AI agent must preserve the human’s ability to see what happens (legibility), define what may happen (boundaries), and change what is happening (override).

Agency erosion in AI systems happens through convenience, defaults, and invisible delegation — not through force. The user edits the AI’s choices rather than making their own. The system produces outputs the user accepts without understanding. The feedback loop between human judgment and system behavior attenuates until the human is nominally in charge but structurally irrelevant.

The vault’s design resists this pattern at three levels:

Session level — Every session begins by reading state files the human can inspect. Every commit requires human approval. The AskUserQuestion tool surfaces decisions rather than resolving them silently. Session logs and learning loops make the agent’s reasoning visible after the fact.

Architecture level — Rules, skills, and processes are the conferral mechanism. They are plain-text files the human can read, edit, and version-control. The `paths:` trigger system loads constraints contextually rather than applying blanket permissions. The vault owner can change any conferral by editing a file.

Knowledge level — The garden’s form types, structural contracts, and typed relations encode the owner’s reasoning in a format that survives the agent that helped create it. If Claude Code were replaced tomorrow, the garden’s content retains its structure and meaning. The knowledge belongs to the person, not the tool.
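The contextual loading at the architecture level can be sketched as a small matcher: rules declare path patterns, and only the rules whose patterns match the files in play are applied to a session. The `Rule` structure, field names, and rule contents below are illustrative, not the vault's actual format.

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Rule:
    """A plain-text conferral: readable, editable, version-controlled."""
    name: str
    paths: list[str]   # glob patterns that trigger this rule
    constraint: str    # the instruction loaded when a pattern matches

def active_constraints(rules: list[Rule], touched_files: list[str]) -> list[str]:
    """Load only constraints whose path triggers match the session's files."""
    return [
        r.constraint
        for r in rules
        if any(fnmatch(f, pattern) for f in touched_files for pattern in r.paths)
    ]

rules = [
    Rule("garden-forms", ["garden/**/*.md"], "Preserve form types and typed relations."),
    Rule("no-secrets", ["config/*"], "Never commit credentials."),
]

# A garden edit triggers only the garden rule; the config rule stays dormant.
print(active_constraints(rules, ["garden/notes/patch.md"]))
```

The owner changes a conferral by editing a rule file; nothing is hard-coded into the agent.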

The difference between augmentation and substitution is testable: remove the human from the loop and observe what changes. If nothing changes, the system has substituted. If the system cannot proceed, the human’s authority is real.

Why Authority Cannot Be Delegated Away

Deb Roy argues in [Roy (2026) Words Without Consequence, from The Atlantic](../NODES.html#:~:text=Roy%20%282026%29%20Words%20Without%20Consequence%2C%20from%20The%20Atlantic%7CWords%20Without%20Consequence) that speech without a speaker who bears consequence for it is “words without consequence” — a machine can produce fluent text but cannot risk being wrong, cannot be embarrassed by a weak argument, cannot have its reputation shaped by what it publishes. That vulnerability is what makes authorship real. A vault whose content is entirely agent-produced has no author in this sense, regardless of how coherent the prose appears. The human’s role is not performance but responsibility: standing behind the result, having genuinely shaped it, being accountable for its claims. Augmentation tools research, draft, restructure, and critique — but they cannot bear the consequence of the output. The vault owner can.

Generation-Verification Asymmetry

The P-vs-NP intuition from computational complexity maps onto agent engineering: verifying a solution is cheaper than generating one. Humans out-evaluate agents but do not out-implement them. Generation is cheap — an agent can produce a hundred candidate restructurings of a garden section in seconds. Verification is expensive and requires domain judgment that the agent lacks: does this restructuring preserve the owner’s intent? Does it break a relationship the owner values? Does it introduce a claim the owner would not endorse?
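The asymmetry reads naturally as a cost split: generation is a cheap loop, while verification is a predicate encoding human judgment that no loop can replace. Everything below (the candidate generator, the owner's check) is a hypothetical stub for illustration.

```python
def generate_candidates(section: str, n: int = 100) -> list[str]:
    """Cheap: an agent can emit many restructurings in seconds (stubbed here)."""
    return [f"{section} (restructuring #{i})" for i in range(n)]

def owner_verifies(candidate: str) -> bool:
    """Expensive: stands in for the owner asking whether a candidate preserves
    intent, valued relationships, and endorsable claims."""
    return candidate.endswith("#7)")  # stand-in for real human judgment

candidates = generate_candidates("Garden section", n=100)  # seconds of agent time
accepted = [c for c in candidates if owner_verifies(c)]    # hours of human attention
print(len(candidates), len(accepted))  # 100 generated, 1 accepted
```

The ratio is the point: the owner's scarce resource is spent only on the verification step.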

This asymmetry is what makes human authority over augmentation systems structurally viable rather than aspirational. The owner does not need to produce every output — only to evaluate whether each output meets the standard. Three historical examples trace this pattern across engineering domains. James Watt’s centrifugal governor (1788) maintained steam engine speed through a mechanical feedback loop: the system generated continuous adjustments while a physical constraint verified acceptable range. Kubernetes applies the same logic to infrastructure: operators declare desired state, and the system generates whatever actions converge toward it — the human verifies by specifying what “correct” looks like, not by implementing each step. Agent harnesses like Claude Code’s rules, hooks, and approval gates represent the current frontier of this pattern, where the human defines boundaries and the agent generates within them.
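The Kubernetes pattern above compresses into a toy reconcile loop: the human declares desired state, the system generates whatever corrective actions converge toward it, and verification happens against the declaration rather than against each step. Names and fields here are illustrative.

```python
def reconcile(desired: dict, actual: dict) -> list[str]:
    """Generate corrective actions until `actual` matches `desired`.
    The human wrote `desired`; the system chooses the steps."""
    actions = []
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            actions.append(f"set {key}: {have!r} -> {want!r}")
            actual[key] = want  # the system converges toward the declaration
    return actions

desired = {"replicas": 3, "image": "app:v2"}  # what "correct" looks like
actual = {"replicas": 1, "image": "app:v1"}

steps = reconcile(desired, actual)
assert actual == desired  # verified against the declaration, not step by step
```

Watt's governor, the reconcile loop, and agent approval gates share one shape: the human sets the setpoint, the mechanism does the converging.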

The structural consequence for the vault: design every agent interaction so the expensive part — judgment, verification, consequence-bearing — stays with the human, while the cheap part — generation, search, restructuring — goes to the agent.
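That division of labor can be enforced by construction with an approval gate: the agent's output is inert until an explicit human decision releases it. The `approve` callback below is hypothetical and stands in for mechanisms like commit approval or AskUserQuestion; it also makes the substitution test from earlier concrete, since removing the human halts the system.

```python
from typing import Callable

def gated_apply(proposal: str,
                approve: Callable[[str], bool],
                apply: Callable[[str], None]) -> bool:
    """Nothing proceeds without the human's decision."""
    if approve(proposal):   # expensive: judgment, consequence-bearing
        apply(proposal)     # cheap: mechanical application
        return True
    return False

log: list[str] = []
applied = gated_apply("restructure section 3",
                      approve=lambda p: False,  # human withholds approval
                      apply=log.append)
assert applied is False and log == []  # the system cannot proceed
```

If the gate is ever bypassed for convenience, the design has quietly crossed from augmentation to substitution.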

Sources

Relations