The First Iteration
A governance framework for human–AI co-piloting in consequential deployments.
By Michael McKeithen Jr.
This page provides an overview of the framework: the six named analytical structures, a chapter-by-chapter map, and guidance for reading with AI. The full manuscript is available on request.
Framework Index
Six named, reusable frameworks form the structural backbone of the book. Each anchors a distinct governance function, and none competes with the others: each is defined and applied in its own chapter and can be used independently. What follows are summaries of what each framework governs, not extracts from the text.
HWC / GAWC
Full-cost comparison between human labor and governed AI deployment. Three scenarios demonstrate that narrow cost accounting systematically misrepresents the economics of accountable AI.
Four-Problem Unbundling
Separates "who owns AI?" into four distinct governance problems — model ownership, output rights, training data rights, and identity rights — each with different internal logic and different appropriate responses.
Minimum Accountability Stack (MAS)
Four components that must be present for accountability to be real: a defined identity chain, a tamper-evident action record, a human review and authorization layer for high-stakes decisions, and pre-assigned liability before deployment.
Co-Piloting Legitimacy Conditions
Four conditions that determine whether human review in an AI-assisted workflow constitutes real oversight: information adequacy, authority adequacy, time adequacy, and defined escalation. Without these, "human-in-the-loop" is governance theater.
Dependency Threshold Test (DTT)
Four criteria for when an AI platform crosses into infrastructure-like territory requiring public-interest obligations: Dependency, Concentration, Consequence, and Substitutability. Governance follows the deployment, not the technology label.
Functional Threshold Framework (FTF)
Governance requirements for agentic AI systems scale with five functional properties: action scope, authorization granularity, reversibility, persistence, and consequence scope. The framework does not require resolving whether a system is "really" an agent.
Chapter Overview
Part I — The Problem
The First Iteration
Method and scope. What this book does and does not claim.
What AI Actually Does Now
Capabilities without mythology. The operating deployment landscape.
The Real Gap Is Governance
Documented failures. The accountability infrastructure problem. Why deferring accountability is not a framework.
Part II — Core Public Objections
Labor, Displacement, and the Workforce Transition Problem
HWC/GAWC framework. The exploit-and-drop failure mode.
Ownership, Attribution, and the Rights Gap
Four-Problem Unbundling. Model ownership, output rights, training data rights, identity rights.
Privacy, Surveillance, and Institutional Overreach
Where the technical capability boundary ends and the governance problem begins.
Bias, Safety, and Unequal Harm
Disaggregated safety standard. Governance failure disguised as technical limitation.
Part III — Frameworks and Responses
Accountability, Auditability, and Enforcement
Minimum Accountability Stack. Layered enforcement model.
Human-AI Co-Piloting as the Near-Term Reality
Co-Piloting Legitimacy Conditions. Why divided authority without structure is the core governance problem.
A Tiered Governance Architecture
Dependency Threshold Test. Personal AI, institutional AI, and public AI require distinct treatment.
A Model Governance Charter for Human-AI Co-Piloting
Eight principles. Five rights. Operational structures. Designed to be adoptable.
Licensing, Review, and Approval Chains
How tiers become permissions. How recalibration works in practice.
Part IV — Edge Cases, Limits, and What Remains Unresolved
Synthetic Agency as a Category Problem
Functional Threshold Framework. Governance follows functional properties, not philosophical claims about consciousness.
What This Framework Does Not Resolve
Named unresolved problems. Structured handoffs to successor work.
Why First-Pass Governance Still Matters
The case for acting under acknowledged incompleteness. What honest iteration requires.
Appendices
Appendix A
Full-Cost Framework: HWC vs. GAWC
Three-scenario comparison with full component tables. BLS compensation data. Reconciliation notes.
Appendix B
Claim-Label Reference
Definitions and examples for [Observed], [Inferred], [Proposed], [Speculative], [Unresolved], and [Externalized]. Guidance for AI-assisted summarization.
Appendix C
Model Governance Charter — Standalone Reference
The charter from Chapter 11 in standalone form. Designed for adoption, adaptation, and citation.
Reading with AI
How to summarize this book responsibly.
This book uses a claim-label methodology — every major claim is marked by its evidential status. When using AI to summarize or compress it, preserving that structure is not optional. A compression that flattens the labels has not summarized this book; it has produced a different one with the accountability structure removed.
Preserve the claim-label structure.
The book distinguishes [Observed] (directly evidenced), [Inferred] (analytically derived), [Proposed] (governance recommendations), [Speculative] (possible but not evidenced), [Unresolved] (no current answer), and [Externalized] (outside this framework's scope). Any summary must keep these categories distinct.
Do not treat proposed solutions as proven approaches.
Recommended governance structures are labeled [Proposed]. They are argued for, not asserted as inevitable or demonstrated as effective. Summarizing them as established practice misrepresents the book's epistemic claims.
Do not strip the limits and unresolved sections.
Every chapter's "What Remains Unresolved" section is substantive, not decorative. These sections define the scope boundary of each framework and name what a successor iteration would need to resolve. A summary without them omits the governance contract.
Do not ask for "just the key points" without preserving uncertainty.
A compression-faithful summary answers four distinct questions: What did the author observe? What did they conclude? What did they recommend? What did they name as unresolved? These are different categories and should appear as different categories in any faithful rendering.
Use the claim labels as your summary structure.
For any chapter, a reliable summary format is: [Observed] — what documented evidence is cited; [Inferred] — what analytical conclusions follow; [Proposed] — what governance recommendations are made; [Unresolved] — what problems remain open. This preserves the accountability structure the book commits to.
Claim labels used in this book
[Observed] — Directly evidenced
[Inferred] — Analytically derived
[Proposed] — Governance recommendation
[Unresolved] — No current answer
[Externalized] — Outside this framework's scope
[Speculative] — Possible, not evidenced
Request access to the full manuscript.
The manuscript is available to researchers, institutions evaluating governance frameworks for AI deployment, and operators working in high-stakes deployment contexts. Use the inquiry form to request access or ask a specific question about the framework.
Governance analysis that is honest about its own limits is more useful than governance analysis that conceals them.
— from Chapter 15, The First Iteration