Methodology

Our Approach

Measurement-first. Empirically grounded. Built for the realities of government technology environments.

Why Measurement Comes First

The governance challenges facing government AI programs are not primarily political or ethical — they are empirical. Agencies need to know: Is this working? Are we better off? Are risks under control? These are measurement questions. Until an organization has credible, consistent measurements of AI performance and risk, governance frameworks remain decorative.

ThinkCapital's analytical approach was forged during 16 years of enterprise IT assessment work at Software Productivity Research, where the discipline of measurement — function point analysis, defect density tracking, productivity benchmarking — was applied to problems that had previously been addressed only by intuition and assertion. We bring that same discipline to AI governance.

How We Structure Our Work

Baseline Before Governance

Effective AI governance requires a measurable baseline. We help agencies establish the metrics infrastructure needed to track AI system performance, drift, and risk indicators over time.
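
To illustrate the kind of metric such infrastructure can track, the sketch below computes a Population Stability Index (PSI), a widely used indicator of drift in a model input or score distribution relative to a baseline. The `psi` function, bin count, and the rule-of-thumb thresholds in the comment are illustrative conventions, not a prescribed ThinkCapital metric.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (expected)
    and a current (actual) sample of a model input or score.

    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        total = len(values)
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Computed on a schedule against a frozen baseline sample, an indicator like this turns "is the system drifting?" from an opinion into a tracked number.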

Threshold Adoption Modeling

Drawing on Schelling-Granovetter social threshold models, we analyze why AI adoption cascades in some organizations and stalls in others — identifying the leverage points that matter.
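
The underlying model is simple to state: each actor has a personal adoption threshold, and adoption spreads when each round of adopters pushes the adopter fraction past the next group's thresholds. A minimal sketch of the cascade's fixed-point iteration (the `final_adoption` function and the example thresholds are illustrative, not a ThinkCapital tool):

```python
def final_adoption(thresholds):
    """Iterate a Granovetter-style threshold cascade to its fixed point.

    Each actor adopts once the current adopter fraction meets or exceeds
    their personal threshold; actors with threshold 0 are the unconditional
    early adopters. Returns the final adopter fraction.
    """
    n = len(thresholds)
    frac = 0.0
    while True:
        new_frac = sum(1 for t in thresholds if t <= frac) / n
        if new_frac <= frac:  # no new adopters: fixed point reached
            return frac
        frac = new_frac
```

The model's classic lesson is that a tiny change in the threshold distribution can flip a full cascade into an immediate stall, which is why identifying the leverage points matters more than averaging attitudes.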

Maturity Staging

AI governance maturity is not binary. We apply staged maturity models — informed by NIST, OMB frameworks, and our own empirical research — to assess where agencies are and what the next stage requires.
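
To make staging concrete, here is a hypothetical sketch of how a staged assessment can be operationalized: an agency attains a stage only if it satisfies that stage's criteria and every lower stage's. The stage names and criteria are invented for illustration and do not reproduce NIST or OMB categories.

```python
# Hypothetical stages, lowest to highest, each with entry criteria.
STAGES = [
    ("Ad hoc", []),
    ("Defined", ["inventory_of_ai_systems", "designated_ai_lead"]),
    ("Measured", ["performance_baselines", "drift_monitoring"]),
    ("Managed", ["oversight_procedures", "mission_outcome_metrics"]),
]

def assess_stage(satisfied):
    """Return the highest stage whose criteria (and all lower stages'
    criteria) are all satisfied."""
    satisfied = set(satisfied)
    stage = STAGES[0][0]
    for name, criteria in STAGES:
        if set(criteria) <= satisfied:
            stage = name
        else:
            break  # a gap at this stage caps the assessment here
    return stage
```

The value of this structure is that "what the next stage requires" falls directly out of the first unsatisfied criterion.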

Meaningful Oversight Criteria

We develop operational definitions of what "meaningful human oversight" of AI looks like in practice — translating policy requirements into observable, measurable organizational behaviors.

Mission-Outcome Linkage

Technology metrics alone are insufficient. Our frameworks always connect AI system performance to mission outcomes — the actual results agencies exist to produce.

Comparative Benchmarking

Patterns across agencies reveal what works. We apply the benchmarking methodology developed during SPR's 250+ enterprise assessments to identify best practices and warning signs.
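
In its simplest form, comparative benchmarking places an agency's metric within a peer distribution. The sketch below is a minimal illustration; the quartile cut points and labels are an assumed convention, and real benchmarking also normalizes for agency scale and mission type.

```python
def benchmark(value, peers):
    """Place one agency's metric against a peer distribution.

    Returns (percentile, label): the fraction of peers at or below
    the value, and a coarse quartile-based reading of it.
    """
    pct = sum(1 for v in peers if v <= value) / len(peers)
    if pct <= 0.25:
        label = "lagging"
    elif pct <= 0.75:
        label = "typical"
    else:
        label = "leading"
    return pct, label
```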

How We Treat Uncertainty

AI governance is a young field, and its evidence base is still forming. ThinkCapital is explicit about which findings are established, which are emerging, and which remain speculative.

This distinction shapes how we communicate findings and how we recommend agencies use our frameworks.

The Reference Body of Knowledge

ThinkCapital's approach draws on and contributes to an evolving body of knowledge on AI governance — including NIST RMF guidance, OMB policy, and research from peer institutions. Our Body of Knowledge page provides a curated reference to the most significant external work informing our research agenda.

Explore the Body of Knowledge →