About These Publications
What We Publish and Why
ThinkCapital publications span three formats. The Government AI in Practice newsletter delivers research analysis and field observations to practitioners on a regular schedule. Short-form research articles address specific governance questions with enough depth to be useful without requiring a full working paper. The GIAG Research Series working papers and technical methods documents are the most substantive output — intended for researchers, senior practitioners, and policy audiences who need the underlying argument and evidence, not just the conclusions.
All publications are freely available. Working papers and technical methods documents may be cited with attribution for non-commercial research and professional purposes.
Subscribe to Government AI in Practice
The newsletter is published on Substack. New issues go to subscribers first, with archived issues available here. If the research questions GIAG is working on are relevant to your work, the newsletter is the fastest way to stay current.
Research Newsletter
2 Issues
Research Articles
2 Articles
Short-form analytical pieces on specific government IT and AI governance questions, distributed through professional networks. Each develops a focused argument grounded in the same measurement discipline as the longer GIAG working papers.
Your AI Governance Framework Won’t Save You. Your Contract Might.
March 23, 2026
Treats the Pentagon–Anthropic–OpenAI sequence of late February and early March 2026 as a live case study in AI governance architecture. The dispute was not resolved by NIST RMF compliance, OMB memoranda, or any risk management documentation. It was resolved by contract language.
Examines the supply chain risk designation as a governance architecture story, and draws the implication for every CIO whose AI contract language has not received the same scrutiny as their risk documentation.
The AI Threshold Problem Government IT Can’t Measure
February 6, 2026
Government IT leaders face competing mandates to modernize with AI while maintaining digital sovereignty. The problem is not lack of metrics — agencies are accumulating AI KPIs — but that measurement frameworks built for earlier technology generations cannot price what sovereignty actually costs.
Develops the threshold question that matters for state CIO investment decisions and argues for measurement frameworks built around decision logic rather than activity metrics.
Working Papers & Technical Methods
GIAG Research Series
The GIAG Research Series documents the theoretical and empirical foundations of the initiative’s research streams. Working papers develop the core arguments. Technical methods papers document the measurement approaches applied. These are the reference documents underlying the newsletter analysis and practitioner articles.
Implementation Fidelity: Why AI RMF Adoption Metrics Are Measuring the Wrong Thing
GIAG Research Series — March 2026
Defines implementation fidelity as the degree to which a governance framework changes actual decision behavior — and distinguishes it from documentation compliance, adoption rates, and reporting scores, which current practice conflates with it.
Draws on the software measurement community’s resolution of the lines-of-code problem to argue that the same conceptual move is required in AI governance. Develops the measurement framework for GIAG Stream One and introduces three concepts that current practice incorrectly treats as proxies for implementation fidelity.
Functional Sizing as a Foundation for AI Governance Measurement
GIAG Research Series — March 2026 · Applying Function Point Analysis and COSMIC to AI System Scope and Complexity
Documents the application of Function Point Analysis and the COSMIC functional size measurement method to the problem of AI system scope characterization. Argues that governance frameworks built on adoption metrics fail at the same structural level that pre-FPA software metrics failed.
Applies Albrecht’s FPA methodology — validated by Capers Jones at Software Productivity Research across 250+ enterprise assessments — to AI system scope characterization, then develops COSMIC-based extensions for the internal computational behavior that FPA alone does not address.
Get Involved
Participate in This Research
GIAG is conducting structured interviews with government IT leaders, AI governance practitioners, and policy implementers with direct experience in federal, state, or local government AI deployment or oversight.
Participation involves a single 30–45 minute interview. Participants receive early access to preliminary findings and may be acknowledged by name or participate anonymously.
Accounts of difficulty or partial implementation are as valuable as accounts of success. Direct experience within the past 18 months is the primary qualifier.