GIAG Research Output • 2026

Publications

Research newsletters, practitioner articles, and working papers from the Government IT/AI Governance Initiative. All publications are freely available for download.

What We Publish and Why

ThinkCapital publications span three formats. The Government AI in Practice newsletter delivers research analysis and field observations to practitioners on a regular schedule. Short-form research articles address specific governance questions with enough depth to be useful without requiring a full working paper. The GIAG Research Series working papers and technical methods documents are the most substantive output — intended for researchers, senior practitioners, and policy audiences who need the underlying argument and evidence, not just the conclusions.

All publications are freely available. Working papers and technical methods documents may be cited with attribution for non-commercial research and professional purposes.

Subscribe to Government AI in Practice

The newsletter is published on Substack. New issues go to subscribers first, with archived issues available here. If the research questions GIAG is working on are relevant to your work, the newsletter is the fastest way to stay current.

Subscribe on Substack →

Research Newsletter

2 Issues

Research Articles

2 Articles

Short-form analytical pieces on specific government IT and AI governance questions, distributed through professional networks. Each develops a focused argument grounded in the same measurement discipline as the longer GIAG working papers.

Research Article

Your AI Governance Framework Won’t Save You. Your Contract Might.

March 23, 2026

Treats the Pentagon–Anthropic–OpenAI sequence of late February and early March 2026 as a live case study in AI governance architecture. The dispute was not resolved by NIST RMF compliance, OMB memoranda, or any risk management documentation. It was resolved by contract language.

“The operational governance that actually constrains AI behavior in deployment does not live in policy frameworks. It lives in contract terms, technical configurations, and vendor relationships.”

Examines the supply chain risk designation as a governance architecture story, and draws the implication for every CIO whose AI contract language has not received the same scrutiny as their risk documentation.

Procurement · M-25-22 · Contract Governance · Vendor Risk · Supply Chain · CIO Decision-Making
Download PDF  ·  2 pages

Research Article

The AI Threshold Problem Government IT Can’t Measure

February 6, 2026

Government IT leaders face competing mandates to modernize with AI while maintaining digital sovereignty. The problem is not lack of metrics — agencies are accumulating AI KPIs — but that measurement frameworks built for earlier technology generations cannot price what sovereignty actually costs.

“At what threshold does AI process automation become mission-critical enough to require sovereign controls? You can’t measure jurisdictional control in the same framework you use to measure server utilization.”

Develops the threshold question that matters for state CIO investment decisions and argues for measurement frameworks built around decision logic rather than activity metrics.

AI Thresholds · Digital Sovereignty · Measurement · State CIOs · ROI Frameworks · Mission-Critical AI
Download PDF  ·  1 page

Working Papers & Technical Methods

GIAG Research Series

The GIAG Research Series documents the theoretical and empirical foundations of the initiative’s research streams. Working papers develop the core arguments. Technical methods papers document the measurement approaches applied. These are the reference documents underlying the newsletter analysis and practitioner articles.

Working Paper  •  WP-1

Implementation Fidelity: Why AI RMF Adoption Metrics Are Measuring the Wrong Thing

GIAG Research Series  —  March 2026

Defines implementation fidelity as the degree to which a governance framework changes actual decision behavior — and distinguishes it from documentation compliance, adoption rates, and reporting scores, which current practice conflates with it.

“Current AI RMF adoption metrics count governance documentation activity. They measure the governance equivalent of lines of code: technically precise, functionally uninformative about what the governance system delivers.”

Draws on the software measurement community’s resolution of the lines-of-code problem to argue that the same conceptual move is required in AI governance. Develops the measurement framework for GIAG Stream One and introduces three concepts that current practice incorrectly treats as proxies for implementation fidelity.

NIST AI RMF · Implementation Fidelity · Governance Metrics · Measurement Frameworks · Function Points · Capers Jones · M-25-21

Technical Methods  •  D-1

Functional Sizing as a Foundation for AI Governance Measurement

GIAG Research Series  —  March 2026  ·  Applying Function Point Analysis and COSMIC to AI System Scope and Complexity

Documents the application of Function Point Analysis and the COSMIC functional size measurement method to the problem of AI system scope characterization. Argues that governance frameworks built on adoption metrics fail at the same structural level at which pre-FPA software metrics failed.

“Adoption rates, documentation scores, and compliance checklists in AI governance represent the same category of failure as lines-of-code metrics. They describe activity at the implementation layer without reaching the functional layer where governance either works or does not work.”

Applies Albrecht’s FPA methodology — validated by Capers Jones at Software Productivity Research across 250+ enterprise assessments — to AI system scope characterization, then develops COSMIC-based extensions for the internal computational behavior that FPA alone does not address.

Function Point Analysis · COSMIC · Software Measurement · AI Scope · Governance Metrics · IFPUG · SPR
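
For readers who have not worked with functional sizing, the sketch below illustrates the core move of Albrecht's FPA that the paper builds on: classify each user-visible function by type and complexity, then sum standard IFPUG weights to get an unadjusted function point count. This is a minimal illustration only; the component names in the inventory are hypothetical and are not drawn from the paper, and the paper's COSMIC extensions for internal computational behavior are not shown.

# Minimal sketch of unadjusted Function Point counting using the
# standard IFPUG complexity weights. The AI-system components in the
# inventory below are hypothetical illustrations, not examples taken
# from the GIAG papers.

WEIGHTS = {
    "EI":  {"simple": 3, "average": 4,  "complex": 6},   # External Inputs
    "EO":  {"simple": 4, "average": 5,  "complex": 7},   # External Outputs
    "EQ":  {"simple": 3, "average": 4,  "complex": 6},   # External Inquiries
    "ILF": {"simple": 7, "average": 10, "complex": 15},  # Internal Logical Files
    "EIF": {"simple": 5, "average": 7,  "complex": 10},  # External Interface Files
}

def unadjusted_fp(components):
    """Sum IFPUG weights over (function_type, complexity) pairs."""
    return sum(WEIGHTS[ftype][cplx] for ftype, cplx in components)

# Hypothetical scope inventory for an AI-assisted case-triage system.
inventory = [
    ("EI",  "average"),  # citizen intake form feeding the model
    ("EO",  "complex"),  # model-generated triage recommendation report
    ("EQ",  "simple"),   # status lookup by case number
    ("ILF", "average"),  # internally maintained case store
    ("EIF", "complex"),  # externally maintained eligibility reference data
]

print("Unadjusted function points:", unadjusted_fp(inventory))
# 4 + 7 + 3 + 10 + 10 = 34

The measurement point the abstract makes is visible in the sketch: size is derived from what the system does for its users, not from how much implementation or documentation activity surrounds it.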
Citation and use. Working papers and technical methods documents are copyright ThinkCapital LLC. They may be cited and shared for non-commercial research and professional purposes with attribution. Suggested format: Bragen, M. (2026). [Title]. ThinkCapital GIAG Research Series. ThinkCapital LLC. thinkcapital.org/publications.html. For other uses, contact via the Engage page.

Get Involved

Participate in This Research

GIAG is conducting structured interviews with government IT leaders, AI governance practitioners, and policy implementers with direct experience in federal, state, or local government AI deployment or oversight.

Participation is a single 30–45 minute interview. Participants receive early access to preliminary findings and may be acknowledged by name or participate anonymously.

Express Interest in Participating  ·  View Research Program

Accounts of difficulty or partial implementation are as valuable as accounts of success. Direct experience within the past 18 months is the primary qualifier.