GIAG — A ThinkCapital Research Program

Active Research Program • Launched 2026

Government IT/AI Governance Initiative

A comparative research program examining measurement discipline and accountability frameworks in public sector AI deployment. We produce findings practitioners can actually use.

Research Participation Prospectus — Version 1.0, February 2026

Full details on research design, participation requirements, data handling, privacy rights, and what participants receive.

Download Prospectus (PDF)

I'm Interested →

Why This Research Is Needed Now

Agencies across federal and state government are operating under formal AI governance requirements. NIST AI Risk Management Framework adoption is accelerating. Oversight mechanisms are being documented, approved, and filed. Executive orders and OMB guidance have created a compliance infrastructure that did not exist three years ago.

What is largely absent from the literature is empirical research on whether any of it is working. Most governance commentary is normative: what agencies should do, what frameworks they should adopt, what policies should require. Research on what is actually happening in practice is nearly nonexistent. Are existing frameworks producing the accountability outcomes they were designed for?

That gap matters because a governance framework that looks good on paper and one that functions well under operational conditions are two different things. Measurement discipline is what separates frameworks that hold from frameworks that merely perform.

The Connective Thread: Measurement

Neither governance effectiveness nor oversight quality can be evaluated without agreed measurement criteria. ThinkCapital's contribution is methodological: applying systematic assessment frameworks developed through hundreds of enterprise IT evaluations and informed by international measurement standards bodies including IFPUG, NESMA, and COSMIC. The goal is empirical findings that practitioners can use immediately.

Research Streams

The initiative comprises two active streams plus two forming streams, each addressing a distinct dimension of public sector AI governance.

Stream One  •  Active

NIST AI RMF Implementation in Government Practice

The NIST AI Risk Management Framework is rapidly becoming the default governance reference for federal and state agencies. Yet there is almost no empirical research on how it performs in practice across different organizational contexts.

"Where is NIST AI RMF adoption generating genuine risk reduction, and where is it producing compliance documentation without corresponding accountability?"

This stream examines implementation patterns across agency types, sizes, and mission profiles through structured interviews with government IT leaders who have direct deployment or evaluation experience. Key dimensions include framework adaptation to agency context, measurement of implementation fidelity, and conditions under which RMF adoption produces durable governance versus audit-cycle compliance.

Stream Two  •  Active

Human Oversight Quality in AI-Augmented Government Operations

Most AI governance requirements treat human oversight as binary: present or absent. The more consequential variable is whether oversight mechanisms constitute genuine controls or documented assumptions.

"What does meaningful human oversight actually require at scale, and how does current government practice compare against any reasonable operational definition of it?"

This stream develops a typology of oversight models observed across government AI deployments, tests them against real operational scenarios, and constructs a measurement framework for evaluating oversight quality. Key dimensions include oversight mechanism design across agentic and assistive AI systems, intervention point architecture, audit trail integrity, and the conditions under which oversight requirements produce functional accountability versus rubber-stamp compliance.

Additional Streams Under Development

○ Forming

AI Adoption Thresholds & Organizational Tipping Points

Applying Schelling-Granovetter threshold models to explain why identical AI initiatives succeed in some agencies and stall in others — and how to move the needle.

Express Interest →
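The intuition behind threshold models can be seen in a toy simulation (not part of the research instruments; the threshold values here are illustrative): each practitioner adopts once the share of colleagues already adopting meets their personal threshold, so two agencies with nearly identical threshold distributions can end up at very different equilibria.

```python
def cascade(thresholds):
    """Granovetter-style cascade: return the final number of adopters.

    Each value in `thresholds` is the fraction of prior adopters an
    individual needs to see before adopting themselves (0.0 = seed adopter).
    """
    n = len(thresholds)
    adopted = 0
    while True:
        # Everyone whose threshold is met by the current adoption share adopts.
        new_adopted = sum(1 for t in thresholds if t <= adopted / n)
        if new_adopted == adopted:      # fixed point reached
            return adopted
        adopted = new_adopted

# Agency A: evenly spread thresholds -> each adopter tips the next person.
agency_a = [i / 10 for i in range(10)]   # 0.0, 0.1, ..., 0.9
# Agency B: one seed adopter, but everyone else needs 20% adoption first.
agency_b = [0.0] + [0.2] * 9

print(cascade(agency_a))  # full cascade: 10
print(cascade(agency_b))  # stalls at the seed: 1
```

The point the stream investigates is exactly this sensitivity: a small shift in the threshold distribution, not in the initiative itself, can decide whether adoption cascades or stalls.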
○ Forming

AI Productivity Measurement for Government Missions

Developing empirical methods for measuring whether AI tools are delivering measurable improvements in government mission contexts — not just technology metrics.

Express Interest →

Methodology

Both active streams apply comparative assessment methodology developed through three decades of government IT research. The approach is empirical and practitioner-oriented, grounded in systematic assessment disciplines pioneered at Software Productivity Research and refined through advisory engagements with federal agencies, defense organizations, and state governments.

Structured Practitioner Interviews

Primary data collection through 30–45 minute structured interviews with government IT leaders, AI governance practitioners, and policy implementers with direct deployment or evaluation experience within the past 18 months.

Comparative Framework Analysis

Cross-agency analysis of governance documentation, implementation records, and self-reported outcomes using standardized assessment criteria drawn from international measurement standards work.

Pattern Identification and Framework Development

Synthesis of interview and documentary findings into actionable decision frameworks for practitioners — distinguishing conditions that produce durable governance from those that produce compliance theater.

This research builds on measurement frameworks developed through engagement with IFPUG, COSMIC, and others, and is informed by ongoing collaboration with researchers in the software measurement and AI governance communities. ThinkCapital is an independent research organization; this program is not commissioned by or affiliated with any AI vendor, platform provider, or government agency.

Research Timeline

1
Q1 2026 — Current

Research Design and Participant Recruitment

Finalizing interview instruments, recruiting practitioner participants across both streams, establishing comparative assessment criteria and baseline documentation protocols.

2
Q2 2026

Primary Data Collection

Conducting structured interviews, collecting and coding documentary evidence, iterative framework development alongside data collection.

3
Q3 2026

Analysis and Draft Findings

Cross-case analysis, pattern identification, development of practitioner decision frameworks. Draft findings shared with participants for review before publication.

4
Q4 2026

Publication and Dissemination

Final findings published through ThinkCapital and submitted to relevant practitioner and academic venues. Participant sector briefings available on request.

Get Involved

Participate in This Research

We are conducting structured interviews with government IT leaders, AI governance practitioners, and policy implementers who have direct experience with AI deployment or governance framework implementation in federal, state, or local government settings.

Participation involves a single 30–45 minute structured interview. Participants receive early access to preliminary findings and may be acknowledged by name or participate anonymously.

Express Interest in Participating Download Full Prospectus

We are particularly interested in practitioners with direct experience implementing or evaluating AI governance frameworks in the past 18 months. Accounts of difficulty or partial implementation are as valuable as accounts of success.