Research Participation Prospectus — Version 1.0, February 2026
Full details on research design, participation requirements, data handling, privacy rights, and what participants receive.
Context
Why This Research Is Needed Now
Agencies across federal and state government are operating under formal AI governance requirements. NIST AI Risk Management Framework adoption is accelerating. Oversight mechanisms are being documented, approved, and filed. Executive orders and OMB guidance have created a compliance infrastructure that did not exist three years ago.
What is largely absent from the literature is empirical research on whether any of it is working. Most governance commentary is normative: what agencies should do, what frameworks they should adopt, what policies should require. Research on what is actually happening in practice is nearly nonexistent. Are existing frameworks producing the accountability outcomes they were designed for?
That gap matters because a framework that looks good on paper and a framework that functions well under operational conditions are two different things. Measurement discipline is what separates frameworks that hold from frameworks that merely perform.
The Connective Thread: Measurement
Neither governance effectiveness nor oversight quality can be evaluated without agreed measurement criteria. ThinkCapital's contribution is methodological: applying systematic assessment frameworks developed through hundreds of enterprise IT evaluations and informed by international measurement standards bodies including IFPUG, NESMA, and COSMIC. The goal is empirical findings that practitioners can use immediately.
Research Design
Research Streams
The initiative comprises two active streams plus two forming streams, each addressing a distinct dimension of public sector AI governance.
NIST AI RMF Implementation in Government Practice
The NIST AI Risk Management Framework is rapidly becoming the default governance reference for federal and state agencies. Yet there is almost no empirical research on how it performs in practice across different organizational contexts.
This stream examines implementation patterns across agency types, sizes, and mission profiles through structured interviews with government IT leaders who have direct deployment or evaluation experience. Key dimensions include framework adaptation to agency context, measurement of implementation fidelity, and conditions under which RMF adoption produces durable governance versus audit-cycle compliance.
Human Oversight Quality in AI-Augmented Government Operations
Most AI governance requirements treat human oversight as binary: present or absent. The more consequential variable is whether oversight mechanisms constitute genuine controls or documented assumptions.
This stream develops a typology of oversight models observed across government AI deployments, tests them against real operational scenarios, and constructs a measurement framework for evaluating oversight quality. Key dimensions include oversight mechanism design across agentic and assistive AI systems, intervention point architecture, audit trail integrity, and the conditions under which oversight requirements produce functional accountability versus rubber-stamp compliance.
Forming
Additional Streams Under Development
AI Adoption Thresholds & Organizational Tipping Points
Applying Schelling-Granovetter threshold models to explain why identical AI initiatives succeed in some agencies and stall in others, and which interventions shift adoption past the tipping point. A brief illustrative sketch of the threshold dynamics appears below.
AI Productivity Measurement for Government Missions
Developing empirical methods for assessing whether AI tools deliver measurable improvements in government mission outcomes, not just technology metrics.
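To make the threshold-model framing above concrete, the sketch below simulates a simple Granovetter-style adoption cascade: each staff member has an adoption threshold and adopts once the share of colleagues already using a tool meets it. All parameters (population size, seed fraction, threshold distributions) are hypothetical illustrations, not study data.

```python
# Illustrative Granovetter-style threshold cascade.
# All parameters below are hypothetical; they are not drawn from study data.
import random

def cascade(thresholds, seed_fraction=0.05):
    """Run adoption to a fixed point; return the final share of adopters."""
    n = len(thresholds)
    adopted = [random.random() < seed_fraction for _ in range(n)]
    changed = True
    while changed:
        share = sum(adopted) / n
        changed = False
        for i, t in enumerate(thresholds):
            # An agent adopts once the current adoption share meets its threshold.
            if not adopted[i] and share >= t:
                adopted[i] = True
                changed = True
    return sum(adopted) / n

if __name__ == "__main__":
    random.seed(1)
    # Two hypothetical agencies with the same mean threshold (~0.41) but
    # different distributions: one spread evenly, one clustered.
    spread_out = [random.uniform(0.0, 0.82) for _ in range(500)]
    clustered = [0.05] * 50 + [0.45] * 450
    print("evenly spread thresholds ->", round(cascade(spread_out), 2))
    print("clustered thresholds     ->", round(cascade(clustered), 2))
```

The point of the sketch is distributional, not numerical: where thresholds cluster within an organization, rather than average willingness to adopt, determines whether an initiative tips into broad adoption or stalls.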
How We Work
Methodology
Both active streams apply a comparative assessment methodology developed through three decades of government IT research. The approach is empirical and practitioner-oriented, grounded in systematic assessment disciplines pioneered at Software Productivity Research and refined through advisory engagements with federal agencies, defense organizations, and state governments.
Structured Practitioner Interviews
Primary data collection through 30–45 minute structured interviews with government IT leaders, AI governance practitioners, and policy implementers with direct deployment or evaluation experience within the past 18 months.
Comparative Framework Analysis
Cross-agency analysis of governance documentation, implementation records, and self-reported outcomes using standardized assessment criteria drawn from international measurement standards work.
Pattern Identification and Framework Development
Synthesis of interview and documentary findings into actionable decision frameworks for practitioners — distinguishing conditions that produce durable governance from those that produce compliance theater.
Schedule
Research Timeline
Research Design and Participant Recruitment
Finalizing interview instruments, recruiting practitioner participants across both streams, establishing comparative assessment criteria and baseline documentation protocols.
Primary Data Collection
Conducting structured interviews, collecting and coding documentary evidence, and iteratively developing frameworks alongside data collection.
Analysis and Draft Findings
Cross-case analysis, pattern identification, development of practitioner decision frameworks. Draft findings shared with participants for review before publication.
Publication and Dissemination
Final findings published through ThinkCapital and submitted to relevant practitioner and academic venues. Participant sector briefings available on request.
Get Involved
Participate in This Research
We are conducting structured interviews with government IT leaders, AI governance practitioners, and policy implementers who have direct experience with AI deployment or governance framework implementation in federal, state, or local government settings.
Participation involves a single 30–45 minute structured interview. Participants receive early access to preliminary findings and may be acknowledged by name or participate anonymously.
We are particularly interested in practitioners with direct experience implementing or evaluating AI governance frameworks in the past 18 months. Accounts of difficulty or partial implementation are as valuable as accounts of success.