Tax Filing as Easy as Mobile Banking: Creating Product-Driven Government
Americans trade stocks instantly, but spend 13 hours on tax forms. They send cash by text, but wait weeks for IRS responses. The nation’s revenue collector ranks dead last in citizen satisfaction. The problem isn’t just paperwork — it’s how the government builds.
The fix: build for users, not compliance. Ship daily, not yearly. Cultivate talent, don’t rent it. Apple doesn’t outsource the creation of its products; the IRS shouldn’t outsource taxpayer experience.
The goal: make taxes as easy as mobile banking.
The IRS, backed by a Congress and an administration that truly want real improvements and efficiencies, must invest in building its tax products in-house. Start by establishing a Chief Digital Officer (CDO) at the IRS, directly reporting to the Commissioner. This CDO must have the authority to oversee digital and business transformation across the organization. This requires hiring hundreds of senior engineers, product managers, and designers—all deeply embedded with IRS accountants, lawyers, and customer service agents to rebuild taxpayer services. This represents true government efficiency: redirecting contractor spending to fund internal teams that build what American taxpayers should own rather than rent.
This is about more than broken technology. This is a roadmap for building modern, user-centric government organizations. The IRS touches every American, making it the perfect lab for proving the government can work.
Transform the IRS first, then apply these principles across every agency where citizens expect digital experiences that actually work.
Challenge & Opportunity
It’s April 15th. For the first time, you’re not fretting.
You finished filing your taxes on a free app. It took 15 minutes. Your income? Already there. Your credits? Pre-calculated and ready to claim. Your refund? Hitting your bank account tomorrow.
For millions around the world, swift, painless tax filing isn’t a dream. It’s the norm. It should be for Americans, too.
But in the U.S., the IRS experience is still slow, opaque, process-heavy, and frustrating. Tax filing is one of the few universal interactions Americans have with their government—and it’s not one that earns much trust.
It doesn’t have to be this way. We were on the path to delivering that with IRS Direct File; we need to recommit. To deliver wildly easier taxes for Americans, we can, and must, build an IRS that meets high modern expectations: fast, transparent, digital-first, and relentlessly taxpayer-focused.
The Diagnosis
Each year, the IRS collects more than 96% of the revenue that funds the federal government—$5.1 trillion supporting everything from Social Security and defense to infrastructure, veterans’ services, and investments in America’s future.
The quote from Justice Oliver Wendell Holmes, carved into the limestone face of the IRS headquarters in D.C., captures the spirit well:
“Taxes are what we pay for civilized society.”
Taxation is not only essential to the functioning of government—it is also a major way most Americans interact with it. And that experience? Frustrating, costly, and confusing. According to a recent Pew survey, Americans rate the IRS less favorably than any other federal agency. The average taxpayer spends 13 hours and $270 out of pocket just to file their return.
The core problem: The IRS needs to be user-focused.
Despite the stakes, the IRS operates far behind what Americans expect. We live in a world where people can tap to pay, split bills by text, or trade stocks in slick apps. But that world does not include the IRS.
A staggering 63% of the 10.4 billion hours Americans spend dealing with the federal government are consumed by IRS paperwork. But much of that pain originates not with the IRS but with Congress, through the crushing complexity of decades of tax-code changes sedimented on top of one another. This year was no different. The “One Big Beautiful Bill” runs 331 pages, with large swaths devoted to new, intricate tax changes.
Dealing with the IRS still often involves paper forms, long phone waits, chasing down documents, and confusing processes.
If you’ve dealt with the IRS for anything beyond filing, it feels impossible to get a task finished. Will someone pick up the phone? Can I get an answer to my questions and resolve my situation? Would I get the same answer if I talked to someone else? Last year the IRS answered just 49% of the 100 million calls it received, a figure that includes automated answering.
This underperformance goes beyond outdated technology—it’s structural and institutional. The IRS’s core systems are brittle and fragmented. Ancient procurement rules and funding constraints have made sustained modernization nearly impossible. Siloed organizations sit within silos. In place of long-term investment, the agency leans heavily on short-term contractor fixes, band-aids applied to legacy wounds.
This complexity has stymied scaled change.
The root cause: The IRS has never treated world-class technology and product development as mission-critical capabilities core to its identity, to be hired, owned, and continually improved by internal teams focused on user outcomes.
A modern service agency builds end-to-end experiences for users—from pre-populating data through to filing and refunds. Empowered teams building these features have a holistic viewpoint and control over their service to ensure taxpayers are able to repeatedly and reliably complete their task.
Today’s reality is different: federal agencies like the IRS treat technical and product expertise as afterthoughts—all nice-to-haves that serve bureaucratic processes rather than core capabilities essential to their mission. Strategy and execution get outsourced by default. This creates a growing divide between “business” and “IT” teams, each lacking a deep understanding of the other’s work, despite both being critical to delivering services that actually function for taxpayers.
This outsourcing has hollowed out the agency’s internal technical capacity. Rather than building technical competency in-house, and paying that talent salaries approaching those of private companies, the IRS grows more dependent on vendors. It no longer knows what it needs technically, what questions to ask, or which paths to pursue. Instead, it must trust the vendors: companies financially incentivized toward ballooning scopes, lock-in, and complexity.
The result: a siloed experience that mirrors a siloed organization, a risk-averse, paper-heavy IRS too slow to meet modern expectations.
The agency approaches service delivery as a compliance and bureaucratic process to digitize, rather than a product to design. “Never ship your org chart” is a common refrain at tech companies, explaining how products tend to mirror the communication structure of the organizations that build them. Yet IRS product fault lines visibly follow its org structure, and so fail to deliver a holistic experience.
There have been bright spots. Direct File showed what’s possible when empowered teams build for users. A dead-simple idea, letting Americans file taxes directly on the IRS site, became reality. It worked. It was well regarded. In surveys, users beamed about Direct File: 9 out of 10 gave it an “excellent” or “above average” rating, 74% said they preferred it over what they used before, and 86% said it increased their trust in the IRS.
The government actually delivered for its citizens, and they felt it.
But it didn’t last. The project was abruptly dismantled due to political ideology, not taxpayer experience or feedback.
Many of the people with the technical skills and vision to modernize the IRS have left, often without a choice. The agency will likely slide further backward—into deeper dependence on systems built by the lowest bidder or those currying political favor, with poorer service and diminished public trust in return.
We’ve seen this up close.
Both of us worked at the White House’s technology arm, the U.S. Digital Service. One of us helped lead Direct File into existence and built the Consumer Financial Protection Bureau’s digital team. The other previously led Google’s first large language model products and prototyped AI tools at the IRS to streamline internal knowledge work.
In our work at the IRS, we witnessed how far the agency must go. Inside the IRS Commissioner’s office, with leaders across the agency, we built a collaborative digital strategic plan. This memo details those proposals, since left by the wayside as seven different IRS commissioners rotated through the seat this year alone.
The IRS needs more than modernization. It will need a systemic rebuild from:
- compliance to user-centered design and product thinking
- vendor dependence to empowered internal product teams
- once-a-year panic to real-time, year-round services
- fragile mainframes to composable platforms and APIs
- waterfall contracting to iterative, continuous delivery
We’re sharing these recommendations for a future Day One—when there’s a refocus on rebuilding the government. When that day comes, the blueprint will be here: drawn from inside experience, built on hard lessons, and focused on what it will take to deliver a digital IRS that truly works for the American people.
What we need is the mandate to build a tax system that makes Americans think: “That was it? That was easy.”
Plan of Action
The IRS must rebuild taxpayer services around citizen needs rather than compliance and bureaucratic processes. This requires in-housing the talent to strategically build it. We propose establishing a Chief Digital Officer, directly reporting to the Commissioner, with the authority to oversee digital and business transformation across the organization and to hire hundreds of senior engineers, product managers, and designers. The goal: a team empowered to deliver a tax-filing product experience that meets modern expectations.
The Products
Build for Users, Not Internal Compliance
We’ve become accustomed to a user-focused fit-and-finish in the app era. Let’s deliver that same level for taxpayers.
It all starts around building a digital platform that empowers taxpayers, businesses, and preparers with the information, tools and services to handle taxes accurately and confidently. A fully-featured online account becomes the one-stop, self-service hub for all tasks. Taxpayers access their complete tax profile, updated in real-time, with current data across income sources, financial institutions, and full tax history. The system proactively recommends tax breaks, credits, and withholding adjustments they’re eligible for.
Critically, this can’t be built in a vacuum. It requires rapid iteration with users as part of a constant feedback loop. This digital platform runs on robust APIs that power internal tools, IRS public sites, and third-party software. Building this way ensures alignment across IRS teams, eliminates duplicate efforts, and lifts the entire tax software ecosystem.
This is what we need to build for Americans:
Online tax filing: From annual panic to year-round readiness
Reboot Direct File. Stop forcing everything into tax season. Let taxpayers update information year-round—add a child, change addresses, adjust withholdings, upload documents. When April arrives, their return is already 90% complete.
This is a natural evolution of Direct File and the existing non-editable online account dashboard into a living, breathing system taxpayers optimize throughout the year. And not just for individuals—this should be extended to businesses—reducing this burden for as many filer types as possible.
Pre-populated returns: Stop making people provide what the IRS already knows
The IRS already has W-2s, 1099s, and financial data. Use it. Pre-populate returns to cut filing time from hours to minutes. Deliver secure APIs so any tax software can access IRS data (with taxpayer permission), and use machine learning to flag issues including fraud before submission. This increases accuracy, reduces errors, and spurs competition by making it easy to switch between tax-filing programs.
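As a sketch of what consent-gated pre-population might look like in practice, the snippet below assembles a draft return from information returns the agency already holds. Everything here is an illustrative assumption rather than an actual IRS interface: the form types, the field names, and the consent flow mentioned in the comments.

```python
# Hypothetical sketch: assembling a pre-filled draft return from documents
# the IRS already holds. Types and fields are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class InformationReturn:
    form: str    # e.g., "W-2" or "1099-INT"
    payer: str
    amount: float

@dataclass
class PrefilledReturn:
    tax_year: int
    wages: float = 0.0
    interest: float = 0.0
    documents: list[InformationReturn] = field(default_factory=list)

def prefill(tax_year: int, documents: list[InformationReturn]) -> PrefilledReturn:
    """Fold the information returns on file into a draft the filer only reviews."""
    draft = PrefilledReturn(tax_year=tax_year, documents=documents)
    for doc in documents:
        if doc.form == "W-2":
            draft.wages += doc.amount
        elif doc.form == "1099-INT":
            draft.interest += doc.amount
    return draft

# Third-party software would fetch these documents only after the taxpayer
# grants permission (an OAuth-style consent flow), then render the draft:
draft = prefill(2025, [InformationReturn("W-2", "Acme Corp", 72_000.00),
                       InformationReturn("1099-INT", "First Bank", 312.55)])
print(draft.wages, draft.interest)
```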
Income verification as a service: Turn tax data into financial opportunity
The IRS sits on verified income data that could help Americans access government services, credit, mortgages, and benefits like student aid. Instead of weeks-long transcript requests, offer instant verification through secure APIs. This creates a government-backed source alongside credit bureaus, increases financial access, and reduces paperwork across all government services.
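To make the idea concrete, here is a minimal sketch of what an instant verification payload could contain, assuming a consent token and income reported in bands to limit disclosure. The endpoint, fields, and identifiers are hypothetical, not an existing IRS service.

```python
# Hypothetical shape of an income-verification response. In a real system the
# consent token would be validated and the payload cryptographically signed.
import json
from datetime import date, timedelta

def verify_income(taxpayer_ref: str, tax_year: int, consent_token: str) -> dict:
    """Return a time-limited attestation instead of a full transcript."""
    return {
        "taxpayer": taxpayer_ref,                # opaque reference, never an SSN
        "tax_year": tax_year,
        "verified_agi_band": "$60,000-$80,000",  # banded to limit disclosure
        "expires": (date.today() + timedelta(days=30)).isoformat(),
    }

print(json.dumps(verify_income("tp_84af", 2024, "consent_abc123"), indent=2))
```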
Tax calculator as a platform: One source of truth
Every tax software company recreates the same calculations, each slightly different. Across the organization, the IRS itself uses multiple third-party tax calculators in audits. This should be a core, integral service the IRS offers—build a definitive tax calculator as an API, the single source of truth that internal audits and checks use, and that external software can access or run on its own. Make it transparent, auditable, and open source. Put up cash “bounties” to encourage the public to find bugs and errors, and invite critics of the tax system to review the code. Use generative AI to help IRS accountants, lawyers, and engineers translate tax law changes into code, speeding the rollout of Congressional tax changes.
When everyone calculates taxes the same way the IRS does, errors vanish. When everyone can see how the IRS does it, trust grows.
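As a minimal sketch of that single-source-of-truth idea: the brackets live in one open, auditable data structure, and one pure function applies them, so internal audit tools and third-party software compute identical answers. The bracket figures below are placeholders, not current law.

```python
# Illustrative marginal brackets; real tables would be generated from statute
# and published openly alongside the code.
BRACKETS = [          # (upper bound of bracket, marginal rate)
    (11_000, 0.10),
    (44_725, 0.12),
    (95_375, 0.22),
    (float("inf"), 0.24),
]

def tax_owed(taxable_income: float) -> float:
    """Apply marginal rates bracket by bracket; pure and deterministic, so
    every caller (audits, filing software, researchers) gets the same answer."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        owed += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return round(owed, 2)

assert tax_owed(0) == 0.0
print(tax_owed(60_000))  # 8507.50 under these placeholder brackets
```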
Modern MeF: From submission pipe to intelligent platform
Today’s Modernized e-File (MeF) is barely modern—it’s a dumb pipe that accepts tax returns and hopes for the best. Transform it into intelligent infrastructure that validates in real-time, catches errors immediately (not weeks later in confusing notices), and stops fraud before refunds are deposited. Build it like a real API, not XML dumps. Enable multi-part submissions so taxpayers can fix mistakes without starting over. This isn’t just a technical upgrade—it’s the foundation that makes every other improvement possible.
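A toy illustration of the validate-as-you-go, multi-part model described above. The part names, fields, and rules are invented for the sketch; real MeF schemas are far richer.

```python
# Each part of a return is validated synchronously on submission, so the
# filer fixes one part without resubmitting everything.
from typing import NamedTuple

class ValidationError(NamedTuple):
    field: str
    message: str

def validate_part(part: dict) -> list[ValidationError]:
    """Immediate checks, replacing errors that today surface weeks later."""
    errors = []
    if "ssn_last4" in part and not str(part["ssn_last4"]).isdigit():
        errors.append(ValidationError("ssn_last4", "must be digits"))
    if "wages" in part and part["wages"] < 0:
        errors.append(ValidationError("wages", "cannot be negative"))
    return errors

submission = {}  # parts accumulate server-side under one submission ID
for name, part in [("identity", {"ssn_last4": "1234"}),
                   ("income", {"wages": 72_000})]:
    problems = validate_part(part)
    if problems:
        print(f"fix {name}: {problems}")  # filer corrects just this part
    else:
        submission[name] = part           # accepted; other parts unaffected
print(sorted(submission))                 # ['identity', 'income']
```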
The Process
Ship Daily, Not Yearly
Taxpayer-first product development
The IRS is the single largest interaction point between Americans and their government. Every improvement saves millions of hours and builds trust. This requires abandoning bureaucratic processes for product thinking.
Build with taxpayers from day one through constant user testing and feedback loops. Organize around taxpayer journeys—”I need to update my withholdings” or “I’m checking my refund”—not org charts.
Measure what matters: time-to-file, satisfaction scores, error rates, not only compliance metrics. Internal Objectives and Key Results planning makes priorities clear and syncs the organization toward focused goals. Publish Service Level Objectives on external products to ensure we build systems that others can confidently rely on and build upon.
Give full-stack product teams the authority to make integrated technical, design, policy, and legal decisions together. Staff these teams with internal technologists embedded alongside accountants and lawyers in functional organizations, building IRS competency while reducing contractor dependence. Today’s IRS is highly siloed across functions, with authority so fragmented it’s unclear who “owns” what. Yet go to any top tech organization and you’ll see what we’re pushing for: aligned, cross-functional teams whose job is delivering, with clear ownership. Inherently, we’re pushing for more than a new team: we’re factoring unclear ownership out of the IT and business divisions altogether.
When teams own outcomes, we can better ensure taxpayer experience transforms from painful to painless.
API-first architecture
The IRS is fundamentally a data organization, yet information flows through siloed systems that can’t talk to each other. Amazon solved this with a simple mandate: all teams must expose their data and communicate through APIs. (This mindset planted the seeds of Amazon Web Services, the company’s most profitable division.)
The IRS needs the same revolution.
Every team exposes data and functionality through standardized REST APIs—no direct database access, no per-department clones of the data, no exceptions. Design every API to be externalizable (with strong access controls) from day one, unlocking government APIs to become platforms for innovation. When systems communicate through versioned APIs instead of tangled dependencies, teams can ship improvements daily without compromising everything else. This isn’t just technical architecture—it’s how modern organizations move fast without breaking things.
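To make the mandate concrete, here is a minimal sketch of a versioned, externalizable endpoint. FastAPI is assumed purely for brevity, and the refund-status service is invented; the point is the pattern, not the framework.

```python
# Sketch: one team's data exposed only through a versioned API. Breaking
# changes would ship as /v2 while /v1 stays frozen, so consuming teams
# migrate on their own schedule instead of breaking overnight.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class RefundStatus(BaseModel):
    submission_id: str
    state: str  # e.g., "received", "processing", "sent"

@app.get("/v1/refunds/{submission_id}", response_model=RefundStatus)
def refund_status_v1(submission_id: str) -> RefundStatus:
    # A real handler would call the owning team's service, never reach
    # directly into another team's database.
    return RefundStatus(submission_id=submission_id, state="processing")
```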
The People
Cultivate It, Don’t Outsource It: Build a Delivery Culture
A digital IRS that delivers for Americans cannot be built by the lowest bidder. Its core capability isn’t digitized forms–it’s people who can understand taxpayers’ needs, imagine solutions, design thoughtfully, ship them fast, listen to users, and keep improving based on feedback.
Silicon Valley understands this instinctively on two fronts. One, the fight for great engineers is the fight to build teams that can deliver great products. Two, no leading tech company outsources its own R&D. Delivering well-functioning and beloved products requires tight ownership of the product iteration loop.
Businesses long ago learned never to outsource a core competency. OpenAI would never outsource the training of its models, Apple its industrial design, Google its search algorithm, or Facebook its social graph. The same should be true for the IRS.
Yet despite accepting 93% of its tax returns digitally, the IRS still does not consider itself a digital-first agency. Building great teams is inseparable from building great taxpayer experiences. For decades, the agency has outsourced its technical mission and vision.
What we witnessed at the IRS was often vendor theater. Consultants transformed routine meetings into sales presentations when that time should have been dedicated to improving the products. Solutions specialists added layers of proprietary middleware even though readily available, enterprise-grade open-source solutions running on commodity servers could easily meet the objectives. All of this unfolded within an organizational culture where securing contracts took precedence over delivering meaningful outcomes. Contracts that, of course, cost multiples more than the price of a competent internal team.
Commodities like cloud infrastructure or off-the-shelf software that serve broad, generic needs should absolutely be acquired externally. But the IRS’s critical, taxpayer-facing products—the systems at the heart of filing, payments, and taxpayer accounts—must be built and owned internally. There is only one agency that collects taxes for the United States of America.
When everything is handed to vendors, the IRS sends more than money out the door; it loses institutional memory, technical craft, quality systems, and the ability to move quickly. A modern IRS cannot be built on rented skills.
Talent: Build a Permanent Product Core
This transformation starts with the people: build and keep an in-house corps of top-tier technologists—engineers, product managers, designers, user experience researchers—working in small, empowered, cross-functional teams hand in hand with fellow IRS accountants, auditors, customer service representatives, and lawyers. Not a handful of digital specialists scattered across a bureaucracy, as before, but several hundred people whose full-time job is delivering and evolving the IRS’s core taxpayer experiences and services.
- Create a dedicated Digital Profession inside the IRS, led by a Chief Digital Officer with the authority to hire, fire, and shape teams and technology stacks.
- Break the straitjacket of outdated civil service rules by creating specialist pay bands to compete for top talent, as the CFPB has done.
- Empower cross-functional teams to ship without endless escalation. Start small, test early, iterate quickly, and let those closest to the work make product decisions.
Funding: Invest in Teams, Not Projects
Current funding locks the IRS into one-off projects that end when the money runs out, leaving no path for iteration. A product-centered IRS needs enduring funding for enduring teams: long-lived services, not short-lived milestones. This should be no surprise for a tax organization. There are two certainties in life: death and taxes. We should properly set ourselves up to manage the latter.
- Fund continuous development rather than one-and-done “delivery.”
- Tie funding to taxpayer outcomes like faster filing, fewer errors, higher satisfaction, instead of compliance checklists.
- Secure multi-year budgets for core product teams so they can improve services year-round, not scramble for appropriations each cycle.
This shift will reduce long-term capital costs and ensure that every dollar invested keeps improving the taxpayer experience.
Quality & Standards: Build Once, Build Right
Owning our products means owning their quality. That requires clear, enforceable service standards for performance, usability, scalability, and accessibility that every IRS product must meet.
- Establish service performance benchmarks and hold teams accountable to them. These should be highly taxpayer-centric: time to file, support response time, ease of use.
- Create communities of practice inside the organization to share patterns, tooling, and lessons learned across the agency.
- Apply spend controls that tie contract renewals to measurable outcomes and prevent redundant vendor builds.
Culture Eats Strategy: Time to Invest in a Delivery Culture
“Culture eats strategy for breakfast,” as Peter Drucker famously said. Yet government agencies too often treat culture-building as off-limits or irrelevant. This is backwards. Creating a shared, collaborative culture centered on delivery isn’t just important; it’s the foundation that makes everything else possible. The hardest and most critical step is investing in people. Give employees space to collaborate meaningfully, contribute their expertise, and take ownership of outcomes. Leadership must empower teams with real authority, establish clear performance standards, and hold everyone accountable for meeting—or exceeding—those benchmarks. Without this cultural shift, even the best strategy becomes just another plan gathering dust.
When every product meets the same high standard, trust in the IRS will grow—because taxpayers will feel it in every interaction.
A template for all agencies
The IRS touches more Americans than any other federal agency, making it the perfect proof point that the government can deliver digital products that work seamlessly. And the principles are not unique to the IRS: build for users, not compliance; ship daily, not yearly; keep the talent in-house.
We believe these goals and strategies apply to nearly every agency and level of government. Imagine Social Security retirement planning tools that lead to easy withholding adjustments, a Medicare/Medicaid that is easy to enroll in, or a FEMA with easy-to-file disaster relief claims.
Transform the IRS along this path, and then use these lessons to reset and lift expectations between Americans and their government, delivering services so easy citizens say: “That was it? That was easy.”
Analytical Literacy First: A Prerequisite for AI, Data, and Digital Fluency
As digital technologies reshape every aspect of society, students must be equipped not only with specialized literacies (such as digital literacy, data literacy, and AI literacy), but with a foundational skill set that allows them to think critically, reason logically, and solve problems effectively. Analytical literacy is the scaffolding upon which more specialized literacies are built. Students in the 21st century need strong critical thinking skills like reasoning, questioning, and problem-solving before they can meaningfully engage with more advanced domains like digital, data, or AI literacy. Without these skills, students may struggle to engage critically with the technologies shaping their lives. We urge education leaders at the federal, state, and institutional levels to prioritize development of analytical literacy by incentivizing integration across disciplines, aligning standards, and investing in research and professional development.
Introduction
As society becomes increasingly shaped by digital technologies, data-driven decision-making, and artificial intelligence, the ability to think analytically is no longer optional; it is essential. While digital, data, and AI literacies focus on domain-specific skills, analytical literacy enables students to engage with these domains critically and ethically. Analytical literacy encompasses critical thinking, logical reasoning, and problem-solving, and equips students to interpret complex information, evaluate claims, and make informed decisions. These skills are foundational not only for academic success but for civic engagement and workforce readiness in the 21st century.
Despite its importance, analytical literacy remains unevenly emphasized in K–12 education. These disparities are often driven by systemic inequities in school funding, infrastructure, and access to qualified educators. According to NCES’s Education Across America report, rural schools and those in under-resourced communities frequently lack the professional development opportunities, instructional materials, and technology needed to support analytical skill-building. In contrast, urban and well-funded districts are more likely to offer inquiry-based curricula, interdisciplinary projects, and formative assessment tools that foster deep thinking. Additionally, while some schools integrate analytical thinking through inquiry-based learning, project-based instruction, or interdisciplinary STEM curricula, there is no consistent national framework guiding its development at this time. Instructional strategies vary widely by state or district, and standardized assessments often prioritize procedural fluency over deeper cognitive engagement like analytical reasoning.
Recent research underscores the urgency of this issue. A 2024 literature review from the Center for Assessment highlights analytical thinking as a core competency for future success, noting its role in supporting other 21st-century skills such as creativity, collaboration, and digital fluency. Similarly, a systematic review published in the International Journal of STEM Education emphasizes the need for early engagement with analytical and statistical thinking to prepare students for a data-rich society.
There is growing consensus among educators, researchers, and policy advocates that analytical literacy deserves a more central role in K–12 education. Organizations such as NWEA and Code.org have called for stronger integration of analytical and data literacy skills into curriculum and professional development efforts. However, without coordinated policy action, these efforts remain fragmented.
This memo builds on that emerging momentum. It argues that analytical literacy should be treated as a skill that underpins students’ ability to engage meaningfully with digital, data, and AI literacies. By elevating analytical literacy through standards, instruction, and investment, we can ensure that all students are prepared to participate, innovate, and thrive in a complex and rapidly changing world.
To understand why analytical literacy must be prioritized, we examine the current landscape of specialized literacies and the foundational skills they require.
Challenges and Opportunities
In today’s interconnected world, digital literacy, data literacy, and AI literacy are no longer optional; they are essential skill sets for civic participation, economic mobility, and ethical decision-making. These literacies enable students to navigate online environments, interpret complex datasets, and engage thoughtfully with emerging technologies.
- Digital literacy encompasses the ability to use technology effectively and critically, including evaluating online information, understanding digital safety, and engaging ethically in digital environments.
- Data literacy involves the capacity to understand, interpret, evaluate, and communicate data. This includes recognizing data sources, identifying patterns, and drawing informed conclusions.
- AI literacy entails understanding the basic concepts of artificial intelligence, its applications, ethical implications, and how to interact with AI systems responsibly.
Together, these literacies form a cognitive toolkit that empowers students to be not just consumers of information and technology, but thoughtful participants in civic and digital life.
While these literacies address specific domains, they all fundamentally rely on what should be called Analytical Literacy. Analytical literacy, at its core, involves the ability to:
- Ask insightful questions. Identifying the core issues and seeking relevant information.
- Evaluate information critically. Assessing the credibility, bias, and relevance of sources.
- Identify patterns and relationships. Recognizing connections and trends in complex information.
- Reason logically. Constructing sound arguments and drawing valid inferences.
- Solve problems effectively. Applying analytical skills to find solutions and make informed decisions.
Yet, without structured development of these foundational skills, students risk becoming passive consumers of technology rather than active, informed participants. This presents an urgent opportunity: by centering Analytical Literacy in standards and assessment, instruction, and professional learning, we can create enduring pathways for students to participate, innovate, and thrive in an increasingly data-driven world.
Examples of implementation include:
- In Standards and Assessment. States should revise academic standards to include grade-level expectations for analytical reasoning across disciplines. For example, middle school science standards might require students to construct evidence-based arguments using data, while high school civics assessments could include open-ended questions that ask students to evaluate competing claims in news media.
- In Instruction. Teachers should embed analytical skill development into daily practice through inquiry-based learning, Socratic seminars, or interdisciplinary projects. A math teacher could guide students in analyzing real-world datasets to identify trends and make predictions, while an English teacher might use argument mapping to help students deconstruct persuasive texts.
- In Professional Learning. Districts should offer workshops that train educators to use formative assessment strategies that surface student reasoning such as think-alouds, peer critiques, or performance tasks. Coaching cycles should focus on how to scaffold questioning techniques that push students beyond recall toward deeper analysis.
By embedding these practices systemically, we move from episodic exposure to analytical thinking toward a coherent, equitable framework that prepares all students for the demands of the digital age.
Addressing these gaps requires coordinated action across multiple levels of the education system. The following plan outlines targeted strategies for federal, state, and institutional leaders.
Plan of Action
To strengthen analytical literacy in K–12 education, we recommend targeted efforts from three federal offices, supported by state agencies, educational organizations, and teacher preparation programs.
Recommendation 1. Federal Offices
Federal agencies have the capacity to set national priorities, fund innovation, and coordinate cross-sector efforts. Their leadership is essential to catalyzing systemic change. For example:
White House Office of Science and Technology Policy (OSTP)
OSTP now chairs the newly established White House Task Force on Artificial Intelligence Education, per the April 2025 Executive Order on Advancing AI Education. This task force is charged with coordinating federal efforts to promote AI literacy and proficiency across the K–12 continuum. We recommend that OSTP:
- Expand the scope of the Task Force to explicitly include analytical literacy as a foundational competency for AI readiness.
- Ensure that public-private partnerships and instructional resources developed under the order emphasize reasoned decision-making as a core component, not just technical fluency.
- Use the Presidential Artificial Intelligence Challenge as a platform to showcase interdisciplinary student work that demonstrates analytical thinking applied to real-world AI problems.
This alignment would ensure that analytical literacy is not treated as an adjacent concern, but as a central pillar of the federal AI education strategy.
Institute of Education Sciences (IES)
IES should coordinate closely with the Task Force to support the Executive Order’s goals through a National Analytical Literacy Research Agenda. This agenda could:
- Fund studies that explore how analytical thinking supports AI literacy across grade levels.
- Evaluate the effectiveness of instructional models that integrate analytical reasoning into AI and computer science curricula.
- Develop scalable tools and assessments that measure students’ analytical readiness for AI-related learning pathways.
IES could also serve as a technical advisor to the Task Force, ensuring that its initiatives are grounded in evidence-based practice.
Office of Elementary and Secondary Education (OESE)
In light of the Executive Order’s directive for educator training and curriculum innovation, OESE should:
- Prioritize analytical literacy integration in discretionary grant programs that support AI education.
- Develop guidance for states on embedding analytical competencies into AI-related standards and instructional frameworks.
- Collaborate with the Task Force to ensure that professional development efforts include training on how to teach analytical thinking—not just how to use AI tools.
National Science Foundation (NSF)
The National Science Foundation plays a pivotal role in advancing STEM education through research, innovation, and capacity-building. To support the goals of the Executive Order and strengthen analytical literacy as a foundation for AI readiness, we recommend that NSF:
- Establish a dedicated grant program focused on developing and scaling instructional models that integrate analytical literacy into STEM and AI education. This could include interdisciplinary curricula, project-based learning frameworks, and performance-based assessments that emphasize reasoning, problem-solving, and data interpretation.
- Fund research-practice partnerships that explore how analytical thinking develops across grade levels and how it supports students’ engagement with AI concepts. These partnerships could include school districts, universities, and professional organizations working collaboratively to design and evaluate scalable models.
- Support educator capacity-building initiatives, such as fellowships or professional learning networks, that equip teachers to foster analytical literacy in STEM classrooms. This aligns with NSF’s recent Dear Colleague Letters on expanding K–12 resources for AI education.
- Invest in technology-enhanced learning tools that provide real-time feedback on student reasoning and support formative assessment of analytical skills. These tools could be piloted in diverse school settings to ensure equity and scalability.
By positioning analytical literacy as a research and innovation priority, NSF can help ensure that K–12 students are not only technically proficient but cognitively prepared to engage with emerging technologies in thoughtful, ethical, and creative ways.
Note: Given the evolving organizational landscape within the U.S. Department of Education—including the elimination of offices like the Office of Educational Technology—it is critical to identify stable federal anchors. The agencies named above have longstanding mandates tied to research, policy innovation, and K–12 support, making them well-positioned to advance this work.
Recommendation 2. State Education Policymakers
While federal agencies can provide vision and resources, states hold the levers of implementation. Their role is critical in translating policy into classroom practice.
Each state has the authority—and responsibility—to shape standards, assessments, and professional development systems that reflect local priorities and student needs. To advance analytical literacy meaningfully, we recommend the following actions:
Elevate Analytical Literacy in Academic Standards
States should conduct curriculum audits to identify where analytical skills are currently embedded—and where gaps exist. This process should inform the revision of academic standards across disciplines, ensuring that analytical literacy is treated as a foundational competency, not an ancillary skill. California’s ELA/ELD Framework, for example, emphasizes inquiry, argumentation, and evidence-based reasoning across subjects—not just in English language arts. Similarly, the History–Social Science Framework promotes critical thinking and source evaluation as core civic skills.
States can build on these models by:
- Developing cross-disciplinary analytical literacy frameworks that guide integration from elementary through high school.
- Embedding analytical competencies into STEM, humanities, and career technical education standards.
- Aligning revisions with the goals of the Executive Order, which calls for foundational skill-building to support digital and AI literacy.
Invest in Professional Development and Instructional Capacity
States should fund and scale professional learning ecosystems that equip educators to teach analytical thinking explicitly. This includes:
- Training on inquiry-based learning, Socratic dialogue, and formative assessment strategies that surface student reasoning.
- Development of microcredential pathways for educators to demonstrate expertise in fostering analytical literacy across content areas.
- Support for instructional coaches and teacher leaders to model analytical practices and mentor peers.
California’s professional learning modules aligned to the Common Core State Standards and ELA/ELD frameworks offer a useful starting point for designing scalable, standards-aligned training.
Redesign Student Assessments to Capture Deeper Thinking
States should move beyond traditional standardized tests and invest in assessment systems that measure analytical reasoning authentically. States can catalyze this innovation by issuing targeted Requests for Proposals (RFPs) that invite districts, assessment developers, and research-practice partnerships to design and pilot new models of assessment aligned to analytical literacy. These RFPs should prioritize:
- Performance tasks that require students to analyze real-world problems and propose solutions.
- Portfolio assessments that document students’ growth in reasoning and problem-solving over time.
- Open-ended questions that ask students to evaluate claims, synthesize evidence, and construct logical arguments.
- Scalable models that can inform statewide systems over time.
By using the RFP process strategically, states can surface promising practices, support local innovation, and build a portfolio of assessment approaches that reflect the complexity of students’ analytical capabilities.
Recommendation 3. Professional Education Organizations
Beyond government, professional education organizations shape the field through resources, advocacy, and collaboration. They are key partners in scaling analytical literacy.
Professional education organizations play a vital role in shaping the landscape of K–12 education. These groups—ranging from subject-specific associations like the National Council of Teachers of English (NCTE) and the National Science Teaching Association (NSTA), to broader coalitions like ASCD and the National Education Association (NEA)—serve as hubs for professional learning, policy advocacy, resource development, and field-wide collaboration. They influence classroom practice, inform state and federal policy, and support educators through research-based guidance and community-building.
Because these organizations operate at the intersection of practice, policy, and research, they are uniquely positioned to champion analytical literacy as a foundational skill across disciplines. To advance this work, we recommend the following actions:
- Develop Flexible, Discipline-Specific Resources. Create adaptable instructional materials—such as lesson plans, assessment templates, and classroom protocols—that help educators integrate analytical thinking into diverse subject areas. For example, NCTE could develop resources that support argument mapping in English classrooms, while NSTA might offer tools for teaching evidence-based reasoning in science labs.
- Advocate for Analytical Literacy as a National Priority. Publish position papers, host public events, and build strategic partnerships that elevate analytical literacy as essential to digital and civic readiness. Organizations can align their advocacy with the federal directive for AI education, emphasizing the role of analytical thinking in preparing students for ethical and informed engagement with emerging technologies.
- Foster Cross-Sector Collaboration. Convene working groups, research-practice partnerships, and educator networks to share best practices and scale effective models. For example, AERA could facilitate studies on how analytical literacy develops across grade levels, while CoSN might explore how digital tools can support real-time feedback on student reasoning.
By leveraging their convening power, subject-matter expertise, and national reach, professional education organizations can accelerate the adoption of analytical literacy and ensure it is embedded meaningfully into the fabric of K–12 education.
Recommendation 4. Teacher Preparation Programs
To sustain long-term change, we must begin with those entering the profession. Teacher preparation programs are the foundation for instructional capacity and must evolve to meet this moment.
Teacher preparation programs (TPPs) are the gateway to the teaching profession. Housed in colleges, universities, and alternative certification pathways, these programs are responsible for equipping future educators with the knowledge, skills, and dispositions needed to support student learning. Their influence is profound: research consistently shows that well-prepared teachers are the most important in-school factor for student success.
Yet many TPPs face persistent challenges. Too often, graduates report feeling underprepared for the realities of diverse, data-rich classrooms. Coursework may emphasize theory over practice, and clinical experiences vary widely in quality. Critically, few programs offer explicit training in how to foster analytical literacy—despite its centrality to digital, data, and AI readiness. In response to national calls for foundational skill-building and educator capacity, TPPs must evolve to meet this moment.
While federal funding for teacher preparation has become more limited, states are stepping in through innovative models like teacher residencies, registered apprenticeships, and microcredentialing pathways. These initiatives are often supported by modified use of Title II funds, state general funds, and workforce development grants. To accelerate this momentum, federal programs like Teacher Quality Partnership (TQP) grants and Supporting Effective Educator Development (SEED) grants could be adapted to prioritize analytical literacy, while states can issue targeted RFPs to redesign coursework, practicum experiences, and capstone projects that center reasoning, problem-solving, and ethical decision-making. To ensure that new teachers are ready to cultivate analytical thinking in their students, we recommend the following actions:
- Integrate Analytical Pedagogy into Coursework and Practicum. Embed instructional strategies that center analytical literacy into pre-service coursework. This includes training in inquiry-based learning, argumentation, and data interpretation. Practicum experiences should reinforce these strategies through guided observation and practice in real classrooms.
- Ensure Faculty Model Analytical Thinking. Faculty must demonstrate analytical reasoning in their own teaching—whether through modeling how to deconstruct complex texts, facilitating structured debates, or using data to inform instructional decisions. This modeling helps pre-service teachers internalize analytical habits of mind.
- Strengthen Field Placements for Analytical Instruction. Partner with districts to place candidates in classrooms where analytical literacy is actively taught. Provide structured mentorship from veteran teachers who use questioning techniques, performance tasks, and formative assessments to surface student reasoning.
- Develop Capstone Projects Focused on Analytical Literacy. Require candidates to complete a culminating project that demonstrates their ability to design, implement, and assess instruction that builds students’ analytical skills. These projects could be aligned with state standards and local district priorities.
- Align Program Outcomes with Emerging Policy Priorities. Ensure that program goals reflect the competencies outlined in federal initiatives like the AI Education Executive Order. This includes preparing teachers to support foundational literacies that enable students to engage critically with digital and AI technologies.
Together, these actions form a coherent strategy for embedding analytical literacy across the K–12 continuum. But success depends on bold leadership and sustained commitment. By reimagining teacher preparation through the lens of analytical literacy, we can ensure that every new educator enters the classroom equipped to foster deep thinking, ethical reasoning, and problem-solving—skills that students need to thrive in a complex and rapidly changing world.
Conclusion
Analytical literacy is not a nice-to-have; it is a prerequisite for the specialized proficiencies students need in today’s complex world. By embedding critical thinking, logical reasoning, and problem-solving across the K–12 continuum, we empower students to meet challenges with curiosity and discernment. We urge policymakers, educators, and institutions to act boldly: establish analytical literacy as a cornerstone of 21st-century education, and co-create a future where every student has the analytical tools essential for meaningful participation, innovative thinking, and long-term success in the digital age and beyond.
Behavioral Economics Megastudies are Necessary to Make America Healthy
Through partnership with the Doris Duke Foundation, FAS is advancing a vision for healthcare innovation that centers safety, equity, and effectiveness in artificial intelligence. Inspired by work from the Social Science Research Council (SSRC) and Arizona State University (ASU) symposiums, this memo explores new research models, such as large-scale behavioral “megastudies,” and how they can transform our understanding of what drives healthier choices for longer lives. Through policy entrepreneurship, FAS engages with key actors in government, research, academia, and industry. These recommendations align with ongoing efforts to integrate human-centered design, data interoperability, and evidence-based decision-making into health innovation.
By shifting funding from small, underpowered randomized controlled trials to large field experiments in which many different treatments are tested synchronously in a large population using the same objective measure of success, so-called megastudies can start to drive people toward healthier lifestyles. Megastudies will let us determine more quickly what works, in whom, and when for health-related behavioral interventions, saving tremendous sums over traditional randomized controlled trial (RCT) approaches because of their scalability. But doing so requires the government to back the establishment of a research platform that sits on top of a large, diverse cohort of people with deep demographic data.
Challenge and Opportunity
According to the National Research Council, almost half of premature deaths (< 86 years of age) are caused by behavioral factors. Poor diet, high blood pressure, sedentary lifestyle, obesity, and tobacco use are the primary causes of early death for most of these people. Yet, despite studying these factors for decades, we know surprisingly little about what can be done to turn these unhealthy behaviors into healthier ones. This has not been due to a lack of effort. Thousands of randomized controlled trials intended to uncover messaging and incentives that can steer people toward healthier behaviors have failed to yield impactful steps that can be broadly deployed to drive behavioral change across our diverse population. To be sure, changing human behavior through such mechanisms is controversial and difficult. Nonetheless, studying how to bend behavior should be a national imperative if we are to extend healthspan and address the declining lifespan of Americans at scale.
Limitations of RCTs
Traditional randomized controlled trials (RCTs), which usually test a single intervention, are often underpowered, expensive, and short-lived, limiting their utility even though RCTs remain the gold standard for validating behavioral economics studies. In addition, because the biological and cultural diversity of our population severely constrains study design, RCTs are often conducted on narrow, well-defined populations. What works for a 24-year-old female African American attorney in Los Angeles may not be effective for a 68-year-old male white fisherman living in Mississippi. Overcoming such noise means either limiting the population under study through demographics, or deploying raw numbers of participants large enough to allow post-study stratification and hypothesis development. It also means that health data alone is not enough. Such studies require deep personal demographic data combined with health data and data from wearables. In essence, we need a very clear picture of participants’ lives to properly identify interventions that work and apply them appropriately, post-study, to broader populations. Similarly, testing a single intervention means you cannot be sure it is the most cost-effective or impactful intervention for a desired outcome, further limiting the ability to deploy RCTs at scale. Finally, the data sometimes imply spurious associations. Preregistration of endpoints, interventions, and analyses will therefore make for solid evidence development, even if the most tantalizing outcomes come from sifting through the data later to develop new hypotheses that can be further tested.
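To see why single-intervention RCTs chasing modest behavioral effects get expensive, consider a back-of-envelope sample-size calculation using the standard normal-approximation formula for comparing two proportions. The effect sizes below are illustrative, not drawn from any particular study.

```python
# Per-arm sample size for a two-proportion comparison
# (two-sided alpha = 0.05, power = 0.80).
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control: float, p_treatment: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return ceil((z_a + z_b) ** 2 * var / (p_control - p_treatment) ** 2)

# Detecting a 2-point lift (18% -> 20%) in, say, weekly exercise:
print(n_per_arm(0.18, 0.20))  # roughly 6,000 participants per arm
```

Recruiting thousands of participants per arm for every candidate intervention, one trial at a time, is exactly the cost structure megastudies amortize by sharing one standing cohort across many arms.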
Value of Megastudies
Professors Angela Duckworth and Katherine Milkman, at the University of Pennsylvania, have proposed an expansion of the use of megastudies to gain deeper behavioral insights from larger populations. In essence, megastudies are “massive field experiments in which many different treatments are tested synchronously in a large sample using a common objective outcome.” This sort of paradigm allows independent researchers to develop interventions to test in parallel against other teams. Participants are randomly assigned across a large cohort to determine the most impactful and cost-effective interventions. In essence, the teams are competing against each other to develop the most effective and practical interventions on the same population for the same measurable outcome.
Using this paradigm, we can rapidly assess interventions and accelerate scientific progress by saving time and money, all while making more appropriate comparisons to bend behavior toward healthier lifestyles. Because of the large sample sizes involved and deep knowledge of participants’ demographics, megastudies can absorb the noise inherent in a broad population, noise that would normally necessitate narrowing participant demographics. Further, post-study analysis allows for rich hypothesis generation about which interventions are likely to work in narrower populations. This enables messaging and incentives tailored to the individual. A centralized entity managing the population data reduces costs and makes it easier to try a more diverse set of risk-tolerant interventions. A centralized entity also opens the door for smaller labs to participate in studies. Finally, participants in these megastudies are normally part of ongoing health interactions through a large cohort study or directly through care providers, so they benefit directly from participation and from tailored messages and incentives. Additionally, dataset scale allows for longer-term study designs because of the reduction in overall costs, letting study designers determine whether their interventions hold up over time or whether their impact wanes and needs adjusting.
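A toy sketch of the mechanics just described: one standing cohort, many arms contributed by independent teams, one common objective outcome. Arm names and the outcome measure are invented for illustration.

```python
# Random assignment across parallel treatment arms, scored on one shared outcome.
import random
from collections import defaultdict

ARMS = ["control", "sms_reminder", "cash_bonus",
        "social_comparison", "streak_badge"]  # each from an independent team

def assign(participants: list[str], seed: int = 42) -> dict[str, str]:
    """Randomly place every cohort member into exactly one arm."""
    rng = random.Random(seed)
    return {pid: rng.choice(ARMS) for pid in participants}

def mean_outcome_by_arm(assignments: dict[str, str],
                        outcome: dict[str, float]) -> dict[str, float]:
    """Average the common outcome (e.g., weekly gym visits) per arm, making
    every intervention directly comparable to every other."""
    totals, counts = defaultdict(float), defaultdict(int)
    for pid, arm in assignments.items():
        totals[arm] += outcome[pid]
        counts[arm] += 1
    return {arm: totals[arm] / counts[arm] for arm in totals}

cohort = [f"p{i}" for i in range(100_000)]
arms = assign(cohort)
visits = {pid: random.random() * 3 for pid in cohort}  # placeholder data
print(mean_outcome_by_arm(arms, visits))
```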
Funding and Operational Challenges
But this kind of “apples to apples” comparison has serious drawbacks that have prevented megastudies from being used routinely in science despite their inherent advantages. First, megastudies require access to a large standing cohort of study participants who will remain in the cohort long term. Ideally, the organizer of such studies should be vested in positive outcomes; here, large insurance companies are poor candidates for the role. The organizer also has to be efficient, so government-run cohorts, which tend to be highly bureaucratic, expensive, and inefficient, are not ideal. Not everything need go through a committee. (Looking at you, All of Us at NIH and Million Veteran Program at the VA.)
Companies like third-party administrators of healthcare plans might be an ideal organizing body, but so could companies that aim to lower healthcare costs as a means of generating revenue through cost savings. These companies tend to have access to much deeper data than traditional cohorts run by government and academic institutions, and could leverage that data to better stratify participants and results. However, if the goal of government and philanthropic research efforts is to improve outcomes, then they should open the aperture on available funds to stand up a persistent cohort that can be used by many researchers, rather than continuing the one-off paradigm, which in the end is far more expensive and inefficient. Finally, we do not imply that all intervention types should be run through megastudies. They are an essential, albeit underutilized, tool in the arsenal, but not a silver bullet for testing behavioral interventions.
Fear of Unauthorized Data Access or Misuse
There is substantial risk in bringing together such deep personal data on a large population. While companies compile deep data all the time, doing so for research purposes is unusual and will certainly raise eyebrows, as has been the case for large studies like the aforementioned All of Us and Million Veteran Program.
Patients fear misuse of their data, inaccurate recommendations, and biased algorithms—especially among historically marginalized populations. Patients must trust that their data is being used for good, not for marketing purposes or for setting their insurance rates.
Need for Data Interoperability
Many healthcare and community systems operate in data silos, and data integration is a perennial challenge in healthcare. Patient-generated data from wearables, apps, or remote sensors often do not integrate with electronic health record data or demographic data gathered elsewhere, limiting the precision and personalization of behavior-change interventions. This lack of interoperability undermines both provider engagement and user benefit. Addressing data fragmentation and poor usability requires designing cloud-based data connectors and integrations, creating shared feedback dashboards linking self-generated data to provider workflows, and creating and promoting policies that move toward interoperability. In short, given the constantly evolving data integration challenge and the lack of real standards for data formats and integration requirements, a dedicated and persistent effort will have to be made to ensure that data can be seamlessly integrated if we are to draw value from combining data from many sources for each patient.
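As a sketch of what such a connector might do, the snippet below normalizes a hypothetical wearable record and a hypothetical EHR observation into one shared shape. All source field names are invented stand-ins for vendor-specific formats.

```python
# Two invented source formats mapped into one common record shape, so
# intervention teams query a single schema regardless of origin.
from datetime import datetime, timezone

COMMON_FIELDS = {"participant_id", "metric", "value", "unit", "recorded_at"}

def from_wearable(rec: dict) -> dict:
    return {"participant_id": rec["user"], "metric": "steps",
            "value": float(rec["step_count"]), "unit": "count",
            "recorded_at": datetime.fromtimestamp(rec["ts"],
                                                  tz=timezone.utc).isoformat()}

def from_ehr(rec: dict) -> dict:
    return {"participant_id": rec["patient_id"], "metric": rec["observation"],
            "value": float(rec["result"]), "unit": rec["unit"],
            "recorded_at": rec["observed"]}

unified = [
    from_wearable({"user": "p17", "step_count": 8423, "ts": 1_700_000_000}),
    from_ehr({"patient_id": "p17", "observation": "systolic_bp",
              "result": 128, "unit": "mmHg", "observed": "2024-11-14T09:30:00"}),
]
assert all(set(r) == COMMON_FIELDS for r in unified)
print(unified)
```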
Additional Barriers
One of the largest barriers to using behavioral economics is that some rural, tribal, low-income, and older adults face access barriers, including device affordability, gaps in broadband coverage, and other usability and digital literacy limitations. Megastudies are not generally designed to bridge this gap, which significantly limits their applicability for these populations. Complicating matters, these populations also happen to have significant and specific health challenges unique to their cohorts. As behavioral economic levers are developed, these communities are in danger of being left behind, further exacerbating health disparities. Nonetheless, insight into how to reach these populations can be gained from individuals within them who do have access to technology platforms. Communications will have to be tailored accordingly.
External motivators have consistently been shown to be essential drivers of behavioral change. But motivation to sustain a behavior change and continue using technology often wanes over time, and embedding intrinsic-value rewards and workplace incentives may not be enough. External motivations will therefore likely have to be adjusted over time in a dynamic system so that adjustments to an individual’s incentives remain rooted in evidence. Indeed, studying the dynamics of behavioral change will be necessary, given the likelihood that static messaging loses influence. Reward systems that tie into personal values and workplace wellness programs, paired with social incentives and tailored nudges, may sustain engagement; a minimal sketch of such dynamic adjustment follows.
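One way to operationalize dynamically adjusted incentives is adaptive experimentation. The toy sketch below uses an epsilon-greedy bandit to shift participants toward whichever incentive currently performs best as one incentive’s effect wanes. The incentive names, engagement rates, and weekly cadence are hypothetical; a real platform would likely use a windowed or discounted estimator to track waning effects more responsively.

```python
# Toy sketch of dynamically adjusting incentives as their effect wanes:
# an epsilon-greedy bandit reallocates participants toward whichever
# incentive currently works best. All names and rates are hypothetical.
import random

incentives = ["cash_reward", "social_leaderboard", "personal_goal_message"]
stats = {i: {"shown": 0, "engaged": 0} for i in incentives}

def choose(epsilon=0.1):
    if random.random() < epsilon or all(s["shown"] == 0 for s in stats.values()):
        return random.choice(incentives)            # explore
    return max(incentives,                          # exploit current best
               key=lambda i: stats[i]["engaged"] / max(stats[i]["shown"], 1))

def record(incentive, engaged):
    stats[incentive]["shown"] += 1
    stats[incentive]["engaged"] += int(engaged)

# Simulated year in which the cash incentive's effect gradually wanes.
true_rate = {"cash_reward": 0.30, "social_leaderboard": 0.20,
             "personal_goal_message": 0.25}
for week in range(52):
    true_rate["cash_reward"] = max(0.10, true_rate["cash_reward"] - 0.004)
    for _ in range(200):  # participants nudged this week
        arm = choose()
        record(arm, random.random() < true_rate[arm])

for i in incentives:
    s = stats[i]
    print(i, s["shown"], f'{s["engaged"] / max(s["shown"], 1):.2%}')
```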
Plan of Action
By enabling a private sector entity to create a research platform that combines patient data with deep demographic data under an ethical framework for access and use, we can make megastudies possible. This would allow the rapid testing of behavioral interventions that steer people towards healthier lifestyles, saving money, accelerating progress, and building a better understanding of what works, in whom, and when for changing human behavior.
This could have been done through the All of Us program, the Million Veterans Program, or another large cohort study, but neither of the first two has the deep demographic and lifestyle data required to stratify their populations, and both are mired in the bureaucratic lethargy common to large-scale government programs. Health insurance companies and third-party administrators of health insurance can gather such data, be nimbler, create a platform for communicating directly with patients, and coordinate with their clinical providers. But one could argue that neither entity has a real incentive to bend behavior and encourage healthy lifestyles. Simply put, that is not their business.
Recommendation 1. Issue a directive to agencies to invest in the development of a megastudy platform for health behavioral economics studies.
The White House or the HHS Secretary should direct the NIH or ARPA-H to develop a plan for funding the creation of a behavioral economics megastudy platform. The directive should include details on the ethical and technical framework requirements as well as direction for developing oversight of the platform once it is created. Applicants for the contract to create the platform should be required to include a sustainability plan.
Recommendation 2. Government should fund the establishment of a megastudy platform.
ARPA-H and/or DARPA should develop a program to establish a broad research platform in the private sector on which megastudies can be conducted. Research teams could then test dozens of behavioral interventions on populations in parallel while accessing patient data. This platform should have required ethical rules and be grounded in data sovereignty, allowing patients to opt out of participation and of having their data shared.
Data sovereignty is one solution to the trust challenge. Simply put, data sovereignty means that patients have access to the data on themselves (without having to pay a fee that physicians’ offices now routinely charge for access) and control over who sees and keeps that data. So, if at any time, a participant changes their mind, they can get their data and force anyone in possession of that data to delete it (with notable exceptions, like their healthcare providers). Patients would have ultimate control of their data in a ‘trust-less’ way that they never need to surrender, going well past the rather weak privacy provisions of HIPAA, so there is no question that they are in charge.
We suggest that blockchain and token systems would be appropriate for data transfer, as would holding data in a federated network to limit the danger of a breach.
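As one illustration of the data-sovereignty idea, the sketch below implements a tamper-evident, hash-chained consent log in which each entry records a grant or revocation of access, and altering any past entry breaks every later hash. This is a toy example under our own assumptions, not a full blockchain design; a production system would add cryptographic signatures, replication, and token mechanics.

```python
# Minimal sketch of the data-sovereignty idea: a tamper-evident, hash-chained
# consent log. The latest grant/revoke entry for a patient-grantee pair wins.
import hashlib, json, time

class ConsentLog:
    def __init__(self):
        self.entries = []

    def append(self, patient_id, grantee, action):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"patient": patient_id, "grantee": grantee,
                  "action": action,  # "grant" or "revoke"
                  "ts": time.time(), "prev": prev_hash}
        # Hash the record (which embeds the previous hash), chaining entries.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def has_consent(self, patient_id, grantee):
        """Latest grant/revoke wins; default is no access."""
        allowed = False
        for e in self.entries:
            if e["patient"] == patient_id and e["grantee"] == grantee:
                allowed = e["action"] == "grant"
        return allowed

log = ConsentLog()
log.append("patient-123", "megastudy-platform", "grant")
log.append("patient-123", "megastudy-platform", "revoke")  # patient opts out
print(log.has_consent("patient-123", "megastudy-platform"))  # False
```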
Recommendation 3. The NIH should fund behavioral economics megastudies using the platform.
Once the megastudy platform(s) are established, the NIH should make dedicated funds available for researchers to test behavioral health interventions on the platform, decreasing costs, increasing study longevity, and improving the speed and efficiency of behavioral economics research.
Conclusion
Randomized controlled trials have been the gold standard for behavioral research but are not well suited for health behavioral interventions on a broad and diverse population because of the required number of participants, typically narrow populations, recruiting challenges, and cost. Yet there is an urgent need to encourage and incentivize healthy behaviors to make Americans healthier. Simply put, we cannot begin to extend healthspan and lifespan unless we change behaviors towards healthier choices and habits. When the U.S. government funds the establishment of a platform for testing hundreds of behavioral interventions on a large, diverse population, we will start to better understand which interventions have an efficient and lasting impact on health behavior. Doing so requires private sector cooperation and strict ethical rules to ensure public trust.
This memo was produced as part of Strengthening Pathways to Disease Prevention and Improved Health Outcomes.
Making Healthcare AI Human-Centered through the Requirement of Clinician Input
Through partnership with the Doris Duke Foundation, FAS is advancing a vision for healthcare innovation that centers safety, equity, and effectiveness in artificial intelligence. Informed by the NYU Langone Health symposium on transforming health systems into learning health systems, FAS seeks to ensure that AI tools are developed, deployed, and evaluated in ways that reflect real-world clinical practice. FAS is leveraging its role in policy entrepreneurship to promote responsible innovation by engaging with key actors in government, research, and software development. These recommendations align with emerging efforts across health systems to integrate human-centered AI and evidence-based decision-making into digital transformation. By shaping AI grant requirements and post-market evaluation standards, these ideas aim to accelerate safe, equitable implementation while supporting ongoing learning and improvement.
The United States must ensure AI improves healthcare while safeguarding patient safety and clinical expertise. There are three priority needs:
- Embedding clinician involvement in the development and testing of AI tools
- Using representative data and promoting human-centered design
- Maintaining continuous oversight through post-market evaluation and outcomes-based contracting
This memo examines the challenges and opportunities of integrating AI tools into healthcare, emphasizing that human-centered design must ensure these technologies are tailored to real-world clinical environments. As AI adoption grows in healthcare, clinician feedback must be embedded into federal grant requirements for AI development so that these systems are effective and aligned with real-world needs. Embedding clinician feedback into grant requirements and requiring the use of representative data will promote safety, accuracy, and equity in healthcare tools. In addition, regular updates to these tools based on evolving clinical practices and patient populations must be part of the development lifecycle to maintain long-term reliability, and continuous post-market surveillance is necessary to ensure the tools remain accurate and equitable. By taking these steps, healthcare systems can harness the full potential of AI while safeguarding patient safety and clinician expertise. Federal agencies such as the Office of the National Coordinator for Health Information Technology (ONC) and the Food and Drug Administration (FDA) can incentivize clinician involvement through outcomes-based contracting approaches that link funding to measurable improvements in patient care. This strategy ensures that grant recipients embed clinician expertise at key stages of development and testing, ultimately aligning incentives with real-world health outcomes.
Challenge and Opportunity
AI tools such as predictive triage classifiers and large language models (LLMs) have the potential to improve care delivery. However, integrating these tools effectively into daily clinical workflows is difficult without meaningful clinician involvement. As just one example, AI tools used in chronic illness triage can be particularly useful in prioritizing patients by the severity of their condition, which can lead to more timely care. But without direct involvement from clinicians in validating, interpreting, and guiding AI recommendations, these tools can suffer from poor usability and limited real-world effectiveness. Even highly accurate tools become irrelevant if clinicians do not adopt and engage with them, reducing the positive impact they could have on patient outcomes.
Mysterious Inner Workings
The “black box” nature of AI has fueled skepticism among healthcare providers and undermined trust among patients. When AI systems lack clear and interpretable explanations, clinicians are more likely to avoid or distrust them, a response known as algorithm aversion: clinicians who see a tool make errors lose trust in it and become less likely to use it again, even if the tool is usually accurate. Designing AI with human-centered principles, particularly offering clinicians a role where they can validate, interpret, and guide AI recommendations, will help build trust and ensure decisions remain grounded in clinical expertise. A key approach to increasing trust and usability is institutionalizing clinician engagement early in the development process. By involving clinicians during the development and testing phases, AI developers can ensure the tools fit seamlessly into clinical workflows. This will also help mitigate concerns about a tool’s real-world effectiveness, as clinicians are more likely to adopt tools they feel confident in. Without this collaborative approach, AI tools risk being sidelined or misused, preventing health systems from becoming genuinely adaptive and learning oriented.
Lack of Interoperability
A significant challenge in deploying AI tools across healthcare systems is interoperability. Most patients receive care across multiple providers and healthcare settings, so AI tools must integrate seamlessly with electronic health records (EHRs) and other clinical systems. Without this integration, tools can lose their clinical relevance, their effectiveness, and their ability to be adopted at scale, and the lack of connectivity can lead to inefficiencies, duplicate testing, and other harmful errors. One way to address this is through outcomes-based contracting (OBC), discussed shortly.
Trust in AI and Skill Erosion
Beyond trust and usability, there are broader risks associated with sidelining clinicians during AI integration. Using AI tools without clinician input also presents the risk of clinician deskilling, in which clinicians’ skills and decision-making abilities erode over time through reliance on AI tools. This skill erosion leads to a decline in judgement in situations where AI may not be readily available or suitable. Recent evidence from the ACCEPT trial shows that endoscopists’ performance dropped in non-AI settings after months of AI-assisted procedures, a troubling phenomenon that we should aim to prevent. AI-induced skill erosion also raises ethical concerns, particularly in complex environments where over-reliance on AI could erode clinical judgement and autonomy. If clinicians become too dependent on automated outputs, their ability to make critical decisions may be compromised, potentially impacting patient safety.
Embedded Biases
In addition to the erosion of human skills, AI systems risk embedding biases if trained on unrepresentative data, leading to unfair or inaccurate outcomes across different patient groups. AI tools may also produce errors that appear plausible, such as generating nonexistent terms, posing serious safety concerns, especially when clinicians don’t catch those mistakes. A systematic review of AI tools found that only 22% of studies involved clinicians throughout the development phase, and this lack of early clinician involvement has contributed to usability and integration issues across AI healthcare tools.
All of these issues underscore how critical clinician involvement is in the development of AI tools to ensure they are usable, effective, and safe. Clinician involvement should include defining relevant clinical tasks, evaluating interpretability of the system, validating performance across diverse patient groups, and setting standards for handoff between AI and clinician decision-making. Therefore, funding agencies should require AI developers to incorporate representative data and meaningful clinician involvement in order to mitigate these risks. Recognizing these challenges, it’s crucial to understand that implementing and maintaining AI requires continual human oversight and substantial infrastructure. Many health systems find this infrastructure too resource-intensive to properly sustain. Given the complexity of these challenges, without adequate governance, transparency, clinician training, and ethical safeguards, AI may hinder rather than help the transition to an enhanced learning health system.
Outcomes-Based Contracting (OBC)
To ensure that AI tools deliver on their promise, the federal contracting process should reinforce clinician involvement through measurable incentives. Outcomes-based contracting (OBC), a model in which payments or grants are tied to demonstrated improvements in patient outcomes, can be a powerful tool. OBC is not only a financing mechanism but a lever to institutionalize clinician engagement: by tying funding to real-world clinical impact, it compels developers to design tools that clinicians will use and find value in, ultimately increasing usability, trust, and adoption. The model rewards impact rather than merely building tools or producing novel methods.
Leveraging outcomes-based models could also help institutionalize clinician engagement across the funding lifecycle, ensuring developers demonstrate explicit plans for clinician participation, through staff integration or formal consultation, as a prerequisite for funding. Although AI tools may be safe and effective when first deployed, performance can change over time as patient populations shift, clinical practice evolves, and software is updated; this is known as model degradation. A crucial component of using these AI tools is therefore regular surveillance to ensure they remain accurate, responsive to real-world use by clinicians and patients, and equitable. While clinician involvement is essential, it is important to acknowledge that including clinicians in every stage of AI tool development, testing, deployment, and evaluation may not be realistic given the significant time cost for clinicians, their competing clinical responsibilities, and their limited familiarity with AI technology. Even so, there are ways to engage clinicians effectively at key decision points during development and testing without requiring their presence at every stage.
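To make the surveillance requirement concrete, the sketch below shows one simple way a deployed tool’s performance could be watched for degradation: comparing a rolling window of real-world accuracy against the tool’s validation baseline and flagging meaningful drops for human review. The class, thresholds, and the notify_safety_team hook are hypothetical, not a description of any existing FDA or vendor mechanism.

```python
# Illustrative sketch of post-deployment degradation monitoring: compare a
# rolling window of real-world accuracy against the tool's validation
# baseline and flag drops for human review. Thresholds are hypothetical.
from collections import deque

class DegradationMonitor:
    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window)   # keeps only recent outcomes
        self.tolerance = tolerance

    def record(self, prediction, clinician_confirmed_label):
        self.window.append(prediction == clinician_confirmed_label)

    def check(self):
        if len(self.window) < self.window.maxlen:
            return None                      # not enough data yet
        rolling = sum(self.window) / len(self.window)
        if rolling < self.baseline - self.tolerance:
            return (f"ALERT: rolling accuracy {rolling:.1%} is more than "
                    f"{self.tolerance:.0%} below baseline {self.baseline:.1%}")
        return None

monitor = DegradationMonitor(baseline_accuracy=0.91)
# In practice, labels would come from the clinician feedback loop, e.g.:
# monitor.record(model_output, chart_reviewed_diagnosis)
# if (msg := monitor.check()): notify_safety_team(msg)  # hypothetical hook
```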
Urgency and Federal Momentum
The major challenges of integrating AI into clinical workflows, poor usability, algorithm aversion, clinician skepticism, and the potential for embedded biases, highlight the need for thoughtful deployment. These challenges have taken on new urgency amid recent healthcare shifts, particularly the rapid acceleration of AI adoption after the COVID-19 pandemic, which drove breakthroughs in telemedicine, diagnostics, and pharmaceutical innovation that simply weren’t possible before. With the rapid pace of integration, however, comes the risk of unregulated deployment and embedded safety vulnerabilities. Federal momentum supports this growth, with directives emphasizing AI safety, transparency, and responsible deployment, including the authorization of over 1,200 AI-powered medical devices, used primarily in radiology, cardiology, and pathology, all complex domains. Yet without clinician involvement and representative training data, algorithms for such devices may remain biased and fail to integrate smoothly into care delivery. This disconnect could delay adoption, reduce clinical impact, and increase the risk of patient harm. It is therefore imperative that we set standards, embed clinician expertise in AI design, and ensure safe, effective deployment in care delivery.
Furthermore, this moment of federal momentum aligns with broader policy shifts. As highlighted by a recent CMS announcement, the White House and national health agencies are working with technology leaders to create a patient-centric healthcare ecosystem. This includes a push for interoperability, clinical collaboration, and outcomes-driven innovation, all of which bolster the case for clinician engagement being woven into the very fabric of AI development. AI can potentially improve patient outcomes dramatically, as well as increase cost-efficiency in healthcare. Yet, without structured safeguards, these tools may deepen existing health inequities. However, with proper input from clinicians, these tools can reduce diagnostic errors, improve accuracy in high-stakes cases such as cancer detection, and streamline workflows, ultimately saving lives and reducing unnecessary costs.
As AI systems become further embedded into clinical practice, they will help to shape standards of care, influencing clinical guidelines and decision-making pathways. Furthermore, interoperability is essential when using these tools because most patients receive care from multiple providers across systems. Therefore, AI tools must be designed to communicate and integrate data from various sources, including electronic health records (EHR), lab databases, imaging systems, and more. Enabling shared access can enhance the coordination of care and reduce redundant testing or conflicting diagnoses. To ensure this functionality, clinicians must help design AI tools that account for real-world care delivery across what is currently a fragmented system.
Reshaping Healthcare AI
These challenges and risks culminate in a moment of opportunity to reshape the way AI supports healthcare delivery and to ensure its design is trustworthy and focused on outcomes. To fully realize this opportunity, clinicians must be embedded in the various stages of AI technology development to improve its safety, usability, and adoption in healthcare settings. While some developers do involve clinicians during development, the practice is not standard, and bridging this gap requires targeted action to ensure clinical expertise is consistently incorporated from the start. One way to achieve this is for federal agencies to require AI developers to integrate representative data and clinician feedback into their tools as a condition of funding eligibility. This approach would improve tool usability and enhance contextual relevance across diverse patient populations and practice environments. It would also address current shortcomings: evidence shows that some AI tools are poorly integrated into clinical workflows, which not only reduces their impact but also undermines broader adoption and clinician confidence in the systems. Moreover, creating a clinician feedback loop for these systems will reduce the clerical burden many clinicians experience and allow them to spend more dedicated time with their patients. By incorporating human-centered design and drawing on clinician expertise during development and testing, we can mitigate issues before they arise, building trust among clinicians and improving patient safety by reducing errors and misinterpreted diagnoses. With strong requirements and funding standards in place as safeguards, AI can transform health systems into adaptable learning environments that produce evidence and deliver equitable, higher-quality care. This is a pivotal opportunity to show how innovation can support human expertise and strengthen trust in healthcare.
AI has the potential to dramatically improve patient outcomes and healthcare cost-efficiency, particularly in high-stakes diagnostic and treatment decisions like oncology, and critical care. In these areas, AI can analyze imaging, lab, and genomic data to uncover patterns that may not be immediately apparent to clinicians. For example, AI tools have shown promise in improving diagnostic accuracy in cancer detection and reducing the time clinicians spend on tasks like charting, allowing for more face-to-face time with patients.
However, these tools must be designed with clinician input at key stages, especially for higher-risk conditions, or they may be prone to errors or fail to integrate into clinical workflows. Embedding outcomes-based contracting (OBC) into federal funding and aligning financial incentives with clinical effectiveness encourages the development and use of AI tools that can improve patient outcomes, supporting a broader shift toward value-based care in which outcomes, not just outputs, define success.
The connection between OBC and clinician involvement is straightforward. When clinicians are involved in the design and testing of AI tools, those tools are more likely to be effective in real-world settings, improving outcomes and justifying the financial incentives tied to OBC. AI can provide the most value in high-stakes diagnostic and treatment decisions (oncology, cardiology, and critical care), where errors have large consequences for patients. Yet in precisely those settings, tools should not function autonomously: clinician input is critical to validate AI outputs where mortality or morbidity is high. In contrast, for lower-risk or routine care, such as common colds or minor dermatologic conditions, AI may be useful as a time-saving tool that does not require the same depth of clinician oversight.
Plan of Action
These actionable recommendations aim to help federal agencies and health systems embed clinician involvement, representative data, and continuous oversight into the lifecycle of healthcare AI.
Recommendation 1. Federal Agencies Should Require Clinician Involvement in the Development and Testing of AI Tools used in Clinical Settings.
Federal agencies should require clinician involvement in all aspects of the development and testing of AI healthcare tools. This could be enforced through a combination of agency guidance and tying funding eligibility to specific roles and checkpoints for clinicians. Specifically, agencies like the Office of the National Coordinator for Health Information Technology (ONC) and the Food and Drug Administration (FDA) can issue guidance mandating clinician participation and can tie AI tool development funding to the inclusion of clinicians in the design and testing phases. Guidance can mandate clinician involvement at critical stages: (1) defining clinical tasks and user interface requirements; (2) validating interpretability and performance for diverse populations; (3) piloting in real workflows; and (4) reviewing safety and bias metrics. This would ensure AI tools used in clinical settings are human-centered, effective, and safe.
Key stakeholders to consult in this process include offices within the Department of Health and Human Services (HHS) such as ONC, FDA, and the Agency for Healthcare Research and Quality (AHRQ). ONC and FDA should work to issue guidance encouraging clinician engagement during premarket review, allowing experts a thorough review of scientific data and real-world evidence to ensure the tools are human-centered and able to improve the quality of care.
Recommendation 2. Incentivize Clinician Involvement Through Outcomes-Based Contracting
Federal agencies such as the Department of Health and Human Services (HHS), the Centers for Medicare and Medicaid Services (CMS), and the Agency for Healthcare Research and Quality (AHRQ) should incorporate outcomes-based contracting requirements into AI-related healthcare grant programs. Funding should be awarded to grantees who: (1) include clinicians as part of their AI design teams or advisory boards, (2) develop formal clinician feedback loops, and (3) demonstrate measurable outcomes such as improved diagnostic accuracy or workflow efficiency. These outcomes are essential when thinking about clinician engagement and how it will improve the usability of AI tools and their clinical impact.
Key stakeholders include HHS, CMS, ONC, AHRQ, as well as clinicians, AI developers, and potentially patient advocacy organizations. These requirements should prioritize funding for entities that demonstrate clear clinician involvement at key development and testing phases, with metrics tied to improvements in patient outcomes and clinician satisfaction. This model would align with CMS’s ongoing efforts to foster a patient-centered, data-driven healthcare ecosystem that uses tools designed with clinical needs in mind, as recently emphasized during the health tech ecosystem initiative meeting. Embedding outcomes-based contracting into the federal grant process will link funding to clinical effectiveness and incentivize developers to work alongside clinicians through the lifecycle of their AI tools.
Recommendation 3. Develop Standards for AI Interoperability
ONC should develop interoperability guidelines that enable AI systems to share information across platforms while simultaneously protecting patient privacy. As the challenge of healthcare data fragmentation has become evident, AI tools must seamlessly integrate with diverse electronic healthcare records (EHRs) and other clinical platforms to ensure their effectiveness.
One example of a successful interoperability framework is the Trusted Exchange Framework and Common Agreement (TEFCA), which aims to establish a nationwide infrastructure for the exchange of health information. A model such as this can enable seamless integration across different healthcare settings and EHR systems, ultimately promoting efficient and accurate patient care. This effort should involve consultation with clinicians, electronic health record vendors, patients, and AI developers. These guidelines will help ensure that AI tools can be used safely and effectively across clinical settings.
Recommendation 4. Establish Post-Market Surveillance and Evaluation of Healthcare AI Tools to Enhance Performance and Reliability
Federal agencies such as FDA and AHRQ should establish frameworks for the continuous monitoring of AI tools in clinical settings. These frameworks should support privacy-protected data collection and incorporate feedback loops that allow real-world data from clinicians and patients to inform ongoing updates and improvements to the systems, ensuring the tools remain effective and accurate over time. Special emphasis should be placed on bias audits that can detect disparities in a system’s performance across different patient groups. Bias audits will be key to identifying whether AI tools inadvertently disadvantage specific populations based on the data they were trained on. Agencies should require that these audits be conducted routinely as part of the post-market surveillance process. The surveillance data collected can then feed future development cycles in which AI tools are updated or re-trained to address shortcomings.
Evaluation methods should track clinician satisfaction, error rates, diagnostic accuracy, and the reporting of failures. Incorporating routine bias audits into this ongoing post-market surveillance will help ensure the tools remain equitable and effective over time. Funding for this initiative could be provided through a fee-based structure at no net cost to taxpayers or through federally appropriated grants. Key stakeholders could include clinicians, AI developers, and patients, all of whom would share responsibility for oversight. A minimal illustration of what a routine bias audit could compute follows.
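The sketch below computes an error rate per patient subgroup from surveillance records and flags any group whose performance deviates from the overall rate by more than a chosen threshold. The records, subgroup labels, and 10% threshold are hypothetical; a real audit would use clinically meaningful metrics, confidence intervals, and agreed disparity definitions.

```python
# Minimal sketch of a routine bias audit: compute an error rate per patient
# subgroup and flag any group that deviates from the overall rate by more
# than a chosen threshold. Records and the 10% threshold are hypothetical.
from collections import defaultdict

records = [  # (subgroup, model_correct) pairs from surveillance data
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def bias_audit(records, max_gap=0.10):
    by_group = defaultdict(list)
    for group, correct in records:
        by_group[group].append(correct)
    overall_err = 1 - sum(c for _, c in records) / len(records)
    findings = []
    for group, outcomes in sorted(by_group.items()):
        err = 1 - sum(outcomes) / len(outcomes)
        if abs(err - overall_err) > max_gap:
            findings.append(f"{group}: error {err:.0%} vs overall {overall_err:.0%}")
    return findings

for finding in bias_audit(records):
    print("DISPARITY:", finding)
```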
Conclusion
Integrating AI tools into healthcare has immense potential to improve patient outcomes, streamline clinical workflows, and reduce errors and bias. However, without clinician involvement in the development and testing of these tools, we risk continual system degradation and patient harm. Requiring that all AI systems used in healthcare be human-centered through clinician input will ensure these systems are effective, safe, and aligned with real-world clinical needs. This human-centered approach is critical not only for usability but also for building trust among clinicians and patients, fostering adoption, and ensuring the tools function properly in real-world clinical settings.
In addition, aligning funding with clinical outcomes through outcomes-based contracting creates accountability and ensures lasting impact. When developers are rewarded for improving safety, usability, and equity through clinician involvement, AI tools can become instruments of safer care. The rapid adoption of AI tools makes these challenges urgent and demands safeguards and ethical oversight. By embedding these recommendations into funding opportunities, we will move America toward trustworthy healthcare systems that enhance patient safety, preserve clinician expertise, and remain adaptive while maximizing AI’s potential to improve patient outcomes. Clinician engagement, both in the development process and through ongoing feedback loops, will be the foundation of this transformation. With the right structures in place, we can ensure AI becomes a trusted partner in healthcare and not a risk to it.
This memo was produced as part of Strengthening Pathways to Disease Prevention and Improved Health Outcomes.
A National Blueprint for Whole Health Transformation
Despite spending over 17% of GDP on health care, Americans live shorter and less healthy lives than their peers in other high-income countries. Rising chronic disease and mental health challenges, as well as clinician burnout, expose the limits of a system built to treat illness rather than create health. Addressing chronic disease while controlling healthcare costs is a bipartisan goal; the question now is how to achieve it. A policy window is opening as Congress debates health care again, and in our view, it’s time for a “whole health” upgrade.
Whole Health is a proven, evidence-based framework that integrates medical care, behavioral health, public health, and community support so that people can live healthier, longer, and more meaningful lives. Pioneered by the Veterans Health Administration, Whole Health offers a redesign to U.S. health and social systems: it organizes how health is created and supported across sectors, shifting power and responsibility from institutions to people and communities. It begins with what matters most to people–their purpose, aspirations, and connections–and aligns prevention, clinical care, and social supports accordingly. Treating Whole Health as a shared public priority would help ensure that every community has the conditions to thrive.
Challenge and Opportunity
The U.S. health system spends over $4 trillion annually, more per capita than any other nation, yet underperforms on life expectancy, infant mortality, and chronic disease management. The prevailing fee-for-service model fragments care across medical, behavioral, and social domains, rewarding treatment over prevention. This fragmentation drives costs upward, fuels clinician burnout, and leaves many communities without coordinated support.
At this inflection point, marked by declining health outcomes and growing public awareness of the health system’s failures, federal prevention and public health programs are under review, governors are seeking cost-effective chronic disease solutions, and the National Academies are advocating for new healthcare models. Public demand for evidence-based well-being is also growing, with 65% of Americans prioritizing mental and social health. There is clear demand for a transformation of our health care system so that it delivers results far more efficiently and cost-effectively.
Veterans Health Administration’s Whole Health System Debuted in 2011
Whole Health offers a system-wide redesign for the challenge at hand. As defined by the National Academies of Sciences, Engineering, and Medicine, Whole Health is a framework for organizing how health is created and supported across sectors. It integrates medical care, behavioral health, public health, and community resources. As shown in Figure 1, the framework connects five system principles (People-Centered, Upstream-Focused, Equitable & Accountable, Comprehensive & Holistic, and Team Well-Being) that guide implementation across health and social support systems. The nation’s largest health system, the Veterans Health Administration (VHA), has demonstrated this framework in clinical practice through its Whole Health System since 2011. The VHA’s Whole Health System operates through three core functions: Empower (helping individuals define purpose), Equip (providing community resources like peer support), and Clinical Care (delivering coordinated, team-based care). Together, these elements align with what matters most to people, shifting the locus of control from expert-driven systems to shared agency through partnerships. The Whole Health System at the VHA has reduced opioid use and improved chronic disease outcomes.
Successful State Examples
Beyond the VHA, states have also demonstrated the possibility and benefits of Whole Health models. North Carolina’s Healthy Opportunities Pilots extended Medicaid coverage to housing, food, and transportation, showing fewer emergency visits and savings of about $85 per member per month. Vermont’s Blueprint for Health links primary care practices with community health teams and social services, reducing expenditures by about $480 per person annually and boosting preventive screenings. Finally, the Program of All-Inclusive Care for the Elderly (PACE), currently implemented in 33 states, uses both Medicare and Medicaid funding to coordinate medical and social care for older adults with complex medical needs. While national, program-wide evaluation could be improved, state evaluations such as Kansas’s have found that PACE costs less per beneficiary than nursing home care and that nursing home admissions decline by 5% to 15% among beneficiaries.
Success across each of these examples relies on three pillars: (1) integrating medical, behavioral, social, and public health resources; (2) sustainable financing that prioritizes prevention and coordination; and (3) rigorous evaluation of outcomes that matter to people and communities. While these programs offer early signs of success for Whole Health models, without coordinated leadership, efforts will fragment into isolated pilots, making it difficult to learn and evolve.
A policy window for rethinking the health care system is opening. At this national inflection point, the U.S. can work to build a unified Whole Health strategy that enables a more effective, affordable, and resilient health system.
Plan of Action
To act on this opportunity, federal and state leaders can take the following coordinated actions to embed Whole Health as a unifying framework across health, social, and wellbeing systems.
Recommendation 1. Declare Whole Health a Federal and State Priority.
Whole Health should become a unifying value across federal and state government action on health and wellbeing, embedding prevention, connection, and integration into how health and social systems are organized, financed, and delivered. Actions include:
- Federal Executive Action. The Executive Office of the President should create a Whole Health Strategic Council that brings together Veterans Affairs (VA); Health and Human Services (e.g., the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services (CMS), and the Health Resources and Services Administration (HRSA)); Housing and Urban Development (HUD); and the U.S. Department of Agriculture (USDA) to align strategies, budgets, and programs with Whole Health principles through cross-agency guidance and joint planning. This council should also work with governors to establish evidence-based benchmarks for Whole Health operations and evaluation (e.g., person-centered planning, peer support, team integration) and shared outcome metrics for well-being and population health.
- U.S. Congressional Action. Authorize Whole Health benefits, such as housing assistance, nutrition counseling, transportation to appointments, peer support programs, and well-being centers, as reimbursable services under Medicare, Medicaid, and Affordable Care Act health subsidies.
- State Action. Adopt Whole Health models through Medicaid managed-care contracts and through CDC and HRSA grant implementation. States should also develop support for Whole Health services in trusted local settings, such as libraries, faith-based organizations, and senior centers, to reach people where they live and gather.
Recommendation 2. Realign Financing and Payment to Reward Prevention and Team-Based Care.
Federal payment modalities need to shift from a fee-for-service model toward hybrid value-based models. Models such as per-member-per-month payments with quality incentives can sustain comprehensive, team-based care while delivering outcomes that matter, like reductions in chronic disease and improved overall perceived wellbeing. Actions include:
- Federal Executive Action. Expand Advanced Primary Care Management (APCM) payments to cover Whole Health teams, including clinicians, peer coaches, and community health workers. Ensure that this funding supports coordination, person-centered planning, and upstream prevention, such as food-as-medicine programs. Further, CMS can expand reimbursement for community health workers and peer support roles and standardize their scope-of-practice rules across states.
- U.S. Congressional Action. Invest in Medicare and Medicaid innovation programs, such as the CMS Innovation Center (CMMI), that reward prevention and chronic disease reduction. Additionally, expand tools for payment flexibility, through Medicaid waivers and state innovation funds, to help states adapt Whole Health models to local needs.
- State Action. Require Medicaid managed-care contracts to reimburse Whole Health services, particularly in underserved and rural areas, and encourage payers to align benefit designs and performance measures around well-being. States should also leverage their state insurance departments to guide and incentivize private health insurers to adopt Whole Health payment models.
Recommendation 3. Strengthen and Expand the Whole Health Workforce.
Whole Health practice needs a broad team to be successful: clinicians, community health workers, peer coaches, community organizations, nutritionists, and educators. To build this workforce, governments need to modernize training, assess the workforce and workplace quality, and connect the fast-growing well-being sector with health and community systems. Actions include:
- Federal Executive Action. Through VA and HRSA, establish Whole Health Workforce Centers of Excellence to develop national curricula, set standards, and disseminate evidence on effective Whole Health team-building. Further, CMS should track workforce outcomes such as retention, burnout, and team integration, and evaluate the benefits for health professionals working in Whole Health systems versus traditional health systems.
- U.S. Congressional Action. Expand CMS Graduate Medical Education Funds and HRSA workforce programs to support Whole Health training, certifications, and placements across clinical and community settings.
- State Action. As part of initiatives to grow the health workforce, state governments should expand the definition of a “health professional” to include Whole Health practitioners. Further, states can leverage their role as licensing authorities by creating a “whole health” licensing process that recognizes professionals who meet evidence-based standards for Whole Health.
Recommendation 4. Build a National Learning and Research Infrastructure.
Whole Health programs across the country are proving effective, but lessons remain siloed. A coordinated national system should link evidence, evaluation, and implementation so that successful models can scale quickly and sustainably.
- Federal Executive Action. Direct the Agency for Healthcare Research and Quality, National Institutes of Health, and partner agencies (VA, HUD, USDA) to run pragmatic trials and cost-effectiveness studies of Whole Health interventions that measure well-being across clinical, biomedical, behavioral, and social domains. The federal government should also embed Whole Health frameworks into government-wide research agendas to sustain a culture of evidence-based improvement.
- U.S. Congressional Action. Charter a quasi-governmental entity, modeled on the Patient-Centered Outcomes Research Institute (PCORI), to coordinate Whole Health demonstration sites and research. This new entity should partner with CMMI, HRSA, and VA to test Whole Health payment and delivery models under real-world conditions. It should also establish an interagency team, as well as a state network, to address payment, regulatory, and privacy barriers identified by sites and pilots.
- State Action. Partner with federal agencies through innovation waivers (e.g., Section 1115 and Section 1332 waivers) and learning collaboratives to test Whole Health models and share data across state systems and with the federal government.
Conclusion
The United States spends more on health care than any other nation yet delivers poorer outcomes. Whole Health offers a proven path to reverse this trend, reframing care around prevention, purpose, and integration across health and social systems. Embedding Whole Health as the operating system for America’s health requires three shifts: (1) redefining the purpose from treating disease to optimizing health and well-being; (2) restructuring care to empower, equip, and treat through team-based and community-linked approaches; and (3) rebalancing control from expert-driven systems to partnerships guided by what matters most to people and communities. Federal and state leaders have the opportunity to turn scattered Whole Health pilots into a coordinated national strategy. The cost of inaction is continued fragmentation; the reward of action is a healthier and more resilient nation.
This memo was produced as part of Strengthening Pathways to Disease Prevention and Improved Health Outcomes.
Frequently Asked Questions
How does Whole Health relate to the NIH’s concept of Whole Person Health?
Both approaches emphasize caring for people as integrated beings rather than as a collection of diseases, but they differ in scope and application. Whole Person Health, as used by NIH, focuses on the biological, psychological, and behavioral systems within an individual; it is primarily a research framework for understanding health across body systems. Whole Health is a systems framework that extends beyond the individual to include families, communities, and environments. It integrates medical care, behavioral health, public health, and social support around what matters most to each person. In short, Whole Person Health is about how the body and mind work together; Whole Health is about how health, social, and community systems work together to create the conditions for well-being. Policymakers can use Whole Health to guide financing, workforce, and infrastructure reforms that translate Whole Person Health science into everyday practice.
How does Whole Health relate to Integrative Health?
Integrative Health combines evidence-based conventional and complementary approaches such as mindfulness, acupuncture, yoga, and nutrition to support healing of the whole person. Whole Health extends further. It includes prevention, self-care, and personal agency, and moves beyond the clinic to connect medical care with social, behavioral, and community dimensions of health. Whole Health uses integrative approaches when evidence supports them, but it is ultimately a systems model that aligns health, social, and community supports around what matters most to people. For policymakers, it provides a structure for integrating clinical and community services within financing and workforce strategies.
How does the VA Whole Health System relate to the National Academies’ Whole Health framework?
They share a common foundation but differ in scope and audience. The VA Whole Health System, developed by the Department of Veterans Affairs, is an operational model, a way of delivering care that helps veterans identify what matters most, supports self-care and skill building, and provides team-based clinical treatment. The National Academies’ Whole Health framework builds on the VA’s experience and expands it to the national level. It is a policy and systems framework that applies Whole Health principles across all populations and connects health care with public health, behavioral health, and community systems. In short, the VA model shows how Whole Health works in practice, while the National Academies framework shows how it can guide national policy and system alignment.
Moving Federal Postsecondary Education Data to the States
Moving postsecondary education data collection to the states is the best way to ensure that the U.S. Department of Education can meet its legislative mandates in an era of constrained federal resources. Students, families, policymakers, and businesses need this data to make decisions about investments in education, but cuts to the federal government make it difficult to collect. The Commissioner of the National Center for Education Statistics should use their authority to establish state cooperative groups that collect and submit data from the postsecondary institutions in each state to the federal government, much as K12 schools report to the U.S. Department of Education (ED). With funding from the State Longitudinal Data System grant program and quality measures like the Common Education Data Standards, this new data reporting model will give more power to states, improve trust in education data, and make it easier for everyone to use the data.
Challenge and Opportunity
The Integrated Postsecondary Education Data System (IPEDS) was hit hard by staffing and contract cuts at the U.S. Department of Education in early 2025. Without the staff to collect and clean the data, or the contractors to run the websites and reports, this is the first time in its decades-long history that IPEDS may not be available to the public next year. IPEDS is a vast data collection, including information on grants and scholarships, tuition prices, graduation rates, and staffing levels. Its loss would have serious implications for students and families choosing colleges, for policymakers who want to ensure that colleges graduate students on time, for businesses that want to find trained workers, and for everyone who cares about educating tomorrow’s citizens. Not to mention that these data are required by law under the Higher Education Act and the Civil Rights Act of 1964, among others.
Moving IPEDS data collection to the states is the best way to ensure that the data continue to be collected and released. States already play a large role in collecting data on elementary and secondary education, a model that could work for postsecondary data like IPEDS.
Why do we collect K12 data through states but not postsecondary data? K12 data systems differ substantially from postsecondary ones because of federal legislation. No Child Left Behind catalyzed the expansion of K12 data infrastructure, requiring regular reporting on student test scores, disaggregated achievement data for student groups, and information about teacher qualifications. Though the accountability measures attached to these data were controversial, the reporting processes they catalyzed vastly surpassed those of the postsecondary data system, which was built piecemeal over decades.
In K12 data systems, local education agencies report data to state education agencies, which report to the National Center for Education Statistics (NCES). Reviews at each step in this process ensure that data are high quality and made available quickly for analysis. In postsecondary education, thousands of institutions individually report to NCES, which then takes months to review and release the data for the whole country. Some institutions do report through a state coordinator; Maryland, for example, has one reporter for all public postsecondary institutions and one for all private institutions. The role of state coordinators varies widely across states. Adopting the state reporting model is an opportunity to further streamline this process.
Reporting postsecondary data at the state level has another benefit: it gives states control over future student-level data reporting. That is because states, in addition to fulfilling reporting requirements, also collect student-level data from K12 systems that can be linked to students’ postsecondary and workforce outcomes over time. These statewide longitudinal systems (SLDS) were supported through a federal grant program that began in 2006, and many of the measures collected are required for federal K12 reporting. Some SLDSs contain postsecondary measures like tuition and graduation rates, which are also collected by IPEDS. Though IPEDS does not require student-level data, advocates have been pushing for such data for several years. A student unit record system was proposed in the College Transparency Act. Moving IPEDS data collection to the states will help states develop the systems necessary to implement future student-level data collection in postsecondary education if this or similar legislation passes Congress.
Plan of Action
The NCES Commissioner should establish state-level groups to collect and submit IPEDS data. Instead of receiving thousands of individual reports from postsecondary institutions, NCES would receive 59: one from each state, plus Washington, D.C., and the territories that already report to IPEDS. For states that need support to manage this reporting process, ED could provide funding through an existing grant program. The IPEDS data definitions and reporting requirements would not change, but they could be improved through integration with other data standards.
This plan has several advantageous outcomes. First, IES would be able to meet its data collection and reporting requirements despite limited staff and funding, increasing efficiency and saving taxpayer dollars. Second, states would have access to their data more quickly, minimizing pressure on IES to release data on shorter timelines. This allows data users to work with more current information and gives states the power to conduct their own analyses.
Step 1. The U.S. Department of Education can use its authority to establish state cooperatives to move IPEDS data collection to the states
Under 20 U.S.C. §9547, the NCES Commissioner has the authority to set up cooperatives to produce education statistics. These cooperatives could serve as the governing body and fiscal agent for collecting and submitting data from each state to the federal government. In states that already have an IPEDS state coordinator, the cooperative can build on existing processes for collecting, reviewing, and submitting data. The cooperatives should also involve state higher education executive officers; representatives from public, non-profit, and private postsecondary institutions; and experts in data systems and institutional research.
NCES should publish a charter that states can adopt as they organize their cooperatives. Multi-state education data groups, like the Multi-State Data Collaborative, have developed charters that could be used as a starting point. The sample charter should encourage the development of federated data systems, one model that has been successful in K12 data collection. Federated data systems, as opposed to centralized ones, operate on agreements to link and share data upon request, after which the linked data are destroyed. This model offers stronger protections for data privacy and can be established quickly.
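To make the federated model concrete, the toy sketch below shows a request in which each state node keeps its own records, returns only the fields approved for a single query, matches records on hashed pseudonyms rather than raw IDs, and discards the linked result after use. The node names, fields, and hashing scheme are illustrative assumptions, not a specification of any existing SLDS architecture.

```python
# Toy sketch of a federated request: each state node holds its own records,
# returns only the fields needed for one approved query, and the linked
# result is discarded after use. State data and fields are hypothetical.
import hashlib

def pseudonym(student_id):
    # Shared hashing lets nodes match records without exchanging raw IDs.
    return hashlib.sha256(student_id.encode()).hexdigest()

class StateNode:
    def __init__(self, name, records):
        self.name = name
        self._records = {pseudonym(r["id"]): r for r in records}

    def answer(self, pseudonyms, fields):
        """Return only the requested fields for matching records."""
        return {p: {f: self._records[p][f] for f in fields}
                for p in pseudonyms if p in self._records}

md = StateNode("MD", [{"id": "s1", "enrolled": True, "graduated": True}])
va = StateNode("VA", [{"id": "s1", "employed": True, "wage_band": "B"}])

query_ids = [pseudonym("s1")]
linked = {p: {**md.answer(query_ids, ["graduated"]).get(p, {}),
              **va.answer(query_ids, ["employed"]).get(p, {})}
          for p in query_ids}
print(linked)   # analyst sees only the approved, linked fields
del linked      # per the data-sharing agreement, linked data are destroyed
```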
Step 2. States should commit financial support for data submissions
States will also need financial support to develop or expand data storage systems, pay staff for quality reviews, and manage data submissions. Some of this infrastructure already exists through funding from the IES SLDS grant program. Future grant awards could fund the expansion of these systems to include IPEDS data collection and submission by setting priorities in the grant selection process.
In addition, NCES could contract with a technical assistance provider to support state infrastructure development. Something like the data academy offered by the State Higher Education Executive Officers Association (SHEEO) or the institutional research training offered by the Association for Institutional Research (AIR) would be useful to states that need personalized assistance.
Step 3. States should continue to use the data definitions and guidance developed by NCES
To ensure that the data retain the same high-quality standard as the federal IPEDS collection, states should continue to use the data definitions and guidance developed by NCES. Further integrating these definitions with the Common Education Data Standards (CEDS) ensures that states understand and have access to them. CEDS, the voluntary national standard for reporting K12 data, already includes some postsecondary data elements. Incorporating all IPEDS data definitions into CEDS would streamline data standards across K12 and postsecondary education. CEDS also offers recommendations for building out data infrastructure, like data stores (repositories for multiple databases and file types), which will be helpful for states that need to expand theirs for this effort.
Conclusion
Moving IPEDS data collection to the states is the best way to ensure that NCES meets its legislative mandates in an era of constrained federal resources. This new collection method has other benefits as well. A more decentralized data collection will give more power to states to represent their unique institutions and contexts. By serving as stewards of this data, states will have better access to it, allowing for quicker reporting and analysis. With more access to and control over the data, trust and usage of the data will improve.
Unlike in K12 education, in many states there is more than one state-level authority for higher education. This will require more coordination among state higher education executive officers, state boards of higher education, and other state and regional actors such as accreditors. Private postsecondary institutions would also need to be at the table. The cooperative model provides a structure for bringing these entities together.
The CTA includes a ban on using cooperatives to create a unit record data system, which may affect the use of this authority to create other collaborative systems. This ban stems from the larger debate over student unit record systems. Though IPEDS is not a unit record system, it would still be prudent to review the language in the CTA to ensure that the establishment of cooperatives would not be stymied by this provision should the CTA pass.
IPEDS does not currently collect data at the student level. Because there is no individually identifiable data, privacy is not a greater concern under this proposal than it is under the current system for collecting data.
Investing in Young Children Strengthens America’s Global Leadership
Supporting the world’s youngest children is one of the smartest, most effective investments in U.S. strength and soft power. The cancellation of 83 percent of foreign assistance programs in early 2025, coupled with the dismantling of the U.S. Agency for International Development (USAID), not only caused unnecessary suffering for millions of young children in low-income countries, but also harmed U.S. security, economic competitiveness, and global leadership. As Congress crafts legislation to administer foreign assistance under a new America First-focused State Department, it should recognize that renewed attention and support for young children in low-income countries will help meet stated U.S. foreign assistance priorities to make America safer, stronger, and more prosperous. Specifically, Congress should: (1) prioritize funding for programs that promote early childhood development; (2) bolster State Department staffing to administer resources efficiently; and (3) strengthen accountability and transparency of funding.
Challenge and Opportunity
Supporting children’s development through health, nutrition, education, and protection programs helps the U.S. achieve its national security and economic interests, including the Administration’s priorities to make America “safer, stronger, and more prosperous.” Investing in global education, for example, generates economic growth overseas, creating trade opportunities and markets for the U.S. In fact, 11 of America’s top 15 trading partners once received foreign aid. Healthy, educated populations are associated with less conflict and extremism, which reduces pressures on migration. Curbing the spread of infectious diseases like HIV/AIDS and Ebola makes Americans safer from disease both abroad and at home. As a diplomacy tool, providing support for early childhood development, which is a priority in many partner countries, increases U.S. goodwill and influence in those countries and contributes to America’s geopolitical competitiveness.
Helping young children thrive in low-income countries is a high-return investment in stable economies, skilled workforces, and a stronger America on the world stage. In a July 2025 press release, the State Department recognized how investing in children and families globally contributes to America’s national development and priorities:
Supporting children and families strengthens the foundation of any society. Investing in their protection and well-being is a proven strategy for ensuring American security, solidifying American strength, and increasing American prosperity. When children and families around the world thrive, nations flourish.
The first five years of a child’s life are a period of rapid brain development. Investments in early childhood programs – including parent coaching, child care, and quality preschool – yield large and long-term benefits for individuals and society at large, up to a 13% return on investment, particularly when these interventions are targeted to the most vulnerable and disadvantaged populations. Despite the promise of early childhood interventions, 43% of children under five in low- and middle-income countries are at elevated risk of poor development, leaving them vulnerable to the long-term negative impacts of adversity, such as poverty, malnutrition, illness, and exposure to violence. The costs of inaction are high: countries that underinvest in young children are more likely to have less healthy and educated populations and to struggle with higher unemployment and lower GDPs.
Informed by this powerful evidence, the bipartisan Global Child Thrive Act of 2020 required U.S. Government agencies to develop and implement policies to advance early childhood development – the cognitive, physical, social, and emotional development of children up to age 8 – in partner countries. This legislation supported early childhood development through nutrition, education, health, and water, sanitation, and hygiene interventions. It mandated the U.S. Government Special Advisor for Children in Adversity to lead a coordinated, comprehensive, and effective U.S. government response through international assistance. The bipartisan READ Act complements the Thrive Act by requiring the U.S. to implement an international strategy for basic education, starting with early childhood care and education.
Three examples of USAID-funded early childhood programs terminated in 2025 illustrate how investments in young children not only achieve multiple development and humanitarian goals, but also address U.S. priorities to make America safer, stronger, and more prosperous:
- Cambodia. Southeast Asia is of strategic importance to U.S. security given risks of China’s political and military influence in the region. The Integrated Early Childhood Development activity ($20 million) helped young children (ages 0-2) and their caregivers through improved nutrition, responsive caregiving, agricultural practices, better water, sanitation, and hygiene, and support for children with developmental delays or disabilities. Within a week of cancellation, China filled the USAID vacuum and gained a soft-power advantage by announcing funding for a program to achieve almost identical goals.
- Honduras. Foreign assistance mitigates poverty, instability, and climate shocks that push people to migrate from Central America (and other regions) to the U.S. The Early Childhood Education for Youth Employability activity ($8 million) aimed to improve access to quality early learning for more than 100,000 young children (ages 3-6) while improving the employability and economic security of 25,000 young mothers and fathers, a two-generation approach to addressing drivers of irregular migration.
- Ethiopia. The U.S. has a long-standing partnership with Ethiopia to increase stability and mitigate violent extremism in the Horn of Africa. Fostering peace and promoting security, in turn, expands markets for American businesses in the region. Through a public-private partnership with the LEGO Foundation, the Childhood Development Activity ($46 million) reached 100,000 children (ages 3-6+) in the first two years of the program with opportunities for play-based learning and psycho-social support for coping with negative effects of conflict and drought.
Drastic funding cuts have jeopardized the wellbeing of vulnerable children worldwide and the “soft power” the U.S. has built through relationships with more than 175 partner countries. In January 2025, the Trump Administration froze all foreign assistance and began to dismantle USAID, the lead coordinating agency for children’s programs under the Global Child Thrive Act and READ Act. By March 2025, sweeping cuts ended most USAID programs focused on children’s education, health, water and sanitation, nutrition, infectious diseases (malaria, tuberculosis, neglected tropical diseases, and HIV/AIDS), and support for orphans and vulnerable children. In total, the U.S. eliminated around $4 billion in foreign assistance intended for children in the world’s poorest countries. As a result, an estimated 378,000 children have died from preventable causes such as HIV, malaria, and malnutrition.
In July 2025, Congress voted to approve the Administration’s rescission package, which retracts nearly $8 billion of FY25 foreign assistance funding that was appropriated but not yet spent. This includes support for 6.6 million orphans and vulnerable children (OVC) and $142 million in core funding to UNICEF, the UN agency that helps families in emergencies and vulnerable situations globally. An additional $5 billion of foreign assistance funding expired at the end of the fiscal year while being withheld through a pocket rescission.
As Congress works to reauthorize the State Department, and what remains of USAID, it should see that helping young children globally supports both American values and strategic interests.
Recent U.S. spending on international children’s programs accounted for only 0.09% of the total federal budget and only around 10% of foreign assistance expenditure. If Congress does not act, this small, but impactful funding is at risk of disappearing from the FY 2026 budget.
Plan of Action
For decades, the U.S. has been a leader in international development and humanitarian assistance. Helping the world’s youngest children reach their potential is one of the smartest, most effective investments the U.S. government can make. Congress needs to put in place funding, staffing, and accountability mechanisms that will not only support the successful implementation of the Global Child Thrive Act, but also meet U.S. foreign policy priorities.
Recommendation 1. Prioritize funding for early childhood development through the Department of State
In the FY26 budget currently under discussion, Congress has the responsibility to fund global child health, education, and nutrition programs under the authority of the State Department. These child-focused programs align with America’s diplomatic and economic interests and are vital to young children’s survival and well-being globally.
To promote early childhood development specifically, the Global Child Thrive Act should be reauthorized under the auspices of the State Department. While there is bipartisan support in the House Foreign Affairs Committee to extend authorization of the Global Child Thrive Act through 2027, the current bill had not made it to the House floor as of October 2025, and the Senate bill was delayed by a federal government shutdown.
Congress should pass legislation to appropriate $1.5 billion in FY26 funding for life-saving and life-changing programs for young children, including:
- The Vulnerable Children’s Account, which funds multi-sectoral, evidence-based programs that support the objectives of the Global Child Thrive Act and the Advancing Protection and Care for Children in Adversity Strategy ($50 million).
- The PEPFAR 10% Orphans and Vulnerable Children Set-Aside, which protects and promotes the holistic health and development of children affected by HIV/AIDS ($710 million).
- UNICEF core funding, given the agency’s track record in advancing early childhood development programs in development and humanitarian settings ($300 million).
- Commitments to government-philanthropy partnerships with pooled funds that prioritize the early years including the Global Partnership for Education, Education Cannot Wait, the Early Learning Partnership, and the Global Financing Facility ($430 million).
Funding should be written into legislation so that it is protected from future cuts.
Recommendation 2. Adequately staff the State Department to coordinate early childhood programs
The State Department needs to rebuild expertise on global child development that was lost when USAID collapsed. As a first step, current officials need to be briefed on relevant legislation, including the Global Child Thrive Act and the READ Act. In response to this reduced capacity, Congress should fund a talent pipeline to attract a cadre of professionals within the State Department in Washington, DC, and at U.S. Embassies who can focus on early-years issues across sectors and funding streams. Foreign nationals who have a deep understanding of local contexts should be considered for these roles.
In the context of scarce resources, coordination and collaboration are more important than ever. The critical role of the USG Special Advisor for Children in Adversity should be formally transferred to the State Department to provide technical leadership and implementation support for children’s issues. Within the reorganized State Department, the Special Advisor should sit in the office of the Under Secretary for Foreign Assistance, Humanitarian Affairs and Religious Freedom (F), where they can serve as a leading voice for children and foster inter-agency coordination with the Departments of Agriculture and Labor, the Millennium Challenge Corporation, and other agencies.
Congress also should seek clarification on how the new Special Envoy for Best Future Generations will contribute specifically to early childhood development. The State Department appointed the Special Envoy in June 2025 as a liaison for initiatives impacting the well-being of children under age 18 in the U.S. and globally. In the past three months, the Special Envoy has met with U.S. government officials at the White House and State Department, representatives from 14 countries at the U.N., and non-governmental organizations to discuss coordinated action on children’s issues, such as quality education, nutritious school meals, and ending child labor and trafficking.
Recommendation 3. Increase accountability and transparency for funds allocated for young children
Increased oversight of funds can improve efficiency, prevent delays, and reduce the risk of funds expiring before they reach intended families. The required reporting on FY24 programs is overdue and should be submitted to Congress by the end of December 2025.
Going forward, Congress should require the State Department to report regularly and testify on how money is being spent on young children. Reporting should include evidence-based measures of Return on Investment (ROI) to help demonstrate the impact of early childhood programs. In addition, the Office of Foreign Assistance should issue a yearly report to Congress and to the public which tracks annual inter-agency progress toward implementing the Global Child Thrive Act using a set of indicators, including the approved pre-primary indicator and other relevant and feasible indicators across age groups, programs, and sectors.
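For reporting purposes, ROI can be defined simply so that figures are comparable across programs. The sketch below shows one conventional formulation; the dollar figures are hypothetical placeholders, not actual program data.

```python
# Illustrative ROI calculation for program reporting.
# The figures below are hypothetical, not actual program data.

def roi(total_benefits: float, total_costs: float) -> float:
    """Return on investment: net benefits per dollar spent."""
    return (total_benefits - total_costs) / total_costs

# A hypothetical program costing $10M with $23M in estimated lifetime benefits:
print(f"ROI: {roi(23_000_000, 10_000_000):.0%}")  # ROI: 130%
```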
Conclusion
Investing in young children’s growth and learning around the world strengthens economies, builds goodwill, and secures America’s position as a trusted global leader. To advance U.S. foreign policy priorities, Congress must increase the funding, staffing, and accountability of the State Department’s efforts to promote early childhood development, while also strengthening multi-agency coordination for achieving results. The Global Child Thrive Act provides the legislative mandate and a technical roadmap for the U.S. Government to follow.
By investing only about 1% of the federal budget, USAID contributed to political stability, economic growth, and goodwill with partner countries. A new Lancet article estimates that USAID funding saved 30 million children’s lives between 2001 and 2021 and was associated with a 32% reduction in under-five deaths in low- and middle-income countries. In the past five years alone, funding supported the learning of 34 million children. USAID spending was heavily scrutinized by the State Department, Congress, the Office of Management and Budget, and the Office of the Inspector General. Recent claims of waste, fraud, and abuse are inaccurate, exaggerated, or taken out of context.
The public strongly supports many aspects of foreign assistance that benefit children. A recent Pew Research Center survey found that around 80% of Americans agreed that the U.S. should provide medicine and medical supplies, as well as food and clothing, to people in developing countries. In terms of political support, children’s programs are viewed favorably by lawmakers on both sides of the aisle. For example, the Global Child Thrive Act was introduced by Representatives Joaquin Castro (D-TX) and Brian Fitzpatrick (R-PA) and Senators Roy Blunt (R-MO) and Christopher Coons (D-DE) and passed with bipartisan support.
AI Implementation is Essential Education Infrastructure
State education agencies (SEAs) are poised to deploy federal funding for artificial intelligence tools in K–12 schools. Yet, the nation risks repeating familiar implementation failures that have limited educational technology for more than a decade. The July 2025 Dear Colleague Letter from the U.S. Department of Education (ED) establishes a clear foundation for responsible artificial intelligence (AI) use, and the next step is ensuring these investments translate into measurable learning gains. The challenge is not defining innovation—it is implementing it effectively. To strengthen federal–state alignment, upcoming AI initiatives should include three practical measures: readiness assessments before fund distribution, outcomes-based contracting tied to student progress, and tiered implementation support reflecting district capacity. Embedding these standards within federal guidance—while allowing states bounded flexibility to adapt—will protect taxpayer investments, support educator success, and ensure AI tools deliver meaningful, scalable impact for all students.
Challenge and Opportunity
For more than a decade, education technology investments have failed to deliver meaningful results—not because of technological limitations, but because of poor implementation. Despite billions of dollars in federal and local spending on devices, software, and networks, student outcomes have shown only minimal improvement. In 2020 alone, K–12 districts spent over $35 billion on hardware, software, curriculum resources, and connectivity—a 25 percent increase from 2019, driven largely by pandemic-related remote learning needs. While these emergency investments were critical to maintaining access, they also set the stage for continued growth in educational technology spending in subsequent years.
Districts that invest in professional development, technical assistance, and thoughtful integration planning consistently see stronger results, while those that approach technology as a one-time purchase do not. As the University of Washington notes, “strategic implementation can often be the difference between programs that fail and programs that create sustainable change.” Yet despite billions spent on educational technology over the past decade, student outcomes have remained largely unchanged—a reflection of systems investing in tools without building the capacity to understand their value, integrate them effectively, and use them to enhance learning. The result is telling: an estimated 65 percent of education software licenses go unused, and, as Sarah Johnson pointed out in an EdWeek article, “edtech products are used by 5% of students at the dosage required to get an impact.”
Evaluation practices compound the problem. Too often, federal agencies measure adoption rates instead of student learning, leaving educators confused and taxpayers with little evidence of impact. As the CEO of the EdTech Evidence Exchange put it, poorly implemented programs “waste teacher time and energy and rob students of learning opportunities.” By tracking usage without outcomes, we perpetuate cycles of ineffective adoption, where the same mistakes resurface with each new wave of innovation.
Implementation Capacity is Foundational
A clear solution entails making implementation capacity the foundation of federal AI education funding initiatives. Other countries show the power of this approach. Singapore, Estonia, and Finland all require systematic teacher preparation, infrastructure equity, and outcome tracking before deploying new technologies, recognizing, as a Swedish edtech implementation study found, that access is necessary but not sufficient to achieve sustained use. These nations treat implementation preparation as essential infrastructure, not an optional add-on, and as a result, they achieve far better outcomes than market-driven, fragmented adoption models.
The United States can do the same. With only half of states currently offering AI literacy guidance, federal leadership can set guardrails while leaving states free to tailor solutions locally. Implementation-first policies would allow federal agencies to automate much of program evaluation by linking implementation data with existing student outcome measures, reducing administrative burden and ensuring taxpayer investments translate into sustained learning improvements.
The benefits would be transformational:
- Educational opportunity. Strong implementation support can help close digital skill gaps and reduce achievement disparities. Rural districts could gain greater access to technical assistance networks, students with disabilities could benefit from AI tools designed with accessibility at their core, and all students could build the AI literacy necessary to participate in civic and economic life. Recent research suggests that strategic implementation of AI in education holds particular promise for underserved and geographically isolated communities.
- Workforce development. Educators could be equipped to use AI responsibly, expanding coherent career pathways that connect classroom expertise to emerging roles in technology coaching, implementation strategy, and AI education leadership. Students graduating from systematically implemented AI programs would enter the workforce ready for AI-driven jobs, reducing skills gaps and strengthening U.S. competitiveness against global rivals.
In short, implementation is not a secondary concern; it is the primary determinant of whether AI in education strengthens learning or repeats the costly failures of past ed-tech investments. Embedding implementation capacity reviews before large-scale rollout—focused on educator preparation, infrastructure adequacy, and support systems—would help districts identify strengths and gaps early. Paired with outcomes-based vendor contracts and tiered implementation support that reflects district capacity, this approach would protect taxpayer dollars while positioning the United States as a global leader in responsible AI integration.
Plan of Action
AI education funding must be outcome-focused as well as tool-focused, reducing repeated implementation failures and ensuring that states and districts can successfully integrate AI tools in ways that strengthen teaching and learning. Federal guidance has made progress in identifying priority use cases for AI in education. With stronger alignment to state and local implementation capacity, investments can break cycles of underutilized tools and wasted resources.
A hybrid approach is needed: federal agencies set clear expectations and provide resources for implementation, while states adapt and execute strategies tailored to local contexts. This model allows for consistency and accountability at the national level, while respecting state leadership.
Recommendation 1. Establish AI Education Implementation Standards Through Federal–State Partnership
To safeguard public investments and accelerate effective adoption, the Department of Education, working in partnership with state education agencies, should establish clear implementation standards that ensure readiness, capacity, and measurable outcomes.
- Implementation readiness benchmarks. Federal AI education funds should be distributed with expectations that recipients demonstrate the enabling systems necessary for effective implementation—including educator preparation, technical infrastructure, professional learning networks, and data governance protocols. ED should provide model benchmarks while allowing states to tailor them to local contexts. An illustrative sketch of how such benchmarks could map to tiered support appears after this list.
- Dedicated implementation support. Funding streams should ensure AI education investments include not only tool procurement but also consistent, evidence-based professional development, technical assistance, and integration planning. Because these elements are often vendor-driven and uneven across states, embedding them in policy guidance helps SEAs and local education agencies (LEAs) build sustainable capacity and protect against ineffective or commodified approaches—ensuring schools have the human and organizational capacity to use AI responsibly and effectively.
- Joint oversight and accountability. ED and SEAs should collaborate to monitor and publicly share progress on AI education implementation and student outcomes. Metrics could be tied to observable indicators, such as completion of AI-focused professional development, integration of AI tools into instruction, and adherence to ethical and data governance standards. Transparent reporting builds public trust, highlights effective practices, and supports continuous improvement, while recognizing that measures of quality will evolve with new research and local contexts.
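To make the readiness-benchmark idea concrete, the sketch below scores a district on the four enabling dimensions named above and maps the score to a support tier. The equal weighting, the thresholds, and the tier labels are all hypothetical, offered only to show the mechanics, not as proposed federal benchmarks.

```python
# Hypothetical readiness-to-tier mapping for AI education funding.
# Dimensions mirror the enabling systems named above; weights and
# thresholds are invented for illustration.

READINESS_DIMENSIONS = ["educator_preparation", "technical_infrastructure",
                        "professional_learning", "data_governance"]

def readiness_score(district: dict) -> float:
    """Average readiness (0-1) across the four dimensions, equally weighted."""
    return sum(district[d] for d in READINESS_DIMENSIONS) / len(READINESS_DIMENSIONS)

def support_tier(score: float) -> str:
    """Map a readiness score to a support tier (thresholds are illustrative)."""
    if score >= 0.75:
        return "Tier 1: standard monitoring"
    if score >= 0.50:
        return "Tier 2: targeted technical assistance"
    return "Tier 3: intensive implementation support before funds flow"

district = {"educator_preparation": 0.6, "technical_infrastructure": 0.8,
            "professional_learning": 0.4, "data_governance": 0.7}
score = readiness_score(district)
print(score, "->", support_tier(score))  # 0.625 -> Tier 2: targeted technical assistance
```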
Recommendation 2. Develop a National AI Education Implementation Infrastructure
The U.S. Department of Education, in coordination with state agencies, should encourage a national infrastructure that helps and empowers states to build capacity, share promising practices, and align with national economic priorities.
- Regional implementation hubs. ED should partner with states to create regional AI education implementation centers that provide technical assistance, professional development, and peer learning networks. States would have flexibility to shape programming to their context while benefiting from shared expertise and federal support.
- Research and evaluation. ED, in coordination with the National Science Foundation (NSF), should conduct systematic research on AI education implementation effectiveness and share annual findings with states to inform evidence-based decision-making.
- Workforce alignment. Federal and state education agencies should continue to coordinate AI education implementation with existing workforce development initiatives (Department of Labor) and economic development programs (Department of Commerce) to ensure AI skills align with long-term economic and innovation priorities.
Recommendation 3. Adopt Outcomes-Based Contracting Standards for AI Education Procurement
The U.S. Department of Education should establish outcomes-based contracting (OBC) as a preferred procurement model for federally supported AI education initiatives. This approach ties vendor payment directly to demonstrated student success, with at least 40% of contract value contingent on achieving agreed-upon outcomes, ensuring federal investments deliver measurable results rather than unused tools. An illustrative payment calculation follows the list below.
- Performance-based payment structures. ED should support contracts that include a base payment for implementation support and contingent payments earned only as students achieve defined outcomes. Payment should be based on individual student achievement rather than aggregate measures, ensuring every learner benefits while protecting districts from paying full price for ineffective tools.
- Clear outcomes and mutual accountability. Federal guidance should encourage contracts that specify student populations served, measurable success metrics tied to achievement and growth, and minimum service requirements for both districts and vendors (including educator professional learning, implementation support, and data sharing protocols).
- Vendor transparency and reporting. AI education vendors participating in federally supported programs should provide real-time implementation data, document effectiveness across participating sites, and report outcomes disaggregated by student subgroups to identify and address equity gaps.
- Continuous improvement over termination. Rather than automatic contract cancellation when challenges arise, ED should establish systems that prioritize joint problem-solving, technical assistance, and data-driven adjustments before considering more severe measures.
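To make the payment structure concrete, here is a minimal sketch of the contract math described above: a base payment for implementation support plus a contingent pool earned per student who meets the agreed outcome. The contract value, the 60/40 split (meeting the 40% floor), and the outcome test are hypothetical placeholders, not figures from federal guidance.

```python
# Illustrative outcomes-based contract payment. All values are hypothetical.

CONTRACT_VALUE = 500_000
BASE_SHARE = 0.60          # paid for implementation support
CONTINGENT_SHARE = 0.40    # at least 40% contingent on outcomes

def vendor_payment(students_meeting_outcome: int, students_enrolled: int) -> float:
    """Base payment plus a per-student share of the contingent pool,
    so payment tracks individual student achievement, not aggregates."""
    base = CONTRACT_VALUE * BASE_SHARE
    per_student = (CONTRACT_VALUE * CONTINGENT_SHARE) / students_enrolled
    return base + per_student * students_meeting_outcome

# 1,000 enrolled students, 620 of whom meet the agreed growth target:
print(f"${vendor_payment(620, 1000):,.0f}")  # $424,000
```

Paying the contingent pool per student, rather than on an all-or-nothing aggregate threshold, matches the principle above that every learner should count toward payment while districts are protected from paying full price for ineffective tools.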
Recommendation 4. Pilot Before Scaling
To ensure responsible, scalable, and effective integration of AI in education, ED and SEAs should prioritize pilot testing before statewide adoption while building enabling conditions for long-term success.
- Pilot-to-scale strategy. Federal and state agencies could jointly identify pilot districts representing diverse contexts (rural, urban, and suburban) to test AI implementation models before large-scale rollout. Lessons learned would inform future funding decisions, minimize risk, and increase effectiveness for states and districts.
- Enabling conditions for sustainability. States could build ongoing professional learning systems, technical support networks, and student data protections to ensure tools are used effectively over time.
- Continuous improvement loop. ED could coordinate with states to develop feedback systems that translate implementation data into actionable improvements for policy, procurement, and instruction, ensuring educators, leaders, and students all benefit.
Recommendation 5. Build a National AI Education Research & Development Network
To promote evidence-based practice, federal and state agencies should co-develop a coordinated research and development infrastructure that connects implementation data, policy learning to practice, and global collaboration.
- Implementation research partnerships. Federal agencies (ED, NSF) should partner with states and research institutions to fund systematic studies on effective AI education implementation, with emphasis on scalability and outcomes across diverse student populations. Rather than creating a new standalone program, this would coordinate existing ED and NSF investments while expanding state-level participation.
- Testbed site networks. States should designate urban, suburban, and rural AI education implementation labs or “sandboxes,” modeled on responsible AI testbed infrastructure, where funding supports rigorous evaluation, cross-district peer learning, and local adaptation.
- Evidence-to-policy pipeline. Federal agencies should integrate findings from these research-practice partnerships into national AI education guidance, while states embed lessons learned into local technical assistance and professional development.
- National leadership and evidence sharing. Federal and state agencies should establish mechanisms to share evidence-based approaches and emerging insights, positioning the U.S. as a leader in responsible AI education implementation. This collaboration should leverage continuous, practice-informed research, called living evidence, which integrates real-world implementation data, including responsibly shared vendor-generated insights, to inform policy, guide best practices, and support scalable improvements.
Conclusion
The Department’s guidance on AI in education marks a pivotal step toward modernizing teaching and learning nationwide. To realize AI’s promise, funding must shift from purchasing tools alone to supporting the strategies that ensure their effective implementation. Too often, technologies are purchased only to sit on the shelf while educators lack the support to integrate them meaningfully. International evidence shows that countries investing in teacher preparation and infrastructure before technology deployment achieve better outcomes and sustain them.
Early research also suggests that investments in professional development, infrastructure, and systems integration substantially increase the long-term impact of educational technology. Prioritizing these supports reduces waste and ensures federal dollars deliver measurable learning gains rather than unused tools. The choice before us is clear: continue the costly cycle of underused technologies or build the nation’s first sustainable model for AI in education—one that makes every dollar count, empowers educators, and delivers transformational improvements in student outcomes.
Frequently Asked Questions

Won’t implementation requirements slow innovation?
Clear implementation expectations don’t slow innovation—they make it sustainable. When systems know what effective implementation looks like, they can scale faster, reduce trial-and-error costs, and focus resources on what works to ultimately improve student outcomes.

Will these requirements disadvantage smaller or under-resourced districts?
Quite the opposite. Implementation support is designed to build capacity where it’s needed most. Embedding training, planning, and technical assistance ensures every district, regardless of size or resources, can participate in innovation on an equal footing.

What role do educators play?
AI education begins with people, not products. Implementation guidelines should help educators build on their existing skills to incorporate AI tools into instruction, offer access to relevant professional learning, and provide leadership support, so that AI enhances teaching and learning.

How will implementation quality be measured?
Implementation quality is multi-dimensional and may look different depending on local context. Common indicators could include: educator readiness and training, technical infrastructure, use of professional learning networks, integration of AI tools into instruction, and adherence to data governance protocols. While these metrics provide guidance, they are not exhaustive, and ED and SEAs will iteratively refine measures as research and best practices evolve. Transparent reporting on these indicators will help identify effective approaches, support continuous improvement, and build public trust.

Isn’t investing in implementation too expensive?
Not when you look at the return. Billions are spent on tools that go underused or abandoned within a year. Investing in implementation is how we protect those investments and get measurable results for students.

Doesn’t this create more red tape for states?
The goal isn’t to add red tape—it’s to create alignment. States can tailor standards to local priorities while still ensuring transparency and accountability. Early adopters can model success, helping others learn and adapt.
In Honor of Patient Safety Day, Four Recommendations to Improve Healthcare Outcomes
Through partnership with the Doris Duke Foundation, FAS is working to ensure that rigorous, evidence-based ideas on the cutting edge of disease prevention and health outcomes are reaching decision makers in an effective and timely manner. To that end, we have been collaborating with the Strengthening Pathways effort, a series of national conversations held in spring 2025 to surface research questions, incentives, and overlooked opportunities for innovation with potential to prevent disease and improve outcomes of care in the United States. FAS is leveraging its skills in policy entrepreneurship, working with session organizers, to ensure that ideas surfaced in these symposia reach decision-makers to drive impact in active policy windows.
On this World Patient Safety Day 2025, we share a set of recommendations that align with the Centers for Medicare and Medicaid Services (CMS) National Quality Strategy goal of zero preventable harm in healthcare. Working with Patients for Patient Safety US, which co-led one of the Strengthening Pathways conversations this spring with the Johns Hopkins University Armstrong Institute for Patient Safety and Quality, the issue brief below outlines a bold, modernized approach that uses artificial intelligence technology to empower patients and drive change. FAS continues to explore the rapidly evolving AI and healthcare nexus.
Patient safety is an often-overlooked challenge in our healthcare systems. Whether safety events are caused by medical error, missed or delayed diagnoses, deviations from standards of care, or neglect, hundreds of billions of dollars and hundreds of thousands of lives are lost each year due to patient safety lapses in healthcare settings. Yet most patient safety events are never captured, and clinicians lack the tools to identify and act on them. Here we present four critical proposals for improving patient safety that are worthy of attention and action.
Challenge and Opportunity
Reducing patient death and harm from medical error surfaced as a U.S. public health priority at the turn of the century with the landmark National Academy of Sciences (NAS) report, To Err is Human: Building a Safer Health System (2000). Research shows that medical error is the third-largest cause of preventable death in the U.S. Analysis of Medicare claims data and electronic health records by the Department of Health and Human Services (HHS) Office of the Inspector General (OIG), in a series of reports from 2008 to 2025, consistently finds that 25-30% of Medicare recipients experience harm events across multiple healthcare settings, from hospitals to skilled nursing facilities to long-term care hospitals to rehab centers. Research on the broader population finds similar rates for adult patients in hospitals. The most recent study on preventable harm in ambulatory care found that 7% of patients experienced at least one adverse event, with wide variation across clinical settings (1.8% to 23.6%). Improving diagnostic safety has emerged as the largest opportunity for patient harm prevention. New research estimates that 795,000 patients in the U.S. annually experience death or harm due to missed, delayed, or ineffectively communicated diagnoses. The annual cost to the health care system of preventable harm and its health care cascades is conservatively estimated to exceed $200 billion. This cost is ultimately borne by families and taxpayers.
In its National Quality Strategy, the Centers for Medicare and Medicaid Services (CMS) articulated an aspirational goal of zero preventable harm in healthcare. The National Action Alliance for Patient and Workforce Safety, now managed by the Agency for Healthcare Research and Quality (AHRQ), has a goal of 50% reduction in preventable harm by 2026. These goals cannot be achieved without a bold, modernized approach that uses AI technology to empower patients and drive change. Under-reporting negative outcomes and patient harms keeps clinicians and staff from identifying and implementing solutions to improve care. In its latest analysis (July 2025), the OIG finds that fewer than 5% of medical errors are ever reported to the systems designed to gather insights from them. Hospitals failed to capture half of harm events identified via medical record review, and even among captured events, few led to investigation or safety improvements. Only 16% of events required to be reported externally to CMS or State entities were actually reported, meaning critical oversight systems are missing safety signals entirely.
Multiple research papers over the last 20 years find that patients will report things that providers do not. But there has been no simple, trusted way for patient observations to reach the right people at the right time in a way that supports learning and improvement. Patients could be especially effective in reporting missed or delayed diagnoses, which often manifest across the continuum of care, not in one healthcare setting or a single patient visit. The advent of AI systems provides an unprecedented opportunity to address patient safety and improve patient outcomes if we can improve the data available on the frequency and nature of medical errors. Here we present four ideas for improving patient safety.
Recommendation 1. Create AI-Empowered Safety Event Reporting and Learning System With and For Patients
HHS can, through CMS, AHRQ, or another agency, develop an AI-empowered National Patient Safety Learning and Reporting System that enables anyone, including patients and families, to directly report harm events or flag safety concerns, including in real or near-real time. Doing so would ensure everyone in the system has the full picture — so healthcare providers can act quickly, learn faster, and protect more patients.
This system will:
- Develop a reporting portal to collect, triage and analyze patient reported data directly from beneficiaries to improve patient and diagnostic safety.
- Redesign and modernize Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys to include questions that capture beneficiaries’ experiences and outcomes related to patient and diagnostic safety events.
- Redefine the Beneficiary and Family Centered Care Quality Improvement Organizations (BFCC QIO) scope of work to integrate the QIOs into the National Patient Safety Learning and Reporting System.
The learning system will:
- Use advanced triage (including AI) to distinguish high-signal events and route credible reports directly to the care team and oversight bodies that can act on them (a minimal routing sketch follows this list).
- Solicit timely feedback and insights in support of hospitals, clinics, and nursing homes to prevent recurrence, as well as feedback over time on patient outcomes that manifest later, e.g., as a result of missed or delayed diagnoses.
- Protect patients and providers by focusing on efficacy of solutions, not blame assignment.
- Feed anonymized, interoperable data into a national learning network that will spot systemic risks sooner and make aggregated data available for transparency and system learning.
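As a minimal illustration of the triage-and-route step flagged above, the sketch below scores an incoming report and decides where to send it. A production system would use a trained model with clinical review; the keyword list, thresholds, and routing destinations here are hypothetical stand-ins.

```python
# Toy triage-and-route logic for patient-submitted safety reports.
# Keyword scoring stands in for an AI model; routes are hypothetical.

HIGH_SIGNAL_TERMS = {"wrong medication", "missed diagnosis", "unresponsive",
                     "fall", "infection", "wrong site"}

def triage(report_text: str) -> dict:
    """Score a report's harm signal and decide where it should be routed."""
    text = report_text.lower()
    signal = sum(term in text for term in HIGH_SIGNAL_TERMS)
    if signal >= 2:
        route = ["care team", "oversight body"]       # credible, urgent
    elif signal == 1:
        route = ["care team"]                         # needs clinical review
    else:
        route = ["learning network (aggregate only)"] # anonymized trend data
    return {"signal_score": signal, "route_to": route}

print(triage("My father was given the wrong medication after a fall."))
# {'signal_score': 2, 'route_to': ['care team', 'oversight body']}
```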
Recommendation 2. Create a Real-time ‘Patient Safety Dashboard’ using AI
HHS should build an AI-driven platform that integrates patient-reported safety data — including data from the new National Patient Reporting and Learning System, recommended above — with clinical data from electronic health records to create a real-time ‘patient safety dashboard’ for hospitals and clinics. This dashboard will empower providers to improve care in real time, and will:
- Help healthcare providers make accurate and timely diagnoses and avoid errors.
- Make patient reporting easy, effective, and actionable.
- Use AI to triage harm signals and detect systemic risk in real time.
- Build shared national infrastructure for healthcare reporting for all stakeholders.
- Align incentives to reward harm reduction and safety.
By harnessing the power of AI, providers will be able to respond faster, identify at-risk patients more effectively, and prevent harm, thereby improving outcomes. This “central nervous system” for patient safety will be deployed nationally to help detect safety signals in real time, connect information across settings, and alert teams before harm occurs.
Recommendation 3. Mine Billing Data for Deviations from Standards of Care
Standards of care are guidelines that define the processes, procedures, and treatments that patients should receive in various medical and professional contexts. Standards ensure that individuals receive appropriate and effective care based on established practices. Most standards of care are developed and promulgated by medical societies. Not all clinicians and clinical settings adhere to standards of care, and deviations are sometimes clinically appropriate depending on the case at hand. Nonetheless, standards of care exist for a reason, and when deviations coincide with medical errors and negative outcomes for patients, they should be noted so that clinicians can learn from these outcomes and improve.
Some patient safety challenges are evident right in the billing data submitted to CMS and insurers. For example, deviations from standards of care can be detected by comparing clinical diagnosis codes with the procedures and treatments actually billed, then checking those combinations against widely accepted standards of care. By using CMS billing data, the government could show variability in compliance with standards of care, identifying opportunities to drive the development, augmentation, and wider adoption of those standards, reducing medical error and improving outcomes.
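As a minimal illustration of this kind of analysis, the sketch below flags claims whose diagnosis codes lack a procedure the standard of care would call for. The diagnosis code, procedure code, and the standard itself are hypothetical stand-ins for real ICD-10/CPT codes and medical-society guidelines; a real analysis would aggregate claims per patient over the relevant time window.

```python
# Toy check of billing data against a standard of care.
# All codes and the standard below are hypothetical placeholders.

# Hypothetical standard: a claim with diagnosis "DX-DIABETES" should
# include an annual eye-exam procedure code "PX-EYE-EXAM".
STANDARDS = {"DX-DIABETES": {"required_procedures": {"PX-EYE-EXAM"}}}

claims = [
    {"patient": 1, "diagnoses": {"DX-DIABETES"}, "procedures": {"PX-EYE-EXAM"}},
    {"patient": 2, "diagnoses": {"DX-DIABETES"}, "procedures": set()},
]

def deviations(claim: dict) -> list:
    """Flag required procedures missing from a claim's billing codes."""
    flags = []
    for dx in claim["diagnoses"]:
        required = STANDARDS.get(dx, {}).get("required_procedures", set())
        missing = required - claim["procedures"]
        if missing:
            flags.append(f"patient {claim['patient']}: {dx} missing {sorted(missing)}")
    return flags

for claim in claims:
    print(deviations(claim))  # [] then a missing eye-exam flag for patient 2
```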
Giving standard setters real data to adapt and develop new standards of care is a powerful tool for improving patient outcomes.
Recommendation 4. Create a Patient Safety AI Testbed
HHS can also establish a Patient Safety AI Testbed to evaluate how AI tools used in diagnosis, monitoring, and care coordination perform in real-world settings. This testbed will ensure that AI improves safety, not just efficiency — and can be co-led by patients, clinicians, and independent safety experts. This is an expansion of the testbeds in the HHS AI Strategic Plan.
The Patient Safety Testbed could include:
- Funding for independent AI test environments to monitor real-world safety and performance over time.
- Public reliability benchmarks and “AI safety labeling.”
- Required participation by AI vendors and provider systems.
Conclusion
There are several key steps that the government can take to address the major loss of health, dollars, and lives due to medical errors, while simultaneously bolstering treatment guidelines, driving the development of new transparent data, and holding the medical establishment accountable for improving care. Here we present four proposals. None of them is particularly expensive when juxtaposed against the tremendous savings they would drive throughout our healthcare system. We hope the Administration’s commitment to patient safety will lead it to adopt these proposals and drive a new era in which caregivers, healthcare systems, and insurance payers work together to improve patient safety and care standards.
This memo was produced as part of Strengthening Pathways to Disease Prevention and Improved Health Outcomes.
ASTRA: An American Space Transformation Regulatory Act
From helping farmers maximize crop yields to creating new and exotic pathways for manufacturing, the space economy has the potential to triple over the next decade. Unlocking abundance in the space economy will require lowering the barriers for new space actors, aligning with international partners, and supporting traditional measures of risk assessment (like insurance) to facilitate space investment.
Unlike countries with newer space programs that can benefit from older programs’ experience, exploration, and accidents, the United States has organically developed a patchwork regime to manage human and non-human space flight. While this approach serves and supports the interests of government agencies and their mission-specific requirements, it hinders the deployment of new and novel technologies and gives other countries motive to deploy extraterritorial regulatory regimes, further complicating the outlook for new space actors. There is an urgent need to rationalize this regime, to provide a clear and logical pathway for deploying new technologies, and to facilitate responsible activities in orbit so that space resources need not be governed by scarcity.
As the impacts of human space activities become clearer, there is also a growing need to address the sustainability of human space operations and their capacity to constrain a more abundant human future. While the recent space commercialization executive order attempts to rationalize some of this work, it also preserves some of the regulatory disharmony in the current system while taking actions likely to create additional conflicts with impacted communities. The United States should retake the lead, following the examples set by New Zealand, the European Union, and other emerging space actors, in providing a comprehensive space regulatory framework that ensures the safe, sustainable, and responsible growth of the space industry.
Challenge and Opportunity
The Outer Space Treaty creates a set of core responsibilities that must be followed by any country wishing to operate a space program, including (but not limited to) international responsibility for national activities, authorization and supervision of space activities carried out by non-governmental entities, and liability for damage caused to other countries. In the United States, individual government agencies have adopted responsibilities over individual elements of human activity in space, including (but again, not limited to) the Federal Aviation Administration (FAA) over launch and reentry, the Department of Commerce (DOC) over remote sensing, the Federal Communications Commission (FCC) and DOC over spectrum management, and the State Department and DOC over space-related export controls. The FCC has also asserted its regulatory authority over space in other domains, in particular the risk of in-space collision and space debris. If a company wishes to launch a satellite with remote sensing capabilities, it must participate in every single one of these regulatory permitting processes.
Staffing and statutory authority create significant challenges for American space regulators at a time when other countries are getting their respective regulatory houses in order. The offices that manage these programs are relatively small: the Commercial Remote Sensing Regulatory Affairs (CRSRA) division of the Office of Space Commerce (OSC) is currently staffed by two full-time government employees, while the FAA has only five handling space flight authorizations. CRSRA was briefly hamstrung earlier this year when its director was released (and then immediately rehired) as part of the Trump Administration’s firing of probationary employees at the National Oceanic and Atmospheric Administration, likely collateral damage in the Administration’s attempts to target programs that interact with climate change.
The lack of personnel capacity creates particular challenges for the FAA, which has struggled to keep pace with launch approvals under its Part 450 launch authorization process amid a record-breaking number of mishap investigations, novel space applications, and application revisions in 2023. Last year, FAA officials testified that the increase in SpaceX launches alone has led to hundreds of hours of monthly overtime, constituting over 80% of staff overtime paid for by the American taxpayer. Other companies report that SpaceX-related accidents create shifting goalposts, pushing FAA officials to avoid confirming receipt of necessary launch documents so as not to start Part 450’s 180-day review clock. The shifting goalposts and prior approvals also mean that certain launch vehicles are subject to different requirements, creating incentives for companies to focus their efforts on non-commercial and defense-related missions.
Without updates to the law, the statutory justification for increasingly important regulatory responsibilities is also unsound, particularly those that pertain to orbital debris. After the Supreme Court’s Loper Bright ruling, it is unlikely that the theory of law underpinning the FCC’s regulation of space debris could withstand court challenges. This creates a particularly dangerous situation given the long-term impact that the breakup of even small objects can have on the orbital environment, and it is likely the reason that space companies have yet to openly challenge the FCC’s assertion of regulatory authority in this space. The uncertainty also challenges the insurance industry, which has suffered significant financial losses over the past few years, losses that have driven certain insurers out of the market entirely.
As human space activities increase, the demands created by the Outer Space Treaty’s requirement for supervision and liability are likely to see corresponding increases. Some countries hosting astronomical observatories that are significantly impaired by light, radio, and other electromagnetic pollution from commercial spacecraft have enacted laws relating to satellite brightness and interference. The number of high-profile debris strikes on property – like when a metal component from the International Space Station crashed into a Florida family’s occupied home – will also increase as rocket second stages and larger satellites return to Earth. The Mexican government is exploring options to sue SpaceX over environmental contamination and debris near its Starbase, Texas launch site.
The unique properties of outer space and other planetary surfaces demand considerations we take for granted on Earth. On Earth, we can count on nature’s capacity to “heal itself”: to revert to natural states, regrow, and regenerate after destructive resource extraction. Planetary surfaces with little to no atmosphere or wind, such as the Moon, will instead preserve footprints and individual tire tracks for decades to thousands of years, permanently altering geological features. Flecks of paint and bacteria from rovers can create artificial signatures in spectroscopy and biology, contaminating science in undocumented ways that are likely to disrupt astrobiology and geology for generations to come. We risk rendering an advanced human civilization unable to unlock discoveries that depend on pristine science or to explore the existence of extraterrestrial life.
Significant safety concerns resulting from increased human space activities could create additional regulatory molasses if left unaddressed. An increasing and under-characterized population of debris raises the risk to multi-million-dollar instruments and to continued operations in the event of a collision cascade. Current studies – both conservative and optimistic – point to the fact that we are already in a regime of “unstable” debris growth in orbit, complicating the mass deployment of large constellations.
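To see why “unstable” growth worries researchers, consider a toy model in which debris is added at a constant rate (launches, breakups), removed in proportion to the population (drag, deorbiting), and generated by collisions in proportion to the population squared (fragments striking fragments). The parameters below are invented for illustration and are not drawn from the studies cited above; the point is only that above a threshold, collisions outpace removal and growth compounds.

```python
# Toy debris model: constant source, linear removal, quadratic collisions.
# All parameters and units are invented for illustration only.

def simulate(years: int, n0: float, source: float = 0.05,
             removal: float = 0.05, collision_rate: float = 0.01) -> float:
    """Return the debris population (arbitrary units) after `years` steps."""
    n = n0
    for _ in range(years):
        n += source - removal * n + collision_rate * n * n
    return n

# Below the instability threshold the population settles near an
# equilibrium; above it, the quadratic collision term takes over.
for n0 in (1.0, 4.0):
    print(f"start {n0:.1f} -> after 80 years: {simulate(80, n0):.1f}")
```

With these illustrative parameters the model has a stable equilibrium at a low population and an unstable one above it, which is the qualitative situation the cited studies describe: once the population crosses the unstable threshold, no plausible removal rate catches up.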
Unfortunately, current international law creates challenges for the mass removal of orbital debris. Article 8 of the Outer Space Treaty establishes that ownership of objects in space does not change by virtue of their being in space. This makes the seizure of other countries’ objects illegal, and it isn’t difficult to imagine satellite-removal capabilities being weaponized (a scenario depicted in the James Bond film “You Only Live Twice”). If it is not financially advantageous to mitigate space debris, or if export control concerns prevent countries from allowing debris removal, then the most likely long-term result is unchecked debris growth, followed by increasingly draconian regulatory requirements. None of this is good for industry.
Absent a streamlined data-sharing platform for satellite location and telemetry, which could be decimated by federal cuts to the Traffic Coordination System for Space (TraCSS), the cost and responsibility of satellite collision and debris avoidance will encourage many commercial space operators to fly blind. The underutilized space insurance industry, already reeling from massive losses in recent years, would face another source of pressure. If the barriers to satellite servicing and recovery remain high, it is probable that the only market for such capabilities will be military missions, inherently inhibiting the ability of these systems to attend to the orbital environment.
While abundance speaks to increasing available resources, chemistry and the law of conservation of matter remind us that our atmosphere and the oxygen we breathe are finite, fragile, and potentially highly reactive to elements commonly found in spacecraft. We are only starting to understand the impact of spacecraft reentry on the upper atmosphere, though there is already significant cause for concern. Nitrous oxide (N2O), a common compound used in spacecraft propulsion, is known to deplete ozone. Aluminum, one of the most common elements in spacecraft, bonds easily with ozone. Black carbon from launches increases stratospheric temperature, changing circulation patterns. When large rockets explode, the aftermath can create enormous impacts for aviation and rain debris onto beaches and critical areas. To top it all off, the reliance of space companies on the defense sector for financing means that many of these assets and constellations are inherently tied to defense activities, increasing the probability that they will be actively targeted or compromised as a result of foreign policy actions, or fast-tracked through regulatory streamlining that prevents public comment periods from raising valid safety concerns.
We are quickly approaching a day when the United States government may no longer be the primary regulator of our own industry. The European Union in May 2025 introduced its own Space Act with extraterritorial requirements for companies wishing to participate in the European market. Many of these provisions are well-considered and justified, though the uncertainty and extra layer of compliance that they create for American companies is likely to increase the cost of business further. The EU has created a process for recognizing equivalent regimes in other countries. Under current rules, and especially under the Administration’s new commercial space executive order, the United States regulatory regime is unlikely to be judged as “equivalent.” Given the concerns from EU member states and companies alike about the actions of U.S. space companies, it is more likely than not that the EU will seek to rein in the U.S. space industry in ways that could limit our ability to remain internationally competitive.
Plan of Action
Recommendation 1. Congress should devote resources to studying threats to the abundance of space, such as the impacts of human space exploration, damage to the ozone layer, and inadvertent geoengineering resulting from orbital reentry and fuel deposition.
This should include the impact of satellite interference on space situational awareness capabilities and space weather forecasting, which are critical to stabilizing the space economy.
While the regulatory environment for space should be rationalized to unleash the potential of the space economy, research is also needed to better understand the impacts of space activities and exploration, which are already being felt terrestrially. An abundant space economy is meaningless if the continual reentry of satellites destroys the ozone layer and renders the planet uninhabitable. Congress should continue to fund research on the upper atmosphere and protect the work of the NOAA Chemical Sciences Laboratory to understand the upper atmospheric impacts of human space activities.
The astronomy community has also voiced significant concerns about the impact of satellites on its observations. Satellites show up as bright streaks in astronomical images and can significantly disrupt radio telescopes and weather forecasting sensors alike. This impact is felt not only by ground-based telescopes and sensors but also by those in orbit, like the Hubble Space Telescope. It could have consequences for tracking other phenomena, including (but not limited to) space debris, space weather, cislunar space domain awareness, and planetary defense. Further, light pollution inhibits our ability to discover new physics through astronomical observations: the Hubble Tension, the neutrino mass problem, quantum gravity, and the matter-antimatter imbalance all suggest that major discoveries are waiting on the horizon. Failure to preserve the sky could inadvertently restrain our ability to unleash a technological revolution akin to the one that produced Einstein's theory of relativity and the nuclear age.
There is still much more work to be done to understand these topics and to develop workable solutions that can be adopted by new space actors. The bipartisan Dark and Quiet Skies Act, introduced in 2024, narrowly addresses but one of these needs; sustained support for NASA, NOAA, and NSF science is also necessary, given the specialized technology required for stratospheric measurements and advanced metrology.
Recommendation 2. Congress should create an independent Space Promotion and Regulatory Agency.
Ideally, Congress should create a new and independent space promotion and regulatory agency responsible both for promoting civil and commercial space activities and for authorizing and supervising all U.S.-based commercial space organizations. This body, whose activities should be oriented toward fulfilling U.S. obligations under the Outer Space Treaty, should be explicitly empowered to engage in space traffic coordination or management, to manage liability for U.S. space organizations, and to consolidate all existing permitting processes under one organization. Staff from the relevant offices (typically 2–25 people each) should be relocated from their existing departments and agencies to the new body to provide continuity of operations and institutional knowledge.
Congress should seek to maintain a credible firewall between the promotion and regulatory elements of the organization. The promotion element could be responsible for providing assistance to companies, including loans and grants for technology and product development (on the model of the DOE Loan Programs Office) as well as general advocacy. The regulatory element should be responsible for licensing domestic space activities, operating the Traffic Coordination System for Space (TraCSS), and any other supervision activities that may become necessary. This would be distinct from the existing Office of Space Commerce function in that the organization would be independent, have the authority to regulate space commerce, and ideally have the resources to fulfill the advocacy and promotion elements of the mission.
In an ideal world, the Office of Space Commerce (OSC) would be able to fulfill this mission with an expanded mandate, regulatory authority, and actual resources to promote commercial space development. In practice, the office was isolated within the National Oceanic and Atmospheric Administration under the Biden Administration and has run into similar bottlenecks with the Secretary of Commerce in the second Trump Administration. Independent authority and resourcing would not only give the director greater plenary authority but also allow them to better balance the views of interagency partners, shedding some of the baggage that comes from broader relationships between government departments with broad mandates.
This recommendation explicitly does not suggest eliminating the FAA's or OSC's functions, but rather merging the two, preserving current staff and institutional knowledge, and allowing them to work in the same independent organization where knowledge and information can be shared more easily. Creating a new regulatory agency on top of the FAA or OSC is not recommended; the purpose is to streamline. Preference should be given to assigning all of the functions to a single actor rather than creating a new and duplicative function on top of the existing structures in Commerce and the FAA.
Given the significant terrestrial impact of spectrum issues related to space, delegating those functions to the FCC and NTIA probably still makes sense, so long as orbital debris and other space regulatory functions are consolidated into a new body that is clearly given such regulatory authority by Congress.
Recommendation 3. Congress should consider requiring that insurance be purchased for all space activities to address the Outer Space Treaty’s liability requirements.
Insurance ensures that nascent areas of growth are minimally disruptive to other interests, e.g., damaging critical infrastructure by spraying GPS satellites with debris shrapnel, or harming the general public when skyscraper-sized pressurized fuel tanks explode on the ground. Insurance is broadly recognized for its ability to create the kind of market stability that large capital investments require and to promote long-term infrastructure improvements.
Insurance market participation is also likely to encourage venture capital and financial industry involvement in commercial space activities, moving the market from dependency on government funding toward self-sustaining commercial enterprise. Despite this, out of 13,000 active satellites, only about 300 are insured. The satellite insurance industry's losses have been staggering over the last two years, making the pricing of risk difficult for new space actors and investors alike. Correct pricing of risk is essential for investors to make informed decisions about which companies or enterprises to back.
Current rules require coverage of $500 million in damages to third parties; costs beyond this are drawn from the reservoir of the American taxpayer, up to a government indemnification ceiling of about $3.1 billion. The current incentive structure favors the deployment of cheap, mass-produced satellites over more sophisticated vehicles that drive technological leadership and progress. The failure or loss of control of such assets can create a permanent hazard to the orbital environment and increase the risk of a collision cascade over the lifetime of the object. Increasing the number of covered satellites should help price overall market risk more correctly, making space investments more accessible and attractive for companies looking to deploy commercial space stations; in-space servicing, assembly, and manufacturing satellites; and other similarly sophisticated systems. These technologies are more likely to contribute to abundance in the broader market than a temporary, mass-produced investment that does only one thing and ends by degrading everyone's long-term access to specific orbits.
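To make the tiered structure concrete, here is a minimal sketch of the liability split as we read the figures above. The tier boundaries ($500 million insured layer, roughly $3.1 billion government ceiling) come from the text; the assumption that losses above the ceiling revert to the operator is our reading, not statutory language.

```python
def liability_split(damages_usd: float,
                    insured_layer: float = 500e6,   # required third-party coverage (cited above)
                    gov_ceiling: float = 3.1e9) -> dict:
    """Illustrative split of third-party damages under the tiered structure
    described above. Figures are as cited in the text; the treatment of losses
    above the ceiling is our assumption, not statutory text."""
    insurer = min(damages_usd, insured_layer)
    taxpayer = min(max(damages_usd - insured_layer, 0.0), gov_ceiling - insured_layer)
    operator_excess = max(damages_usd - gov_ceiling, 0.0)  # assumed to revert to the operator
    return {"insurer": insurer, "taxpayer": taxpayer, "operator_excess": operator_excess}

# Example: a $2B third-party loss leaves $1.5B with the taxpayer under this reading.
print(liability_split(2e9))
```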
The Outer Space Treaty's liability provisions make a healthy, risk-based insurance market particularly important. If a country or company invests in a small satellite swarm, and some percentage of that swarm goes defunct and produces a collision cascade and/or damages on-the-ground assets, then U.S. entities (including the government) could be on the hook for potentially unlimited liabilities in a global, multi-trillion-dollar space economy. It is almost certain that the United States government has not adequately accounted for such an event and that this risk is not currently priced into the market.
A thriving insurance market can also facilitate other forms of investment by giving investors greater confidence and greater tolerance for the remaining risks. It would also serve as an important signal to international partners that the United States is willing to act responsibly in the orbital environment and has the capacity to create the financial incentive schemes needed to honor its commitments. By requiring insurance, Congress can use the prescriptive power of law to ensure transparency for both investors and the general public.
Recommendation 4. The United States should create an inventory of abandoned objects and establish rules governing the abandonment of objects to enable commercial orbital salvage operations.
Given that Article 8 of the Outer Space Treaty could serve as an impediment to orbital debris removal, countries could establish rules or lists of objects that have reached end of life and are now effectively abandoned. The Treaty does not necessarily prevent State Parties from creating rules governing the authorization and supervision of objects, including transfer of ownership at the end of a mission. An inventory of abandoned objects that are “OK for recovery” could help manage concerns related to export controls, intellectual property, or other issues associated with one country recovering another country’s objects. Likewise, countries could also explore the creation of salvage rights or rules to incentivize orbital debris removal missions.
Recommendation 5. The State Department should seek equivalency for the United States under the EU Space Act as soon as possible, and should engage the EU in productive discussions to limit the probability of regulatory divergence, which would likely more than double the regulatory burden placed on U.S. companies.
With the introduction of the EU Space Act, the primary regulator for U.S. space companies with an international presence is likely to be the European Union. The U.S. Department of State should continue to pursue constructive engagement with the European Commission, Parliament, and Council to limit the risk of regulatory divergence and to ensure that the United States provides adequate safeguards to quickly achieve equivalency, obviating the need for U.S. space companies to worry about compliance with more than one framework. This would ultimately result in a lower regulatory burden for U.S. companies, particularly if measures are taken to consolidate the existing U.S. space regulatory environment as described in Recommendation 2.
The failure of the U.S. to get its own house in order is likely to motivate other countries to take similar measures, increasing compliance costs for American companies while foreign operators may only need to rely on their domestic frameworks. Without equivalency, U.S. operators are likely to face multiple competing regulatory regimes, especially given the history of countries outside the EU adopting EU regulatory frameworks in order to secure market access (the Brussels Effect).
There is a foreign policy need for the U.S. and EU to get on the same page (and fast). Given that companies from the United States are more likely to seek access to European markets than those in the PRC, an asymmetric space policy environment opens a new sphere for contentious policy negotiations between the U.S. and EU. Transatlantic alignment is likely to produce greater leverage in negotiations with the PRC while creating a more stable market where U.S. and European industry can both thrive. Similarly, an antagonistic relationship is more likely to push the European Union toward greater strategic autonomy. Fear of dependence on U.S. companies is already creating new barriers for the United States in other areas, and space has been specifically called out as a key area of concern.
Further, space actors may be less familiar with the extent to which trade negotiations can result in asymmetric concessions that disadvantage one industry to gain benefits in another. To put it bluntly, it is unlikely that President Trump will go to bat for SpaceX (especially given his current relationship with its owner) if it means giving up opportunities to sell American farm exports. One need only look at the recent semiconductor export controls decision, allegedly made to facilitate a bilateral meeting between the two presidents in Beijing.
Conclusion
Unlocking the abundance of the space economy, and doing so responsibly, will require a stable and trustworthy regulatory environment, repair of frameworks that enable monopolistic behavior, and correct pricing of risk to facilitate sustainable investment in the outer space environment. Abundance in one realm at the expense of all others (as when a new spacecraft pauses all air traffic in the Caribbean after exploding) is no longer abundance. If the United States does not act soon, other countries' more modern regulatory frameworks, which offer a more agile environment for deploying new technology, are likely to accelerate the growth of their advantages in orbit.
If space is there, and if we are going to climb it, then regulatory reform must be a challenge that we are willing to accept, something that we are unwilling to postpone, for a competition that we intend to win.
Clean Water: Protecting New York State Private Wells from PFAS
This memo responds to a policy need at the state level that arises from a lack of relevant federal data. The Environmental Protection Agency (EPA) has a learning agenda question that asks, "To what extent does EPA have ready access to data to measure drinking water compliance reliably and accurately?" This memo helps fill that gap because EPA does not measure compliance for private wells.
Per- and polyfluoroalkyl substances (PFAS) are widely distributed in the environment, in many cases including the contamination of private water wells. Given their links to numerous serious health consequences, initiatives to mitigate PFAS exposure among New York State (NYS) residents reliant on private wells were included among the priorities outlined in the annual State of the State address and have been proposed in state legislation. We therefore performed a scenario analysis exploring the impacts and costs of a statewide program testing private wells for PFAS and reimbursing the installation of point of entry treatment (POET) filtration systems where exceedances occur.
Challenge and Opportunity
Why care about PFAS?
Per- and polyfluoroalkyl substances (PFAS), a class of chemicals containing millions of individual compounds, are of grave concern due to their association with numerous serious health consequences. A 2022 consensus study report by the National Academies of Sciences, Engineering, and Medicine categorized various PFAS-related health outcomes based on critical appraisal of existing evidence from prior studies; this committee of experts concluded that there is high confidence of an association between PFAS exposure and (1) decreased antibody response (a key aspect of immune function, including response to vaccines), (2) dyslipidemia (abnormal fat levels in one's blood), (3) decreased fetal and infant growth, and (4) kidney cancer, and moderate confidence of an association between PFAS exposure and (1) breast cancer, (2) liver enzyme alterations, (3) pregnancy-induced high blood pressure, (4) thyroid disease, and (5) ulcerative colitis (an autoimmune inflammatory bowel disease).
Extensive industrial use has rendered these contaminants virtually ubiquitous in both the environment and humans, with greater than 95% of the U.S. general population having detectable PFAS in their blood. PFAS take years to be eliminated from the human body once exposure has occurred, earning their nickname as “forever chemicals.”
Why focus on private drinking water?
Drinking water is a common source of exposure.
Drinking water is a primary pathway of human exposure. Combining both public and private systems, it is estimated that approximately 45% of U.S. drinking water sources contain at least one PFAS. Rates specific to private water supplies vary by location and threshold. Sampling in Wisconsin revealed that 71% of private wells contained at least one PFAS and that 4% contained levels of perfluorooctanoic acid (PFOA) or perfluorooctanesulfonic acid (PFOS), two common PFAS compounds, exceeding the Environmental Protection Agency's (EPA) Maximum Contaminant Levels (MCLs) of 4 ng/L. Sampling in New Hampshire, meanwhile, found that 39% of private wells exceeded the state's Ambient Groundwater Quality Standards (AGQS), which were established in 2019 and range from 11 to 18 ng/L depending on the specific PFAS compound. Notably, while the EPA MCLs represent legally enforceable levels accounting for the feasibility of remediation, the agency has also released health-based, non-enforceable Maximum Contaminant Level Goals (MCLGs) of zero for PFOA and PFOS.
PFAS in private water are unregulated and expensive to remediate.
In New York State (NYS), nearly one million households rely on private wells for drinking water; despite this, there are currently no standardized well testing procedures, and effective well water treatment is unaffordable for many New Yorkers. As of April 2024, the EPA has established federal MCLs for several specific PFAS compounds and mixtures, and its National Primary Drinking Water Regulations (NPDWR) require public water systems to begin monitoring and publicly reporting levels of these PFAS by 2027; if monitoring reveals exceedances of the MCLs, public water systems must also implement solutions to reduce PFAS by 2029. In contrast, there are no standardized testing procedures or enforceable limits for PFAS in private water. Additionally, testing and remediating private wells carry high costs that many well owners cannot afford: PFAS testing runs to hundreds of dollars, and installing and maintaining an effective filtration system can cost several thousand.
How are states responding to the problem of PFAS in private drinking water?
Several states, including Colorado, New Hampshire, and North Carolina, have already initiated programs offering well testing and financial assistance for filters to protect against PFAS.
- After piloting its PFAS Testing and Assistance (TAP) program in one county in 2024, Colorado will expand it to three additional counties in 2025. The program covers the expenses of testing and a $79 nano pitcher (point-of-use) filter. Residents are eligible if PFOA and/or PFOS in their wells exceed the EPA MCLs of 4 ng/L; filters are free if household income is ≤80% of the area median income and offered at a 30% discount otherwise.
- The New Hampshire (NH) PFAS Removal Rebate Program for Private Wells offers greater flexibility and higher cost coverage than Colorado's PFAS TAP, with reimbursements of up to $5,000 for point-of-entry or point-of-use treatment system installation and up to $10,000 for connection to a public water system. Though other residents may also participate in the program and receive delayed reimbursement, households earning ≤80% of the area median family income are offered the additional assistance of payment made directly to a treatment installer or contractor (prior to installation), relieving the applicant of fronting the cost. Eligibility is based on testing showing exceedances of the EPA MCLs of 4 ng/L for PFOA or PFOS or 10 ng/L for PFHxS, PFNA, or HFPO-DA (trademarked as "GenX").
- The North Carolina PFAS Treatment System Assistance Program offers flexibility similar to New Hampshire's in the types of water treatment reimbursed, including multiple point-of-entry and point-of-use filter options as well as connection to public water systems. It is additionally notable for its tiered funding system, with reimbursement amounts ranging from $375 to $10,000 based on both the household's income and the type of water treatment chosen. The tiered system categorizes participants based on whether their household income is (1) <200%, (2) 200-400%, or (3) >400% of the Federal Poverty Level (FPL). As in New Hampshire, payments may be made directly to contractors prior to installation for the lowest income bracket, which qualifies for full installation costs; others are reimbursed after the fact. This program uses the aforementioned EPA MCLs for PFOA, PFOS, PFHxS, PFNA, and HFPO-DA ("GenX") and also recognizes the additional EPA MCL of a hazard index of 1.0 for mixtures containing two or more of PFHxS, PFNA, HFPO-DA, or PFBS.
An opportunity exists to protect New Yorkers.
Launching a program in New York similar to those initiated in Colorado, New Hampshire, and North Carolina was among the priority initiatives described by New York Governor Kathy Hochul in the annual State of the State she delivered in January 2025. In particular, Hochul’s plans to improve water infrastructure included “a pilot program providing financial assistance for private well owners to replace or treat contaminated wells.” This was announced along with a $500 million additional investment beyond New York’s existing $5.5 billion dedicated to water infrastructure, which will also be used to “reduce water bills, combat flooding, restore waterways, and replace lead service lines to protect vulnerable populations, particularly children in underserved communities.” In early 2025, the New York Legislature introduced Senate Bill S3972, which intended to establish an installation grant program and a maintenance rebate program for PFAS removal treatment. Bipartisan interest in protecting the public from PFAS-contaminated drinking water is further evidenced by a hearing focused on the topic held by the NYS Assembly in November 2024.
Though these efforts would likely begin as a smaller pilot program with limited geographic scope, such a pilot would aim to inform a broader, statewide intervention. Challenges to planning an intervention of this scope include uncertainty about both the total funding that would be allotted to such a program and its total costs. These costs depend on factors such as the eligibility criteria employed by the state, the proportion of well owners who opt into sampling, and the proportion of tested wells found to have PFAS exceedances (which will further vary based on whether the state adopts the EPA MCLs or the NYS Department of Health MCLs of 10 ng/L for PFOA and PFOS). We address the uncertainty across these possibilities by estimating the number of wells serviced and the associated costs under combinations of 10 potential eligibility criteria, 5 possible rates of PFAS testing among eligible wells (5, 25, 50, 75, and 100%), and 5 possible rates of PFAS>MCL findings and subsequent POET installation among wells tested (5, 25, 50, 75, and 100%), as sketched in the code below.
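To make the cost model concrete, the following Python sketch implements it. The per-well unit costs are not stated in this memo; $280 per test and $5,000 per POET installation are our inferred placeholders, chosen because they reproduce the totals reported below under the central assumptions (75% of eligible wells tested; 25% of tested wells with exceedances and installations, per the note at the end of this memo).

```python
from itertools import product

TEST_COST = 280     # assumed per-well PFAS test cost (inferred placeholder; see lead-in)
POET_COST = 5_000   # assumed per-household POET installation rebate (inferred placeholder)

def scenario_cost(eligible_wells: int, test_rate: float, exceed_rate: float,
                  test_cost: float = TEST_COST, install_cost: float = POET_COST) -> float:
    """Total program cost: every tested well incurs a test; the fraction of
    tested wells with exceedances also receives an installation rebate."""
    tested = eligible_wells * test_rate
    installed = tested * exceed_rate
    return tested * test_cost + installed * install_cost

# The full analysis sweeps 10 eligibility criteria (which set eligible_wells)
# crossed with 5 testing rates and 5 exceedance/installation rates.
rates = [0.05, 0.25, 0.50, 0.75, 1.00]
grid = {(t, e): scenario_cost(901_441, t, e) for t, e in product(rates, rates)}

# Central scenario (75% tested, 25% exceed/install) for all 901,441 wells:
print(f"${scenario_cost(901_441, 0.75, 0.25):,.0f}")  # ≈ $1,034,403,548, matching the reported total
```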
Scenario Analysis
Key findings
- Over 900,000 residences across NYS are supplied by private drinking wells (Figure 1).
- The three most costly scenarios were offering testing and installation rebates for (Table 1):
- Every private well owner (901,441 wells; $1,034,403,547)
- Every well located within a census tract designated as disadvantaged (based on NYS Disadvantaged Community (DAC) criteria) AND/OR belonging to a household with annual income <$150,000 (725,923 wells; $832,996,643)
- Every well belonging to a household with annual income <$150,000 (705,959 wells; $810,087,953)
- The three least costly scenarios were offering testing and installation rebates for (Table 1):
- Every well located within a census tract in which at least 51% of households earn below 80% of the area median income (22,835 wells; $26,191,688)
- Every well belonging to a household earning <100% of the Federal Poverty Level (92,661 wells; $106,328,398)
- Every well located within a census tract designated as disadvantaged (based on NYS Disadvantaged Community (DAC) criteria) (93,840 wells; $107,681,400)
- Of six income-based eligibility criteria, household income <$150,000 included the greatest number of wells, whereas location within a census tract in which at least 51% of households earn below 80% of the area median income (a definition of low-to-moderate income used for programs coordinated by the U.S. Department of Housing and Urban Development) included the fewest. This amounts to a cost difference of $783,896,265 between these two eligibility scenarios.
- The six income-based criteria varied dramatically in their inclusion of wells across NYS falling within disadvantaged or small communities (Table 2):
- For disadvantaged communities, this ranged from 12% (household income <100% federal poverty level) to 79% (income <$150,000) of all wells within disadvantaged communities being eligible.
- For small communities, this ranged from 2% (census tracts in which at least 51% of households earn below 80% area median income) to 83% (income <$150,000) of all wells within small communities being eligible.
Plan of Action
New York State is already considering a PFAS remediation program (e.g., Senate Bill S3972). The 2025 draft of the bill directed the New York Department of Environmental Conservation to establish an installation grant program and a maintenance rebate program for PFAS removal treatment, and it established general eligibility criteria and per-household funding amounts. To our knowledge, S3972 did not pass in 2025, but its program provides a strong foundation for potential future action. Our suggestions below resolve some gaps in S3972, adding detail that the implementing agency could follow and overall cost estimates that the Legislature could use when weighing financial impacts.
Recommendation 1. Remediate all disadvantaged wells statewide
We recommend including every well located within a census tract designated as disadvantaged (based on NYS Disadvantaged Community (DAC) criteria) and/or belonging to a household with annual income <$150,000, the eligibility criteria that protect the widest range of vulnerable New Yorkers. Using these criteria, we estimate a total program cost of approximately $833 million, or $167 million per year if the program were implemented over a 5-year period. Even accounting for the other projects the state will be undertaking at the same time, this annual cost falls well within the additional $500 million that the 2025 State of the State reports will be added in 2025 to the existing $5.5 billion state investment in water infrastructure.
Recommendation 2. Target disadvantaged census tracts and household incomes
Wells in DAC census tracts account for a variety of disadvantages. Including the NYS DAC criteria helps account for the heterogeneity of challenges experienced by New Yorkers by weighing statistically meaningful thresholds for 45 different indicators across several domains. These include factors relevant to the risk of PFAS exposure, such as industrial land use and proximity to active landfills.
Wells in low-income households account for cross-sectoral disadvantage. The DAC criteria alone are imperfect:
- Major criticisms include its underrepresentation of rural communities (only 13% of rural census tracts, compared to 26% of suburban and 48% of urban tracts, have been DAC-designated) and failure to account for some key stressors relevant to rural communities (e.g., distance to food stores and in-migration/gentrification).
- Another important note is that wells within DAC communities account for only 10% of all wells within NYS (Table 2). While wells within DAC-designated communities are important to consider, including only DAC wells in an intervention would therefore be very limiting.
- Whereas DAC designation is a binary consideration for an entire census tract, place-based criteria such as this are limited in that any real community comprises a spectrum of socioeconomic status and (dis)advantage.
The inclusion of income-based criteria is useful because financial strain is a universal indicator of resource constraint that can help identify the most-in-need across every community. Further, income-based criteria can widen the program's eligibility to reach a much greater proportion of well owners (Table 2). Finally, in contrast to the DAC criteria's binary nature, income thresholds can be adjusted to include more or fewer wells depending on final budget availability.
- Of the income thresholds evaluated, income <$150,000 is recommended due to its inclusion not only of the greatest number of well owners overall, but also the greatest percentages of wells within disadvantaged and small communities (Table 2). These two considerations are both used by the EPA in awarding grants to states for water infrastructure improvement projects.
- As an alternative to selecting one single income threshold, the state may also consider maximizing cost effectiveness by adopting a tiered rebate system similar to that used by the North Carolina PFAS Treatment System Assistance Program, as sketched below.
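A minimal sketch of what such tiering might look like follows. The income brackets mirror North Carolina's (<200%, 200-400%, >400% FPL), but the per-tier dollar amounts and shares are hypothetical placeholders; only the program's overall $375-$10,000 range is cited above.

```python
def tiered_rebate(income_pct_fpl: float, treatment: str) -> int:
    """Hypothetical tiered rebate schedule modeled loosely on NC's program.
    Bracket boundaries follow NC; dollar caps and tier shares are placeholders."""
    # Placeholder maximums by treatment type (NC reimburses between $375 and $10,000).
    caps = {"POUT": 375, "POET": 5_000, "public_connection": 10_000}
    cap = caps[treatment]
    if income_pct_fpl < 200:      # lowest bracket: full cost, payable directly to the contractor
        return cap
    elif income_pct_fpl <= 400:   # middle bracket: partial reimbursement (placeholder share)
        return int(cap * 0.6)
    else:                         # highest bracket: smallest reimbursement (placeholder share)
        return int(cap * 0.3)

print(tiered_rebate(150, "POET"))  # lowest-income household installing a POET -> 5000
```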
Recommendation 3. Alternatives to POETs might be more cost-effective and accessible
A final recommendation is for the state to maximize the breadth of its well remediation program by also offering reimbursements for point-of-use treatment (POUT) systems and for connections to public water systems, not just for POET installations. While POETs are effective at PFAS removal, they require invasive changes to household plumbing and prohibitively expensive ongoing maintenance, two factors that may give well owners pause even if they are eligible for an initial installation rebate. Colorado's PFAS TAP program models a less invasive and extremely cost-effective POUT alternative. We estimate that if NYS were to provide the same POUT filters as Colorado, the total cost of the program (using the recommended eligibility criteria of location within a DAC-designated census tract and/or household income <$150,000) would be $163 million, or $33 million per year across 5 years: a total decrease of nearly $670 million relative to providing POETs. Connection to public water systems, though a significant initial investment, provides an opportunity to streamline drinking water monitoring and remediation going forward and eliminates the need for ongoing, costly individual interventions and maintenance.
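For transparency, this estimate follows directly from the same (inferred, not program-stated) unit costs used in the scenario sketch above, with Colorado's $79 POUT filter substituted for the $5,000 POET rebate: 725,923 wells × 0.75 tested × ($280 test + 0.25 × $79 filter) ≈ $163 million, versus roughly $833 million with the POET rebate in place of the filter.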
Conclusion
Well testing and rebate programs provide an opportunity to take preventative action against the serious health threats associated with PFAS exposure through private drinking water. Individuals reliant on PFAS-contaminated private wells for drinking water are likely to ingest the chemicals on a daily basis. There is therefore no time to waste in taking action to break this chain of exposure. New York State policymakers are already engaged in developing this policy solution; our recommendations can help both those making the policy and those tasked with implementing it to best serve New Yorkers. Our analysis shows that a program to mitigate PFAS in private drinking water is well within scope of current action and that fair implementation of such a program can help those who need it most and do so in a cost-effective manner.
While the Safe Drinking Water Act regulates the United States' public drinking water supplies, there is currently no federal regulation of private wells. Most states also lack regulation of private wells. Introducing new legislation to change this would require significant time and political will, which is unlikely to materialize given resource limitations, concerns around well owners' privacy, and the EPA's current prioritization of deregulation.
Decreasing blood serum PFAS levels is likely to decrease negative health impacts. Exposure via drinking water is particularly associated with elevated serum PFAS levels, while appropriate water filtration has demonstrated efficacy in reducing them.
We estimated total costs assuming that 75% of eligible wells are tested for PFAS and that of these tested wells, 25% are both found to have PFAS exceedances and proceed to have filter systems installed. This PFAS exceedance/POET installation rate was selected because it falls between the rates of exceedances observed when private well sampling was conducted in Wisconsin and New Hampshire in recent years.
For states which do not have their own tools for identifying disadvantaged communities, the Social Vulnerability Index developed by the Centers for Disease Control and Prevention (CDC) and Agency for Toxic Substances and Disease Registry (ATSDR) may provide an alternative option to help identify those most in need.
Turning the Heat Up On Disaster Policy: Involving HUD to Protect the Public
This memo addresses HUD's learning agenda question, "How do the impacts, costs, and resulting needs of slow-onset disasters compare with those of declared disasters, and what are implications for slow-onset disaster declarations, recovery aid programs, and HUD allocation formulas?" We examine this using heat events as our slow-onset disaster and hurricanes as our declared disaster.
Heat disasters, a classic "slow-onset disaster," cause significant damages that can exceed those of more commonly declared disasters like hurricanes, largely because of the high loss of life from heat. The U.S. Department of Housing and Urban Development (HUD) can play an important role in heat disasters because most heat-related deaths occur in the home or among those without homes; the housing sector is therefore a primary lever for public health and safety during extreme heat events. To enhance HUD's ability to protect the public from extreme heat, we suggest enhancing interagency data collection and sharing to facilitate the federal disaster declarations needed for HUD engagement, working heat mitigation into HUD's programs, and modifying allocation formulas, especially if a heat disaster is declared.
Challenge and Opportunity
Slow-Onset Disasters Never Declared As Disasters
Slow-onset disasters are events that develop gradually over extended periods of time. Slow-onset events like drought and extreme heat can evolve over weeks, months, or even years. By contrast, sudden-onset disasters, like hurricanes, occur within a short and defined timeframe. This classification is used by international bodies such as the United Nations Office for Disaster Risk Reduction (UNDRR) and the International Federation of Red Cross and Red Crescent Societies (IFRC).
HUD's main disaster programs typically require a federal disaster declaration, making HUD action reliant on action by the Federal Emergency Management Agency (FEMA) under the Stafford Act. However, to our knowledge, no slow-onset disaster has ever received a federal disaster declaration, and this category is not specifically addressed in federal policy.
We focus on heat disasters, a classic slow-onset disaster that has received considerable attention recently. No heat event has been declared a federal disaster, despite several requests. Notable examples include the 1980 Missouri heat and drought events, the 1995 Chicago heat wave, which caused an estimated 700 direct fatalities, and the 2022 California heat dome and concurrent wildfires. For each request, FEMA determined that the events lacked sufficient "severity and magnitude" to qualify for federal assistance. FEMA has held that declared disasters must be discrete and time-bound, rather than prolonged or seasonal atmospheric conditions.
“How do the impacts, costs, and resulting needs of slow-onset disasters compare with those of declared disasters?”
Heat causes impacts in the same categories as traditional disasters, including mortality, agriculture, and infrastructure, but the impacts can be harder to measure because of their slow onset. For example, heat-related illness and mortality as recorded in medical records are widely known to be significant underestimates of the true health impacts. The same is likely true across categories.
Sample Impacts
We analyze heat impacts within categories commonly considered by federal agencies (human mortality, agriculture, infrastructure, and costs) and compare them to counterparts for hurricanes, a classic sudden-onset disaster. Multi-sectoral reports of heat impacts have also been compiled by other entities, including Swiss Re and the Atlantic Council Climate Resilience Center.
Using data from the Centers for Disease Control and Prevention (CDC), we identified 3,478 deaths between 1999 and 2020 with a cause of "cataclysmic storms" (e.g., hurricanes; International Classification of Diseases code X37) and 14,461 deaths with a cause of heat (X30). It is important to note that the CDC database only includes death certificates that list heat as a cause of death, which is widely recognized to be a significant undercount. Despite this limitation, CDC data remain the most comprehensive national dataset for monitoring mortality trends.
HUD can play an important role in reducing heat mortality. In the 2021 Pacific Northwest heat dome, most of the deaths occurred indoors (reportedly 98% in British Columbia), many in homes without adequate cooling. In far hotter Maricopa County, Arizona, in 2024, 49% of all heat deaths were among people experiencing homelessness and 23% occurred in the home. Across the U.S., HUD programs could therefore be a critical lever for protecting public health and safety by providing housing and ensuring heat-safe housing.
Agricultural Labor
Farmworkers are particularly vulnerable to extreme heat, and housing can be part of a solution to protect them. According to the Environmental Protection Agency (EPA), between 1992 and 2022, 986 workers across industry sectors died from exposure to heat, with agricultural workers disproportionately affected. According to the Environmental Defense Fund, farmworkers in California are about 20 times more likely to die from heat-related stress than the general population, and the average U.S. agricultural worker is exposed to 21 working days in the summer growing season that are unsafe due to heat. A study found that the number of unsafe working days due to extreme heat will double by midcentury, increasing occupational health risks and reducing labor productivity in critical sectors. Adequate cooling in the home could help protect outdoor workers by enabling cooling periods during nonwork hours, another way in which HUD could have a positive impact on heat.
Infrastructure and Vulnerability
Rising temperatures significantly increase energy demand, particularly due to widespread reliance on air conditioning. This surge in demand raises the risk of power outages during heat events, exacerbating public health risks if the grid fails. In urban areas the built environment adds heat, while in rural areas residents are at greater risk due to a lack of infrastructure. These effects increase cooling costs and worsen air quality, compounding health vulnerabilities in low-income and urban populations. All of these impacts are areas where HUD could improve the situation by facilitating and encouraging energy-efficient homes and cooling infrastructure.
Costs
In all categories we examined, estimates of U.S.-wide costs due to extreme heat rivaled or exceeded costs of hurricanes. For mortality, the estimated economic impact (scaled by the value of statistical life, VSL = $11.6 million) of extreme heat reached $168 billion, significantly exceeding the $40.3 billion in VSL losses from hurricanes over the same period. Infrastructure and productivity costs reflect the same imbalance: extreme heat caused an estimated $100 billion in productivity losses in 2024 alone, with over 60% of U.S. counties currently experiencing reduced economic output due to heat-related labor stress, while Hurricanes Helene and Milton together generated $113 billion in damage during the 2024 Atlantic hurricane season.
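As a check on the arithmetic (our verification, not an additional estimate), these VSL figures are consistent with multiplying the CDC death counts cited earlier by the stated VSL:

\[
14{,}461 \times \$11.6\,\text{M} \approx \$168\,\text{B}, \qquad 3{,}478 \times \$11.6\,\text{M} \approx \$40.3\,\text{B}.
\]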
Crop damage reveals the disproportionate toll of heat and drought, with 2024 seeing $11 billion in heat/drought impacts compared to $6.8 billion from hurricanes. The dairy industry experiences a substantial recurring burden from extreme heat, with annual losses of $1.5 billion attributed to heat-induced declines in production, reproduction, and livestock fatalities. Broader economic impacts from heat-related droughts are severe, including $14.5 billion in combined damages from the 2023 Southern and Midwestern drought and heatwave, and $22.1 billion from the 2022 Central and Eastern heat events. Comparatively, Hurricane Helene and Hurricane Milton produced $78.7 billion and $34.3 billion in damages, respectively. Extreme heat and drought exert long-term, widespread, and escalating economic pressures across public health, agriculture, energy, and infrastructure sectors. A reassessment of federal disaster frameworks is necessary to appropriately prioritize and allocate funds for heat-related resilience and response efforts.
Resulting Needs
Public Health and Medical Care: Immediate care and resources for heat stroke, heat exhaustion, dehydration, and respiratory issues are key to preventing deaths from heat exposure. Vulnerable populations, including children, the elderly, and the unhoused, are particularly at risk. There is an increased need for emergency medical services and access to cooling centers to prevent the exacerbation of heat stress and to prevent fatalities.
Cooling and Shelter: Communities require access to public cooling centers and to air conditioning. A clean water supply is also essential to maintain health.
Infrastructure and Repair: Air conditioning use increases energy consumption, which can lead to power outages. Updated infrastructure is essential to handle demand and prevent blackouts. Buildings need heat-resistant materials to reduce urban heat island effects.
Emergency Response Capacity: Emergency management systems need to be strengthened in order to issue early warnings, produce evacuation plans, and mobilize cooling centers and medical services. Reliable communication systems that provide real-time updates on heat index and health impacts will be key to improving community preparedness.
Financial Support and Insurance Coverage: Agricultural, construction, and service workers are particularly vulnerable to heat events. As temperatures rise, these workers may lose income and must be compensated.
Social Support and Community Services: There is an increasing need for targeted services for the elderly, unhoused, and low-income communities. Outreach programs, delivery of cooling resources, and shelter options must be communicated and functional in order to reduce mortality. Resilience across these sectors will improve as data definitions and methods are standardized and as funding allocated specifically for heat increases.
“What are implications for slow-onset disaster declarations, recovery aid programs, and HUD allocation formulas?”
Slow-onset disaster declarations
No heat event (nor, to our knowledge, any other slow-onset disaster) has been declared a disaster under the Stafford Act, the primary legal authority for the federal government to provide disaster assistance. The statute defines a "major disaster" as "any natural catastrophe… which in the determination of the President causes damage of sufficient severity and magnitude to warrant major disaster assistance to supplement the efforts and available resources of States, local governments, and disaster relief organizations in alleviating the damage, loss, hardship, or suffering caused thereby." Though advocacy organizations have claimed that the lack of a disaster declaration stems from the Stafford Act's omission of heat, FEMA's position is that amendment is unnecessary and that a heat disaster could be declared if state and local needs exceed their capacity during a heat event. This claim is credible, as the COVID-19 pandemic was declared a disaster without explicit mention in the Stafford Act.
Though FEMA's official position has been openness to supporting an extreme-heat disaster declaration, the fact remains that none has been declared. There is opportunity to improve processes to enable future heat declarations, especially as heat waves affect more people more severely for longer periods. The Congressional Research Service suggests that much of the difficulty may stem from FEMA regulations' focus on assessing uninsured losses, which makes it less likely that FEMA will recommend that the President declare a disaster. Heat events can be hard to pin down with defined time periods and locations, and the damage is often to health and other areas that are slow to quantify. Real-time monitoring systems that quantify multi-sectoral damage could therefore be deployed to provide the information needed. Such systems have been designed for extreme heat, and similar systems are being tested for wildfire smoke; these could rapidly be put into use.
The U.S. Department of Housing and Urban Development (HUD) plays a critical role in long-term disaster recovery, primarily by providing housing assistance and funding for community development initiatives (see table above). However, HUD’s ability to deploy emergency support is contingent upon disaster declaration under the Stafford Act and/or FEMA activation. This restriction limits HUD’s capacity to implement timely interventions, such as retrofitting public housing with cooling systems or providing emergency housing relief during extreme heat events.
Without formal recognition of a heat event as a disaster, HUD remains constrained in its ability to deliver rapid and targeted support to vulnerable populations facing escalating risks from extreme temperatures. Without declared heat disasters, the options for HUD engagement hinge on either modifying program requirements or supporting the policy and practice needed to enable heat disaster declarations.
HUD Allocation Formulas
Congress provides funding through supplemental appropriations to HUD following major disasters, and HUD determines how best to distribute funding based on disaster impact data. The calculations are typically based on Individual and Public Assistance data from FEMA, verified loss data from the Small Business Administration (SBA), claims from insurance programs such as the National Flood Insurance Program (NFIP), and housing and demographic data from the U.S. Census Bureau and the American Community Survey. CDBG-DR and CDBG-MIT typically require that at least 70% and 50% of funds, respectively, benefit low- and moderate-income (LMI) communities. Funding is limited to areas with a presidentially declared disaster.
For example, the Disaster Relief Supplemental Appropriations Act, 2025 (approved on 12/21/2024) appropriated $12.039 billion in CDBG-Disaster Recovery (CDBG-DR) funds for disasters "that occurred in 2023 or 2024." HUD focused its funding on areas with the most serious and concentrated unmet housing needs among areas that experienced a declared disaster within the time frame. Data used to determine the severity of unmet housing needs included FEMA and SBA inspections of damaged homes; these data fed HUD's allocation formula.
Opportunities exist to adjust allocation formulas to be more responsive to extreme heat, especially if CDBG is activated for a heat disaster. For example, HUD is directed to use the funds “in the most impacted and distressed areas,” which it could interpret to include housing stock that is unlikely to protect occupants from heat.
Gaps
Extreme heat presents multifaceted challenges across public health, infrastructure, and agriculture, necessitating a coordinated and comprehensive federal response. The underlying gap is the lack of any precedent for declaring an extreme-heat disaster; without such a declaration, numerous disaster-related programs in HUD, FEMA, and other federal agencies cannot be activated. Furthermore, likely because of this underlying gap, disaster-related programs have not focused on protecting public health and safety from extreme heat despite its large and growing impact.
Plan of Action
Recommendation 1. Improve data collection and sharing to enable disaster declarations.
Because the lack of real-time, quantitative data of the type most commonly used to support disaster declarations (i.e., uninsured losses and mortality) is likely a key hindrance to heat-disaster declarations, processes should be put in place to rapidly collect and share these data.
Health impacts could be tracked most easily by the CDC, using the existing National Syndromic Surveillance System and an expansion of the existing influenza-burden methodology, and by the National Highway Traffic Safety Administration's Emergency Medical Services Activation Surveillance Dashboard. To get real-time estimates of mortality, simple tools can be built that estimate mortality based on prior heatwaves; such tools are already being tested for wildfire smoke mortality. Tools like this use weather data as inputs and produce mortality estimates as outputs, so many agencies could implement them; NOAA, CDC, FEMA, and EPA are all potential hosts. Additional systems need to be developed to track other impacts in real time, including agricultural losses, productivity losses, and infrastructure damage.
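To illustrate how simple such a tool could be, here is a hedged Python sketch of the kind of estimator described: it scales a county's baseline daily deaths by an exposure-response slope. The temperature threshold, slope, and inputs are all hypothetical placeholders; a real tool would fit these parameters from prior heatwaves, as the wildfire-smoke prototypes do.

```python
def excess_heat_deaths(daily_max_temps_c: list[float],
                       baseline_daily_deaths: float,
                       threshold_c: float = 35.0,          # hypothetical local heat threshold
                       pct_increase_per_deg: float = 0.03  # placeholder exposure-response slope
                       ) -> float:
    """Rough excess-mortality estimate for one county over a heat event.
    Each degree above the threshold is assumed to raise daily mortality by
    pct_increase_per_deg; both values are illustrative, not fitted."""
    excess = 0.0
    for t in daily_max_temps_c:
        if t > threshold_c:
            excess += baseline_daily_deaths * pct_increase_per_deg * (t - threshold_c)
    return excess

# Example: a 5-day event peaking at 42C in a county with 20 baseline deaths/day -> ~12 excess deaths.
print(round(excess_heat_deaths([36, 39, 42, 41, 37], baseline_daily_deaths=20.0), 1))
```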
To facilitate data sharing that might be necessary to develop some of the above tools, we envision a standardized national heat disaster framework modeled after the NIH Data Management and Sharing (DMS) policy. By establishing consistent definitions and data collection methods across health, infrastructure, and socioeconomic sectors, this approach would create a foundation for reliable, cross-sectoral coordination and evidence-based interventions. Open and timely access to data would empower decision-makers at all levels of government, while ethical protections—such as informed consent, data anonymization, and compliance with HIPAA and GDPR—would safeguard individual privacy. Prioritizing community engagement ensures that data collection reflects lived experiences and disparities, ultimately driving equitable, climate-resilient policies to reduce the disproportionate burden of heat disasters.
While HUD or any other agency could lead the collaboration, much of the National Integrated Heat Health Information System (NIHHIS) partnership (HUD is a participant) is already set up to support data-sharing and new tools. NIHHIS is a partner network between many federal agencies and therefore has already started the difficult work of cross-agency collaboration. Existing partnerships and tools can be leveraged to rapidly provide needed information and collaboration, especially to develop real-time quantification of heat-event impacts that would facilitate declaration of heat disasters. Shifting agency priorities have reduced NIHHIS partnerships recently; these should be strengthened, potentially through Congressional action.
Recommendation 2. Incorporate heat mitigation throughout HUD programs
Because housing can play such an important role in heat health (e.g., almost all mortality from the 2021 heat dome in British Columbia occurred in the home; most of Maricopa County's heat mortality is either among the unhoused or in the home), HUD's extensive programs are in a strong position to protect health and life safety during extreme heat. Spurring resident protection could include gentle behavioral nudges to grant recipients, such as publishing guidance on regionally tailored heat protections for both new construction and retrofits. Because using CDBG funds for extreme heat is uncommon, HUD should publish guidance on how to align heat-related projects with CDBG requirements or how to incorporate heat mitigation into projects with a different focus. In particular, it would be important to provide guidance on how heat-related activities meet National Objectives, as required by authorizing legislation.
HUD could also take a more active role, such as incentivizing or requiring heat-ready housing across its other programs, or even setting aside specific amounts of funds for this hazard. Because the active provision of funding would be facilitated by heat disaster declarations, until such declarations occur the facilitation guides suggested above are likely the best course of action.
HUD also has a role outside of disaster-related programs. For example, current HUD policy requires residents of Public Housing Agency (PHA) managed buildings to request funding relief to avoid surcharges from heavy use of air conditioning during heat waves; policy could be changed so that HUD proactively initiates that relief. In 2024, Principal Deputy Assistant Secretary Richard Monocchio sent a note encouraging broad thinking to support residents through extreme heat, and such encouragement can be reinforced with agency action. While this surcharge might seem minor, the ability to run air conditioning is key for protecting health: many indoor heat deaths from Arizona to British Columbia occurred in homes that had air conditioning that was not turned on.
Recommendation 3. HUD Allocation Formula: Inclusion of Vulnerability Variables
When HUD is able to launch programs focused on extreme heat, likely only following an officially declared heat disaster, HUD allocation formulas should take heat-specific variables into account. These could include areas where heat mortality was highest or, to enhance mitigation impact, areas with higher concentrations of at-risk individuals (older adults, children, individuals with chronic illness, pregnant people, low-income households, communities of color, individuals experiencing houselessness, and outdoor workers) and at-risk infrastructure (older buildings, mobile homes, heat islands). By integrating heat-related vulnerability indicators in allocation formulas, HUD would make the biggest impact on the heat hazard. A sketch of one possible weighting appears below.
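The following sketch shows one way such a vulnerability adjustment could enter an allocation formula. The variables and weights are illustrative placeholders, not HUD's actual formula, which (as noted above) relies on FEMA and SBA damage data.

```python
def heat_allocation_weight(unmet_need_usd: float,
                           share_at_risk_population: float,
                           share_at_risk_housing: float,
                           w_pop: float = 0.5, w_housing: float = 0.5) -> float:
    """Scale a jurisdiction's unmet-need dollars by a heat-vulnerability
    multiplier built from population and housing-stock risk shares.
    All weights here are illustrative placeholders."""
    vulnerability = 1.0 + w_pop * share_at_risk_population + w_housing * share_at_risk_housing
    return unmet_need_usd * vulnerability

# Example: $10M unmet need, 40% at-risk residents, 30% at-risk housing -> $13.5M weighted need.
print(heat_allocation_weight(10e6, 0.40, 0.30))
```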
Conclusion
Extreme heat is one of the most damaging and economically disruptive threats in the United States, yet it remains insufficiently recognized in federal disaster frameworks. HUD is well positioned to make the biggest impact on heat because housing is a key determinant of heat mortality. However, strong intervention across HUD and other agencies is held back by the lack of federal disaster declarations for heat. HUD can work with its partner agencies to address this and other gaps, and thereby protect public health and safety.