Analytical Literacy First: A Prerequisite for AI, Data, and Digital Fluency
As digital technologies reshape every aspect of society, students must be proficient not only in specialized literacies (such as digital literacy, data literacy, and AI literacy) but also equipped with a foundational skill set that allows them to think critically, reason logically, and solve problems effectively. Analytical literacy is the scaffolding upon which more specialized literacies are built. Students in the 21st century need strong critical thinking skills, including reasoning, questioning, and problem-solving, before they can meaningfully engage with more advanced domains like digital, data, or AI literacy. Without these skills, students may struggle to engage critically with the technologies shaping their lives. We urge education leaders at the federal, state, and institutional levels to prioritize the development of analytical literacy by incentivizing integration across disciplines, aligning standards, and investing in research and professional development.
Introduction
As society becomes increasingly shaped by digital technologies, data-driven decision-making, and artificial intelligence, the ability to think analytically is no longer optional; it is essential. While digital, data, and AI literacies focus on domain-specific skills, analytical literacy enables students to engage with these domains critically and ethically. Analytical literacy encompasses critical thinking, logical reasoning, and problem-solving, and it equips students to interpret complex information, evaluate claims, and make informed decisions. These skills are foundational not only for academic success but also for civic engagement and workforce readiness in the 21st century.
Despite its importance, analytical literacy remains unevenly emphasized in K–12 education. These disparities are often driven by systemic inequities in school funding, infrastructure, and access to qualified educators. According to NCES’s Education Across America report, rural schools and those in under-resourced communities frequently lack the professional development opportunities, instructional materials, and technology needed to support analytical skill-building. In contrast, urban and well-funded districts are more likely to offer inquiry-based curricula, interdisciplinary projects, and formative assessment tools that foster deep thinking. Additionally, while some schools integrate analytical thinking through inquiry-based learning, project-based instruction, or interdisciplinary STEM curricula, there is no consistent national framework guiding its development at this time. Instructional strategies vary widely by state or district, and standardized assessments often prioritize procedural fluency over deeper cognitive engagement like analytical reasoning.
Recent research underscores the urgency of this issue. A 2024 literature review from the Center for Assessment highlights analytical thinking as a core competency for future success, noting its role in supporting other 21st-century skills such as creativity, collaboration, and digital fluency. Similarly, a systematic review published in the International Journal of STEM Education emphasizes the need for early engagement with analytical and statistical thinking to prepare students for a data-rich society.
There is growing consensus among educators, researchers, and policy advocates that analytical literacy deserves a more central role in K–12 education. Organizations such as NWEA and Code.org have called for stronger integration of analytical and data literacy skills into curriculum and professional development efforts. However, without coordinated policy action, these efforts remain fragmented.
This memo builds on that emerging momentum. It argues that analytical literacy should be treated as a skill that underpins students’ ability to engage meaningfully with digital, data, and AI literacies. By elevating analytical literacy through standards, instruction, and investment, we can ensure that all students are prepared to participate, innovate, and thrive in a complex and rapidly changing world.
To understand why analytical literacy must be prioritized, we examine the current landscape of specialized literacies and the foundational skills they require.
Challenges and Opportunities
In today’s interconnected world, digital literacy, data literacy, and AI literacy are no longer optional; they are essential skill sets for civic participation, economic mobility, and ethical decision-making. These literacies enable students to navigate online environments, interpret complex datasets, and engage thoughtfully with emerging technologies.
- Digital literacy encompasses the ability to use technology effectively and critically, including evaluating online information, understanding digital safety, and engaging ethically in digital environments.
- Data literacy involves the capacity to understand, interpret, evaluate, and communicate data. This includes recognizing data sources, identifying patterns, and drawing informed conclusions.
- AI literacy entails understanding the basic concepts of artificial intelligence, its applications, ethical implications, and how to interact with AI systems responsibly.
Together, these literacies form a cognitive toolkit that empowers students to be not just consumers of information and technology, but thoughtful participants in civic and digital life.
While these literacies address specific domains, they all fundamentally rely on what should be called Analytical Literacy. Analytical literacy, at its core, involves the ability to:
- Ask insightful questions: identifying the core issues and seeking relevant information.
- Evaluate information critically: assessing the credibility, bias, and relevance of sources.
- Identify patterns and relationships: recognizing connections and trends in complex information.
- Reason logically: constructing sound arguments and drawing valid inferences.
- Solve problems effectively: applying analytical skills to find solutions and make informed decisions.
Yet, without structured development of these foundational skills, students risk becoming passive consumers of technology rather than active, informed participants. This presents an urgent opportunity: by centering Analytical Literacy in standards and assessment, instruction, and professional learning, we can create enduring pathways for students to participate, innovate, and thrive in an increasingly data-driven world.
Examples of implementation include:
- In Standards and Assessment. States should revise academic standards to include grade-level expectations for analytical reasoning across disciplines. For example, middle school science standards might require students to construct evidence-based arguments using data, while high school civics assessments could include open-ended questions that ask students to evaluate competing claims in news media.
- In Instruction. Teachers should embed analytical skill development into daily practice through inquiry-based learning, Socratic seminars, or interdisciplinary projects. A math teacher could guide students in analyzing real-world datasets to identify trends and make predictions, while an English teacher might use argument mapping to help students deconstruct persuasive texts.
- In Professional Learning. Districts should offer workshops that train educators to use formative assessment strategies that surface student reasoning, such as think-alouds, peer critiques, or performance tasks. Coaching cycles should focus on how to scaffold questioning techniques that push students beyond recall toward deeper analysis.
By embedding these practices systemically, we move from episodic exposure to analytical thinking toward a coherent, equitable framework that prepares all students for the demands of the digital age.
Addressing these gaps requires coordinated action across multiple levels of the education system. The following plan outlines targeted strategies for federal, state, and institutional leaders.
Plan of Action
To strengthen analytical literacy in K–12 education, we recommend targeted efforts from three federal offices, supported by state agencies, educational organizations, and teacher preparation programs.
Recommendation 1. Federal Offices
Federal agencies have the capacity to set national priorities, fund innovation, and coordinate cross-sector efforts. Their leadership is essential to catalyzing systemic change. For example:
White House Office of Science and Technology Policy (OSTP)
OSTP now chairs the newly established White House Task Force on Artificial Intelligence Education, per the April 2025 Executive Order on Advancing AI Education. This task force is charged with coordinating federal efforts to promote AI literacy and proficiency across the K–12 continuum. We recommend that OSTP:
- Expand the scope of the Task Force to explicitly include analytical literacy as a foundational competency for AI readiness.
- Ensure that public-private partnerships and instructional resources developed under the order emphasize reasoned decision-making as a core component, not just technical fluency.
- Use the Presidential Artificial Intelligence Challenge as a platform to showcase interdisciplinary student work that demonstrates analytical thinking applied to real-world AI problems.
This alignment would ensure that analytical literacy is not treated as an adjacent concern, but as a central pillar of the federal AI education strategy.
Institute of Education Sciences (IES)
IES should coordinate closely with the Task Force to support the Executive Order’s goals through a National Analytical Literacy Research Agenda. This agenda could:
- Fund studies that explore how analytical thinking supports AI literacy across grade levels.
- Evaluate the effectiveness of instructional models that integrate analytical reasoning into AI and computer science curricula.
- Develop scalable tools and assessments that measure students’ analytical readiness for AI-related learning pathways.
IES could also serve as a technical advisor to the Task Force, ensuring that its initiatives are grounded in evidence-based practice.
Office of Elementary and Secondary Education (OESE)
In light of the Executive Order’s directive for educator training and curriculum innovation, OESE should:
- Prioritize analytical literacy integration in discretionary grant programs that support AI education.
- Develop guidance for states on embedding analytical competencies into AI-related standards and instructional frameworks.
- Collaborate with the Task Force to ensure that professional development efforts include training on how to teach analytical thinking—not just how to use AI tools.
National Science Foundation (NSF)
The National Science Foundation plays a pivotal role in advancing STEM education through research, innovation, and capacity-building. To support the goals of the Executive Order and strengthen analytical literacy as a foundation for AI readiness, we recommend that NSF:
- Establish a dedicated grant program focused on developing and scaling instructional models that integrate analytical literacy into STEM and AI education. This could include interdisciplinary curricula, project-based learning frameworks, and performance-based assessments that emphasize reasoning, problem-solving, and data interpretation.
- Fund research-practice partnerships that explore how analytical thinking develops across grade levels and how it supports students’ engagement with AI concepts. These partnerships could include school districts, universities, and professional organizations working collaboratively to design and evaluate scalable models.
- Support educator capacity-building initiatives, such as fellowships or professional learning networks, that equip teachers to foster analytical literacy in STEM classrooms. This aligns with NSF’s recent Dear Colleague Letters on expanding K–12 resources for AI education.
- Invest in technology-enhanced learning tools that provide real-time feedback on student reasoning and support formative assessment of analytical skills. These tools could be piloted in diverse school settings to ensure equity and scalability.
By positioning analytical literacy as a research and innovation priority, NSF can help ensure that K–12 students are not only technically proficient but cognitively prepared to engage with emerging technologies in thoughtful, ethical, and creative ways.
Note: Given the evolving organizational landscape within the U.S. Department of Education—including the elimination of offices such as the Office of Educational Technology—it is critical to identify stable federal anchors. The agencies named above have longstanding mandates tied to research, policy innovation, and K–12 support, making them well-positioned to advance this work.
Recommendation 2. State Education Policymakers
While federal agencies can provide vision and resources, states hold the levers of implementation. Their role is critical in translating policy into classroom practice.
Each state has the authority—and responsibility—to shape standards, assessments, and professional development systems that reflect local priorities and student needs. To advance analytical literacy meaningfully, we recommend the following actions:
Elevate Analytical Literacy in Academic Standards
States should conduct curriculum audits to identify where analytical skills are currently embedded—and where gaps exist. This process should inform the revision of academic standards across disciplines, ensuring that analytical literacy is treated as a foundational competency, not an ancillary skill. California’s ELA/ELD Framework, for example, emphasizes inquiry, argumentation, and evidence-based reasoning across subjects—not just in English language arts. Similarly, the History–Social Science Framework promotes critical thinking and source evaluation as core civic skills.
States can build on these models by:
- Developing cross-disciplinary analytical literacy frameworks that guide integration from elementary through high school.
- Embedding analytical competencies into STEM, humanities, and career technical education standards.
- Aligning revisions with the goals of the Executive Order, which calls for foundational skill-building to support digital and AI literacy.
Invest in Professional Development and Instructional Capacity
States should fund and scale professional learning ecosystems that equip educators to teach analytical thinking explicitly. This includes:
- Training on inquiry-based learning, Socratic dialogue, and formative assessment strategies that surface student reasoning.
- Development of microcredential pathways for educators to demonstrate expertise in fostering analytical literacy across content areas.
- Support for instructional coaches and teacher leaders to model analytical practices and mentor peers.
California’s professional learning modules aligned to the Common Core State Standards and ELA/ELD frameworks offer a useful starting point for designing scalable, standards-aligned training.
Redesign Student Assessments to Capture Deeper Thinking
States should move beyond traditional standardized tests and invest in assessment systems that measure analytical reasoning authentically. States can catalyze this innovation by issuing targeted Requests for Proposals (RFPs) that invite districts, assessment developers, and research-practice partnerships to design and pilot new models of assessment aligned to analytical literacy. These RFPs should prioritize:
- Performance tasks that require students to analyze real-world problems and propose solutions.
- Portfolio assessments that document students’ growth in reasoning and problem-solving over time.
- Open-ended questions that ask students to evaluate claims, synthesize evidence, and construct logical arguments.
- Scalable models that can inform statewide systems over time.
By using the RFP process strategically, states can surface promising practices, support local innovation, and build a portfolio of assessment approaches that reflect the complexity of students’ analytical capabilities.
Recommendation 3. Professional Education Organizations
Beyond government, professional education organizations shape the field through resources, advocacy, and collaboration. They are key partners in scaling analytical literacy.
Professional education organizations play a vital role in shaping the landscape of K–12 education. These groups—ranging from subject-specific associations like the National Council of Teachers of English (NCTE) and the National Science Teaching Association (NSTA), to broader coalitions like ASCD and the National Education Association (NEA)—serve as hubs for professional learning, policy advocacy, resource development, and field-wide collaboration. They influence classroom practice, inform state and federal policy, and support educators through research-based guidance and community-building.
Because these organizations operate at the intersection of practice, policy, and research, they are uniquely positioned to champion analytical literacy as a foundational skill across disciplines. To advance this work, we recommend the following actions:
- Develop Flexible, Discipline-Specific Resources. Create adaptable instructional materials—such as lesson plans, assessment templates, and classroom protocols—that help educators integrate analytical thinking into diverse subject areas. For example, NCTE could develop resources that support argument mapping in English classrooms, while NSTA might offer tools for teaching evidence-based reasoning in science labs.
- Advocate for Analytical Literacy as a National Priority. Publish position papers, host public events, and build strategic partnerships that elevate analytical literacy as essential to digital and civic readiness. Organizations can align their advocacy with the federal directive for AI education, emphasizing the role of analytical thinking in preparing students for ethical and informed engagement with emerging technologies.
- Foster Cross-Sector Collaboration. Convene working groups, research-practice partnerships, and educator networks to share best practices and scale effective models. For example, AERA could facilitate studies on how analytical literacy develops across grade levels, while CoSN might explore how digital tools can support real-time feedback on student reasoning.
By leveraging their convening power, subject-matter expertise, and national reach, professional education organizations can accelerate the adoption of analytical literacy and ensure it is embedded meaningfully into the fabric of K–12 education.
Recommendation 4. Teacher Preparation Programs
To sustain long-term change, we must begin with those entering the profession. Teacher preparation programs are the foundation for instructional capacity and must evolve to meet this moment.
Teacher preparation programs (TPPs) are the gateway to the teaching profession. Housed in colleges, universities, and alternative certification pathways, these programs are responsible for equipping future educators with the knowledge, skills, and dispositions needed to support student learning. Their influence is profound: research consistently shows that well-prepared teachers are the most important in-school factor for student success.
Yet many TPPs face persistent challenges. Too often, graduates report feeling underprepared for the realities of diverse, data-rich classrooms. Coursework may emphasize theory over practice, and clinical experiences vary widely in quality. Critically, few programs offer explicit training in how to foster analytical literacy—despite its centrality to digital, data, and AI readiness. In response to national calls for foundational skill-building and educator capacity, TPPs must evolve to meet this moment.
While federal funding for teacher preparation has become more limited, states are stepping in through innovative models like teacher residencies, registered apprenticeships, and microcredentialing pathways. These initiatives are often supported by modified use of Title II funds, state general funds, and workforce development grants. To accelerate this momentum, federal programs like Teacher Quality Partnership (TQP) grants and Supporting Effective Educator Development (SEED) grants could be adapted to prioritize analytical literacy, while states can issue targeted RFPs to redesign coursework, practicum experiences, and capstone projects that center reasoning, problem-solving, and ethical decision-making. To ensure that new teachers are ready to cultivate analytical thinking in their students, we recommend the following actions:
- Integrate Analytical Pedagogy into Coursework and Practicum. Embed instructional strategies that center analytical literacy into pre-service coursework. This includes training in inquiry-based learning, argumentation, and data interpretation. Practicum experiences should reinforce these strategies through guided observation and practice in real classrooms.
- Ensure Faculty Model Analytical Thinking. Faculty must demonstrate analytical reasoning in their own teaching—whether through modeling how to deconstruct complex texts, facilitating structured debates, or using data to inform instructional decisions. This modeling helps pre-service teachers internalize analytical habits of mind.
- Strengthen Field Placements for Analytical Instruction. Partner with districts to place candidates in classrooms where analytical literacy is actively taught. Provide structured mentorship from veteran teachers who use questioning techniques, performance tasks, and formative assessments to surface student reasoning.
- Develop Capstone Projects Focused on Analytical Literacy. Require candidates to complete a culminating project that demonstrates their ability to design, implement, and assess instruction that builds students’ analytical skills. These projects could be aligned with state standards and local district priorities.
- Align Program Outcomes with Emerging Policy Priorities. Ensure that program goals reflect the competencies outlined in federal initiatives like the AI Education Executive Order. This includes preparing teachers to support foundational literacies that enable students to engage critically with digital and AI technologies.
Together, these actions form a coherent strategy for embedding analytical literacy across the K–12 continuum. But success depends on bold leadership and sustained commitment. By reimagining teacher preparation through the lens of analytical literacy, we can ensure that every new educator enters the classroom equipped to foster deep thinking, ethical reasoning, and problem-solving—skills that students need to thrive in a complex and rapidly changing world.
Conclusion
Analytical literacy is not a nice-to-have; it is a prerequisite for the specialized proficiencies students need in today’s complex world. By embedding critical thinking, logical reasoning, and problem-solving across the K–12 continuum, we empower students to meet challenges with curiosity and discernment. We urge policymakers, educators, and institutions to act boldly by establishing analytical literacy as a cornerstone of 21st-century education and by co-creating a future where every student has the analytical tools essential for meaningful participation, innovative thinking, and long-term success in the digital age and beyond.
Improving Standardized Test Score Reporting and Administration for Students, Caregivers, and Educators
Currently, standardized testing is a necessary but often time-consuming process used to measure educational progress in order to improve educational outcomes and curricula; however, the immediate consumers of standardized tests are educators, students, and their caregivers, who typically do not receive detailed information in exchange for the time spent studying for and taking these exams. This brief proposes reforming standardized test score reporting to improve achievement level labeling with more strengths-based language and to provide actionable feedback and personalized resources. It also proposes actionable steps to achieve this by (1) increasing the number of test administrations to create more opportunities for progress monitoring and adjustment before the end of the school year and (2) providing educators with detailed information in the form of dashboards and ready-to-use resources.
Introduction
Standardized tests are ubiquitous in K-12 education in the United States. In fact, the average student spends 20-25 hours or more during the school year just taking standardized tests. In addition, the scores from these tests have high consequences for students and their educators, including promotion to the next grade or measurement of teacher quality. However, the data shows that test scores have largely stagnated since the 2002 passage of No Child Left Behind, which introduced the nationwide requirement to implement high-stakes testing.
Due to the high-stakes nature of these assessments, many educators feel that they have to adjust their curriculum and instruction to best suit the material they believe will be on the test. This is sometimes to the detriment of teaching other skills or performing activities that might be more cognitively challenging or engaging. There is also limited information about the specifics of what is on the test, due to the proprietary nature of the questions and tasks on these exams, and many educators do not feel confident that they have thoroughly taught all of the material on the accountability exam. Finally, parents and guardians also have difficulty understanding score reports. Regrettably, test results are often delivered between school years, after any potential tutoring or support could be provided.
To address these challenges, we need to overhaul the standardized testing and score reporting system so that it is more accessible to all of the end users of standardized tests: educators, students, and their families. This is especially important at a time when more universities are becoming test-optional and more families are choosing to opt their K-12 students out of summative standardized testing. These trends pose a potential existential threat to testing publishers and make it all the more necessary that the system adjust to the needs of the public.
Improving Standardized Testing Score Reporting For Students and Guardians
The group most directly affected by standardized test score reporting is students and their caregivers. Most families receive score reports that provide only very high-level information about students’ performance, such as their overall test score relative to other students in their state and district. Some score reports provide slightly more information, but they are typically not specific to the individual student’s strengths and areas for growth, nor do they provide actionable feedback or resources for how the student can improve low-scoring sections or extend learning in areas of strength. Additionally, these reports do not include accompanying information, so it is very difficult for a parent or guardian to extrapolate from the score report specific areas in which to support or extend their child’s learning.
Standardized test reports can also damage students’ academic identity and self-esteem. Many of these score reports use labels to describe achievement levels, and more care should be taken with this language. Results are sometimes described with labels like “below proficient,” which have been found to be damaging to students’ self-perception of their academic performance and ability to improve. Even slight changes to labels, such as the inclusion of the word ‘yet’ in “not yet meeting expectations,” were found to be more encouraging than deficit-based labeling. Finally, many of these reports are not designed to be accessible for a wide range of disabilities or non-English languages, nor do they explain where the scores originated or how to apply them, which limits the number of students and caregivers who can access these reports or use their information to improve educational outcomes.
The solution for students is to redesign score reports so that they are more actionable and use positively framed achievement labels. Scoring should especially highlight what the student does well and frame the areas where the student needs support using growth-mindset language rather than deficit-based language. Additionally, these reports should provide resources and recommendations to remediate areas that still need improvement and to extend learning in domains the student has already mastered.
Improving Score Reporting and Data Analysis For Educators
Educators are the second group affected by standardized test score reporting. Depending on the state or district, educators typically receive a general summative report about their incoming students’ performance on the previous school year’s standardized testing, as well as a report about their prior students’ performance, especially as it applies to measures of teacher quality. Depending on the state or district, this report tends to have slightly more granular information than the student-facing reports. However, it still does not provide detailed information on the specific skills where each student needs additional support. Moreover, even if educators receive the detailed standards or objectives that each student missed on the previous year’s exam, they do not receive specific guidance on how to use that report to fold remediation of those skills into their current curriculum. There is also frequently no time in the school year to remediate the skills that still need to be learned from the previous year, nor planning time for educators to adjust current grade-level curricula to allow for robust remediation. Finally, when educators receive these reports over the summer or in the early fall, it is far too late to adjust their instruction and support their learners during the school year. Ideally, teachers would have interim progress reports during the school year so that they could address existing gaps and support students while they are still teaching.
To support educators, the score reporting process needs an additional component that helps educators translate score reports into actionable pedagogy that blends with current grade-level curricula. This should include diverse programs of support for different patterns of skills and types of students. Additionally, there should be a mid-year process for collecting and reporting data so that educators can support struggling students before the accountability test at the end of the year. This could mean shifting the testing system so that the summative assessment is offered in the middle of the school year, or distributing the typical 2-3 days of end-of-year assessment throughout the school year so as not to increase the total number of days spent testing.
Areas for Improvement
A three-pronged approach would greatly improve the testing system. The first prong is to create more in-depth, actionable score reports. The second is to create skill-based dashboards that teachers can access to support remediation in real time. The third is a longer-term plan to distribute accountability testing across the school year so that there are more opportunities to catch struggling students before the school year is over. Taken together, this approach provides a starting point for improving the testing system for educators, students, and their communities.
More Actionable Score Reporting That Includes Resources
The first and most easily addressed issue is to create an improved score reporting system that is more detailed and actionable for students and their caregivers. Instead of just a scale score and the overall achievement level the student has attained, a revamped report would include more detailed feedback about the student’s strengths and areas for improvement. For example, these reports could include links or attachments to additional open educational resources recommended by the state or district to help improve those skills. These reports should also include information about what the student does well and provide resources or recommendations for how to extend or further develop those skills.
To create enhanced score reports, there needs to be a larger consensus about a score report’s basic guidelines. The Standards for Educational and Psychological Testing can assist with basic score reports; however, these recommendations do not include any information about using score reporting as a method for supporting student learning or fully capturing student learning. As part of this process, the larger organizational, state, and federal educational regulatory bodies must decide on a set of guidelines for score reporting. As a starting point, groups like State and District-Level Education Associations, as well as the Association of Test Publishers, could develop resources describing best practices.
One potential method for funding these changes would be for states interested in overhauling their systems to apply for an Innovative Assessment Demonstration Authority grant; however, this program might need adjustment to fund the proposed policy changes. Additionally, standardized testing represents a large expenditure for each state. Each state’s Department of Education could adopt a “Pay for Success” approach in which the state or a large district sets outcome measures for how it would like its testing program to become more usable for students and educators and then pays on its procurement contract only if those measures are met.
More Checkpoints, Fewer Stakes
More radically, the accountability system needs to shift away from one in which high-stakes decisions about student ability and teacher quality come down to a single test, administered once a year, that is not directly related to the students’ context or immediate knowledge. There are a variety of reasons why that one test might not reflect students’ true ability, ranging from illness or anxiety to disabilities that are not compatible with one-time assessment. Instead, smaller, lower-stakes assessments should be offered more frequently throughout the school year so that misconceptions and gaps in knowledge can be addressed more effectively and responsively. One model piloted in Louisiana offered three smaller exams across the school year that were aligned to the English Language Arts (ELA) curriculum instead of asking students to read unrelated, decontextualized passages. This approach is fairer because it reduces the impact of students’ background knowledge and is a truer measure of students’ learning from that year’s curriculum. Additionally, many schools have switched to the Star assessment system, a computer-adaptive test that can be administered multiple times throughout the year to support frequent checks on student learning. The Star assessment system also provides more detailed progress monitoring than a single summative assessment.
Alternatively, instead of standardized assessment, Performance-Based Learning is another way to capture what students have learned throughout the whole school year. This model uses project-based assessments that culminate in a summative portfolio review, which determines graduation or retention criteria instead of relying on external standardized assessments. This system is lower stakes for students because it gives them the whole school year to demonstrate their knowledge and mastery of the curricular standards. It also provides far more agency for students and educators to scaffold and support students in demonstrating their knowledge in more diverse ways. The largest difficulty with this approach would be reaching state and nationwide consensus on what these systems look like and how to ensure consistency across grade-level projects and accountability requirements.
Dashboards for Teachers
In addition to improved score reporting, educators should receive a more detailed breakdown of skills and their current students’ progress on each. This report should also be tied to the current curriculum that the teacher is using, so that the system can recommend lessons and materials to remediate the skills students struggle with and to provide extensions in areas where students are already proficient. One example found to work is the use of the ASSISTments platform in Maine, which increased math performance by tying specific content to targeted student homework practice. Classroom time is precious, and knowing which specific lessons are needed to best support growth and achievement is paramount to improving student learning. The decision about what to include in the dashboard would initially be left to the discretion of curriculum and test makers; however, this too would likely need to be informed by best practices.
Conclusion
The current state of the field for standardized testing is very unidirectional: students take summative assessments, and these scores are used to make judgments about students, teachers, and funding for their schools. Despite the large amount of classroom time spent taking the exams, not including all of the test preparation that goes into sitting for them, there is very little direct benefit to teachers and students from these assessments. For example, student scores have not improved nationally on the “Nation’s Report Card” in over a decade, despite increased nationwide testing and accountability. Additionally, testing is experienced as an extremely stressful period with little immediate positive benefit for students. Making the score reporting system more of a “two-way street,” in which students, educators, and their families can glean actionable information about how to support student success, would make the process far more useful and would support student achievement, especially for students with disabilities and students who are approaching grade-level proficiency.
The Massachusetts Consortium for Innovative Education Assessment (MCIEA) is an example of many of the features discussed above. Many schools and districts within the state of Massachusetts have agreed to use performance tasks throughout the school year as a more robust measure of student learning. MCIEA also has an overview dashboard that measures school quality, not just through academic achievement. The dashboard considers school culture, access to resources, and student and community wellbeing. While MCIEA does not address student-specific feedback, it does provide alternative methods of measuring school quality. It is also an example of dashboards being used to disseminate school quality information to the larger public.
The work done at ERB offers one example of how some of the recommendations above can function in practice. This company is an assessment provider; however, it provides specific reports to school leadership, to teachers and students, and to their families. Each of these reports is tailored to the needs of its audience, and the ERB team also facilitates webinars and other resources to help all groups understand the test scores and how best to use them to improve student learning and outcomes.
Moving Federal Postsecondary Education Data to the States
Moving postsecondary education data collection to the states is the best way to ensure that the U.S. Department of Education can meet its legislative mandates in an era of constrained federal resources. Students, families, policymakers, and businesses need this data to make decisions about investments in education, but cuts to the federal government make it difficult to collect. The Commissioner of the National Center for Education Statistics should use their authority to establish state cooperative groups to collect and submit data from the postsecondary institutions in each state to the federal government, like the way that K12 schools report to the U.S. Department of Education (ED). With funding from the State Longitudinal Data System grant program and quality measures like the Common Education Data Standards, this new data reporting model will give more power to states, improve trust in education data, and make it easier for everyone to use the data.
Challenge and Opportunity
The Integrated Postsecondary Education Data System (IPEDS) was hit hard by staffing and contract cuts at the U.S. Department of Education in early 2025. Without the staff to collect and clean the data, or the contractors to run the websites and reports, this is the first time in its decades-long history that IPEDS may not be available to the public next year. IPEDS is a vast data collection, including information on grants and scholarships, tuition prices, graduation rates, and staffing levels. This has serious implications for students and their families choosing colleges, as well as for policymakers who want to ensure that these colleges graduate students on time, for businesses who want to find trained workers, and for everyone who cares about educating tomorrow’s citizens. Not to mention that these data are required by law under the Higher Education Act and the Civil Rights Act of 1964, among others.
Moving IPEDS data collection to the states is the best way to ensure that the data continue to be collected and released. States already play a large role in collecting data on elementary and secondary education, a model that could work for postsecondary data like IPEDS.
Why do we collect K12 data through states but not postsecondary data? K12 data systems are substantially different from postsecondary ones due to federal legislation. No Child Left Behind catalyzed the expansion of K12 data infrastructure, requiring regular reporting on student test scores, disaggregated achievement data for student groups, and information about teacher qualifications. Though the accountability measures attached to these data were controversial, the reporting processes they catalyzed vastly surpassed those in the postsecondary data system, which was built piecemeal over decades.
In K12 data systems, local education agencies report data to state education agencies, which report to the National Center for Education Statistics (NCES). Reviews at each step in this process ensure that data are high quality and made available quickly for analysis. In postsecondary data, thousands of institutions individually report to NCES, which takes months to review and release the data for the whole country. In some states, institutions do report through a state coordinator; Maryland, for example, has one reporter for all public postsecondary institutions and one for all private institutions. The role of state coordinators varies widely across states. Using the state reporting model is an opportunity to further streamline this process.
Reporting postsecondary data at the state level has another benefit: it gives states control over future student-level data reporting. That is because states, in addition to fulfilling reporting requirements, also collect student-level data from K12 systems that can be linked to students’ postsecondary and workforce outcomes over time. These statewide longitudinal systems (SLDS) were supported through a federal grant program that began in 2006, and many of the measures collected are required for federal K12 reporting. Some SLDSs contain postsecondary measures like tuition and graduation rates, which are also collected by IPEDS. Though IPEDS does not require student-level data, advocates have been pushing for such data for several years. A student unit record system was proposed in the College Transparency Act. Moving IPEDS data collection to the states will help states develop the systems necessary to implement future student-level data collection in postsecondary education if this or similar legislation passes Congress.
Plan of Action
The NCES Commissioner should establish state-level groups to collect and submit IPEDS data. Instead of receiving thousands of individual reports from postsecondary institutions, NCES would receive 59, one from each state plus Washington D.C. and the territories that already report to IPEDS. For states that need support to manage this reporting process, ED could provide funding through an existing grant program. The IPEDS data definitions and reporting requirements would not change, but they could be improved through integration into other data standards.
This plan has several advantageous outcomes. First, IES would be able to meet its data collection and reporting requirements despite limited staff and funding, increasing efficiency and saving taxpayer dollars. Second, states would have access to their data more quickly, minimizing pressure on IES to release data on shorter timelines. This allows data users to work with more current information and gives states the power to conduct their own analyses.
Step 1. The U.S. Department of Education can use its authority to establish state cooperatives to move IPEDS data collection to the states
Under 20 U.S.C. §9547, the NCES Commissioner has the authority to set up cooperatives to produce education statistics. These cooperatives could serve as the governing body and fiscal agent for collecting and submitting data from each state to the federal government. In states that already have an IPEDS state coordinator, the cooperative can build on existing processes for collecting, reviewing, and submitting data. The cooperatives should also involve state higher education executive officers; representatives from public, non-profit, and private postsecondary institutions; and experts in data systems and institutional research.
NCES should publish a charter that states can adopt as they organize their cooperatives. Multi-state education data groups, like the Multi-State Data Collaborative, have developed charters that could be used as a starting point. The sample charter should encourage the development of federated data systems, one model that has been successful in K12 data collection. Federated data systems, as opposed to centralized ones, operate on agreements to link and share data upon request, after which the linked data are destroyed. This model offers stronger protections for data privacy and can be established quickly.
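To make the federated model concrete, the following is a minimal, purely illustrative sketch in Python. The dataset names, fields, and matching logic are assumptions for illustration only; they do not represent any existing state system or NCES specification. The sketch shows the core idea: each state keeps its own records, only hashed identifiers and a requested aggregate leave the state systems, and the linked records are discarded once the statistic is computed rather than retained in a central warehouse.

```python
import hashlib

def hash_id(student_id: str) -> str:
    """One-way hash so raw identifiers never leave the state system."""
    return hashlib.sha256(student_id.encode()).hexdigest()

# Hypothetical state-held data (field names are illustrative assumptions).
state_a_k12 = {hash_id("1001"): {"graduated_hs": True},
               hash_id("1002"): {"graduated_hs": True}}
state_b_postsec = {hash_id("1001"): {"enrolled_college": True}}

def federated_enrollment_rate(k12: dict, postsec: dict) -> float:
    """Link on hashed IDs, compute one aggregate, then discard the linkage."""
    linked = {h: {**k12[h], **postsec[h]} for h in k12 if h in postsec}
    rate = len(linked) / len(k12)
    del linked  # per the data-sharing agreement, linked records are not retained
    return rate

print(f"College enrollment rate among HS graduates: "
      f"{federated_enrollment_rate(state_a_k12, state_b_postsec):.0%}")
```

In a centralized system, by contrast, the linked student-level records would live permanently in one repository; the federated sketch above illustrates why the requested-linkage model carries a smaller standing privacy risk.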
Step 2. States should commit financial support for data submissions
States will also need financial support to develop or expand data storage systems, pay staff for quality reviews, and support data submissions. Some of this infrastructure already exists through funding from the IES SLDS grant program. Future grant awards could be used to fund the expansion of these systems to include IPEDS data collection and submission by setting priorities in the grant selection process.
In addition, NCES could contract with a technical assistance provider to support state infrastructure development. Something like the data academy offered by the State Higher Education Executive Officers Association (SHEEO) or the institutional research training offered by the Association for Institutional Research (AIR) would be useful to states that need personalized assistance.
Step 3. States should continue to use the data definitions and guidance developed by NCES
To ensure that the data retain the same high standard of quality as the federal IPEDS collection, states should continue to use the data definitions and guidance developed by NCES. Further integrating these definitions with the Common Education Data Standards (CEDS) ensures that states understand and have access to them. CEDS, the voluntary national standard for reporting K12 data, already includes some postsecondary data elements. Incorporating all IPEDS data definitions into CEDS will streamline data standards across K12 and postsecondary education. CEDS also offers recommendations for building out data infrastructure, such as data stores (repositories for multiple databases and file types), which will be helpful for states that need to expand theirs for this effort.
Conclusion
Moving IPEDS data collection to the states is the best way to ensure that NCES meets its legislative mandates in an era of constrained federal resources. This new collection method has other benefits as well. A more decentralized data collection will give more power to states to represent their unique institutions and contexts. By serving as stewards of this data, states will have better access to it, allowing for quicker reporting and analysis. With more access to and control over the data, trust and usage of the data will improve.
Unlike in K12, many states have more than one state-level education authority for postsecondary education. This will require more coordination among state higher education executive officers, state boards of higher education, and other state and regional actors such as accreditors. Private postsecondary institutions would also need to be at the table. The cooperative model provides a structure for bringing these entities together.
The College Transparency Act (CTA) includes a ban on using cooperatives to create a unit record data system, which may affect the use of this authority to create other collaborative systems. This ban is related to the larger debate over student unit record systems. Though IPEDS is not a unit record system, it would still be helpful to review the language in the CTA to ensure that the establishment of cooperatives would not be stymied by this provision in case the CTA is passed.
IPEDS does not currently collect data at the student level. Because there is no individually identifiable data, privacy is not a greater concern under this proposal than it is under the current system for collecting data.
Investing in Young Children Strengthens America’s Global Leadership
Supporting the world’s youngest children is one of the smartest, most effective investments in U.S. strength and soft power. The cancellation of 83 percent of foreign assistance programs in early 2025, coupled with the dismantling of the U.S. Agency for International Development (USAID), not only caused unnecessary suffering for millions of young children in low-income countries but also harmed U.S. security, economic competitiveness, and global leadership. As Congress crafts legislation to administer foreign assistance under a new, America First-focused State Department, it should recognize that renewed attention and support for young children in low-income countries will help meet stated U.S. foreign assistance priorities to make America safer, stronger, and more prosperous. Specifically, Congress should: (1) prioritize funding for programs that promote early childhood development; (2) bolster State Department staffing to administer resources efficiently; and (3) strengthen accountability and transparency of funding.
Challenge and Opportunity
Supporting children’s development through health, nutrition, education, and protection programs helps the U.S. achieve its national security and economic interests, including the Administration’s priorities to make America “safer, stronger, and more prosperous.” Investing in global education, for example, generates economic growth overseas, creating trade opportunities and markets for the U.S. In fact, 11 of America’s top 15 trading partners once received foreign aid. Healthy, educated populations are associated with less conflict and extremism, which reduces pressures on migration. Curbing the spread of infectious diseases like HIV/AIDS and Ebola makes Americans safer from disease both abroad and at home. As a diplomacy tool, providing support for early childhood development, which is a priority in many partner countries, increases U.S. goodwill and influence in these countries and contributes to its geopolitical competitiveness.
Helping young children thrive in low-income countries is a high-return investment in stable economies, skilled workforces, and a stronger America on the world stage. In a July 2025 press release, the State Department recognized how investing in children and families globally contributes to America’s national development and priorities:
Supporting children and families strengthens the foundation of any society. Investing in their protection and well-being is a proven strategy for ensuring American security, solidifying American strength, and increasing American prosperity. When children and families around the world thrive, nations flourish.
The first five years of a child’s life is a period of unprecedented brain development. Investments in early childhood programs – including parent coaching, child care, and quality preschool – yield large and long-term benefits for individuals and society at large, with up to a 13% return on investment, particularly when these interventions are targeted to the most vulnerable and disadvantaged populations. Despite the promise of early childhood interventions, 43% of children under five in low- and middle-income countries are at elevated risk of poor development, leaving them vulnerable to the long-term negative impacts of adversity, such as poverty, malnutrition, illness, and exposure to violence. The costs of inaction are high; countries that underinvest in young children are more likely to have less healthy and educated populations and to struggle with higher unemployment and lower GDPs.
Informed by this powerful evidence, the bipartisan Global Child Thrive Act of 2020 required U.S. Government agencies to develop and implement policies to advance early childhood development – the cognitive, physical, social, and emotional development of children up to age 8 – in partner countries. This legislation supported early childhood development through nutrition, education, health, and water, sanitation, and hygiene interventions. It mandated the U.S. Government Special Advisor for Children in Adversity to lead a coordinated, comprehensive, and effective U.S. government response through international assistance. The bipartisan READ Act complements the Thrive Act by requiring the U.S. to implement an international strategy for basic education, starting with early childhood care and education.
Three examples of USAID-funded early childhood programs terminated in 2025 illustrate how investments in young children not only achieve multiple development and humanitarian goals, but also address U.S. priorities to make America safer, stronger, and more prosperous:
- Cambodia. Southeast Asia is of strategic importance to U.S. security given risks of China’s political and military influence in the region. The Integrated Early Childhood Development activity ($20 million) helped young children (ages 0-2) and their caregivers through improved nutrition, responsive caregiving, agricultural practices, better water, sanitation, and hygiene, and support for children with developmental delays or disabilities. Within a week of cancellation, China filled the USAID vacuum and gained a soft-power advantage by announcing funding for a program to achieve almost identical goals.
- Honduras. Foreign assistance mitigates poverty, instability, and climate shocks that push people to migrate from Central America (and other regions) to the U.S. The Early Childhood Education for Youth Employability activity ($8 million) aimed to improve access to quality early learning for more than 100,000 young children (ages 3-6) while improving the employability and economic security for 25,000 young mothers and fathers, a two-generation approach to address drivers of irregular migration.
- Ethiopia. The U.S. has a long-standing partnership with Ethiopia to increase stability and mitigate violent extremism in the Horn of Africa. Fostering peace and promoting security, in turn, expands markets for American businesses in the region. Through a public-private partnership with the LEGO Foundation, the Childhood Development Activity ($46 million) reached 100,000 children (ages 3-6+) in the first two years of the program with opportunities for play-based learning and psycho-social support for coping with negative effects of conflict and drought.
Drastic funding cuts have jeopardized the wellbeing of vulnerable children worldwide and the “soft power” the U.S. has built through relationships with more than 175 partner countries. In January 2025, the Trump Administration froze all foreign assistance and began to dismantle the USAID, the lead coordinating agency for children’s programs under the Global Child Thrive Act and READ Act. By March 2025, sweeping cuts ended most USAID programs focused on children’s education, health, water and sanitation, nutrition, infectious diseases (malaria, tuberculosis, neglected tropical diseases, and HIV/AIDS), and support for orphans and vulnerable children. In total, the U.S. eliminated around $4 billion in foreign assistance intended for children in the world’s poorest countries. As a result, an estimated 378,000 children have died from preventable illnesses, such as HIV, malaria, and malnutrition.
In July 2025, Congress voted to approve the Administration’s rescission package, which retracts nearly $8 billion of FY25 foreign assistance funding that was appropriated, but not yet spent. This includes support for 6.6 million orphans and vulnerable children (OVC) and $142 million in core funding to UNICEF, the UN agency which helps families in emergencies and vulnerable situations globally. An additional $5 billion of foreign assistance funding expired at the end of the fiscal year while being withheld through a pocket rescission.
As Congress works to reauthorize the State Department, and what remains of USAID, it should recognize that helping young children globally supports both American values and strategic interests.
Recent U.S. spending on international children’s programs accounted for only 0.09% of the total federal budget and only around 10% of foreign assistance expenditure. If Congress does not act, this small but impactful funding is at risk of disappearing from the FY 2026 budget.
Plan of Action
For decades, the U.S. has been a leader in international development and humanitarian assistance. Helping the world’s youngest children reach their potential is one of the smartest, most effective investments the U.S. government can make. Congress needs to put in place funding, staffing, and accountability mechanisms that will not only support the successful implementation of the Global Child Thrive Act, but also meet U.S. foreign policy priorities.
Recommendation 1. Prioritize funding for early childhood development through the Department of State
In the FY26 budget currently under discussion, Congress has the responsibility to fund global child health, education, and nutrition programs under the authority of the State Department. These child-focused programs align with America’s diplomatic and economic interests and are vital to young children’s survival and well-being globally.
To promote early childhood development specifically, the Global Child Thrive Act should be reauthorized under the auspices of the State Department. While there is bipartisan support in the House Foreign Affairs Committee to extend authorization of the Global Child Thrive Act through 2027, the bill had not reached the House floor as of October 2025, and the Senate bill was delayed by a federal government shutdown.
Congress should pass legislation to appropriate $1.5 billion in FY26 funding for life-saving and life-changing programs for young children, including the following (an arithmetic check follows the list):
- The Vulnerable Children’s Account, which funds multi-sectoral, evidence-based programs that support the objectives of the Global Child Thrive Act and the Advancing Protection and Care for Children in Adversity Strategy ($50 million).
- The PEPFAR 10% Orphans and Vulnerable Children Set-Aside, which protects and promotes the holistic health and development of children affected by HIV/AIDS ($710 million).
- UNICEF core funding, given the agency’s track record in advancing early childhood development programs in development and humanitarian settings ($300 million).
- Commitments to government-philanthropy partnerships with pooled funds that prioritize the early years, including the Global Partnership for Education, Education Cannot Wait, the Early Learning Partnership, and the Global Financing Facility ($430 million).
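A quick arithmetic check using only the figures listed above shows that these four line items account for essentially the full $1.5 billion request:

```latex
\$50\,\text{M} + \$710\,\text{M} + \$300\,\text{M} + \$430\,\text{M} = \$1{,}490\,\text{M} \approx \$1.5\,\text{B}
```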
Funding should be written into legislation so that it is protected from future cuts.
Recommendation 2. Adequately staff the State Department to coordinate early childhood programs
The State Department needs to rebuild expertise on global child development that was lost when USAID collapsed. As a first step, current officials need to be briefed on relevant legislation, including the Global Child Thrive Act and the READ Act. In response to the reduced capacity, Congress should fund a talent pipeline to attract a cadre of professionals within the State Department in Washington, DC, and at U.S. Embassies who can focus on early-years issues across sectors and funding streams. Foreign nationals who have a deep understanding of local contexts should be considered for these roles.
In the context of scarce resources, coordination and collaboration are more important than ever. The critical role of the USG Special Advisor for Children in Adversity should be formally transferred to the State Department to provide technical leadership and implementation support for children’s issues. Within the reorganized State Department, the Special Advisor should sit in the office of the Under Secretary for Foreign Assistance, Humanitarian Affairs and Religious Freedom (F), where they can serve as a leading voice for children and foster inter-agency coordination across agencies such as the Departments of Agriculture and Labor and the Millennium Challenge Corporation.
Congress also should seek clarification on how the new Special Envoy for Best Future Generations will contribute specifically to early childhood development. The State Department appointed the Special Envoy in June 2025 as a liaison for initiatives impacting the well-being of children under age 18 in the U.S. and globally. In the past three months, the Special Envoy has met with U.S. government officials at the White House and State Department, representatives from 14 countries at the U.N., and non-governmental organizations to discuss coordinated action on children’s issues, such as quality education, nutritious school meals, and ending child labor and trafficking.
Recommendation 3. Increase accountability and transparency for funds allocated for young children
Increased oversight of funds can improve efficiency, prevent delays, and reduce the risk of funds expiring before they reach intended families. The required reporting on FY24 programs is overdue and should be submitted to Congress by the end of December 2025.
Going forward, Congress should require the State Department to report regularly and testify on how money is being spent on young children. Reporting should include evidence-based measures of Return on Investment (ROI) to help demonstrate the impact of early childhood programs. In addition, the Office of Foreign Assistance should issue a yearly report to Congress and to the public which tracks annual inter-agency progress toward implementing the Global Child Thrive Act using a set of indicators, including the approved pre-primary indicator and other relevant and feasible indicators across age groups, programs, and sectors.
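For illustration only, ROI reporting could follow a standard benefit-cost form, comparing estimated program benefits with program costs; this generic formulation is not prescribed by the Global Child Thrive Act or current guidance:

```latex
\mathrm{ROI} = \frac{\text{estimated program benefits} - \text{program costs}}{\text{program costs}}
```

Under this formulation, an ROI of 1.0 would indicate that each appropriated dollar generated an estimated two dollars in benefits for young children and their communities.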
Conclusion
Investing in young children’s growth and learning around the world strengthens economies, builds goodwill, and secures America’s position as a trusted global leader. To advance U.S. foreign policy priorities, Congress must increase the funding, staffing, and accountability of the State Department’s efforts to promote early childhood development, while also strengthening multi-agency coordination and accountability for achieving results. The Global Child Thrive Act provides the legislative mandate and a technical roadmap for the U.S. Government to follow.
Though it accounted for only about 1% of the federal budget, USAID contributed to political stability, economic growth, and goodwill with partner countries. A new Lancet article estimates USAID funding saved 30 million children’s lives between 2001 and 2021 and was associated with a 32% reduction in under-five deaths in low- and middle-income countries. In the past five years alone, funding supported the learning of 34 million children. USAID spending was closely scrutinized by the State Department, Congress, the Office of Management and Budget, and the Office of the Inspector General. Recent claims of waste, fraud, and abuse are inaccurate, exaggerated, or taken out of context.
The public strongly supports many aspects of foreign assistance that benefit children. A recent Pew Research Center study found that around 80% of Americans agreed that the U.S. should provide medicine and medical supplies, as well as food and clothing, to people in developing countries. In terms of political support, children’s programs are viewed favorably by lawmakers on both sides of the aisle. For example, the Global Child Thrive Act was introduced by Representatives Joaquin Castro (D-TX) and Brian Fitzpatrick (R-PA) and Senators Roy Blunt (R-MO) and Christopher Coons (D-DE) and passed Congress with bipartisan support.
AI Implementation is Essential Education Infrastructure
State education agencies (SEAs) are poised to deploy federal funding for artificial intelligence tools in K–12 schools. Yet, the nation risks repeating familiar implementation failures that have limited educational technology for more than a decade. The July 2025 Dear Colleague Letter from the U.S. Department of Education (ED) establishes a clear foundation for responsible artificial intelligence (AI) use, and the next step is ensuring these investments translate into measurable learning gains. The challenge is not defining innovation—it is implementing it effectively. To strengthen federal–state alignment, upcoming AI initiatives should include three practical measures: readiness assessments before fund distribution, outcomes-based contracting tied to student progress, and tiered implementation support reflecting district capacity. Embedding these standards within federal guidance—while allowing states bounded flexibility to adapt—will protect taxpayer investments, support educator success, and ensure AI tools deliver meaningful, scalable impact for all students.
Challenge and Opportunity
For more than a decade, education technology investments have failed to deliver meaningful results—not because of technological limitations, but because of poor implementation. Despite billions of dollars in federal and local spending on devices, software, and networks, student outcomes have shown only minimal improvement. In 2020 alone, K–12 districts spent over $35 billion on hardware, software, curriculum resources, and connectivity—a 25 percent increase from 2019, driven largely by pandemic-related remote learning needs. While these emergency investments were critical to maintaining access, they also set the stage for continued growth in educational technology spending in subsequent years.
Districts that invest in professional development, technical assistance, and thoughtful integration planning consistently see stronger results, while those that approach technology as a one-time purchase do not. As the University of Washington notes, “strategic implementation can often be the difference between programs that fail and programs that create sustainable change.” Yet despite billions spent on educational technology over the past decade, student outcomes have remained largely unchanged—a reflection of systems investing in tools without building the capacity to understand their value, integrate them effectively, and use them to enhance learning. The result is telling: an estimated 65 percent of education software licenses go unused, and as Sarah Johnson pointed out in an EdWeek article, “edtech products are used by 5% of students at the dosage required to get an impact.”
Evaluation practices compound the problem. Too often, federal agencies measure adoption rates instead of student learning, leaving educators confused and taxpayers with little evidence of impact. As the CEO of the EdTech Evidence Exchange put it, poorly implemented programs “waste teacher time and energy and rob students of learning opportunities.” By tracking usage without outcomes, we perpetuate cycles of ineffective adoption, where the same mistakes resurface with each new wave of innovation.
Implementation Capacity is Foundational
A clear solution entails making implementation capacity the foundation of federal AI education funding initiatives. Other countries show the power of this approach. Singapore, Estonia, and Finland all require systematic teacher preparation, infrastructure equity, and outcome tracking before deploying new technologies, recognizing, as a Swedish edtech implementation study found, that access is necessary but not sufficient to achieve sustained use. These nations treat implementation preparation as essential infrastructure, not an optional add-on, and as a result, they achieve far better outcomes than market-driven, fragmented adoption models.
The United States can do the same. With only half of states currently offering AI literacy guidance, federal leadership can set guardrails while leaving states free to tailor solutions locally. Implementation-first policies would allow federal agencies to automate much of program evaluation by linking implementation data with existing student outcome measures, reducing administrative burden and ensuring taxpayer investments translate into sustained learning improvements.
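As a minimal illustration of that linkage, the sketch below joins hypothetical district-level implementation indicators to an existing outcome measure; all identifiers, column names, and thresholds are illustrative assumptions rather than features of any federal data system.

```python
# Minimal sketch: link district-level implementation data to existing student
# outcome measures. All district IDs, column names, and values are hypothetical.
import pandas as pd

implementation = pd.DataFrame({
    "district_id": ["D-001", "D-002", "D-003"],
    "pd_hours_completed": [24, 6, 30],       # AI-focused professional development
    "licenses_active_pct": [82, 40, 91],     # share of purchased licenses in active use
})
outcomes = pd.DataFrame({
    "district_id": ["D-001", "D-002", "D-003"],
    "mean_score_growth": [3.1, -0.4, -0.2],  # existing student outcome measure
})

# Join the two sources on the shared district identifier.
merged = implementation.merge(outcomes, on="district_id", how="inner")

# Flag districts with high tool usage but flat or declining outcomes, so tiered
# implementation support can be targeted before the next funding cycle.
lagging = merged[(merged["licenses_active_pct"] > 75) & (merged["mean_score_growth"] <= 0)]
print(lagging[["district_id", "licenses_active_pct", "mean_score_growth"]])
```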
The benefits would be transformational:
- Educational opportunity. Strong implementation support can help close digital skill gaps and reduce achievement disparities. Rural districts could gain greater access to technical assistance networks, students with disabilities could benefit from AI tools designed with accessibility at their core, and all students could build the AI literacy necessary to participate in civic and economic life. Recent research suggests that strategic implementation of AI in education holds particular promise for underserved and geographically isolated communities.
- Workforce development. Educators could be equipped to use AI responsibly, expanding coherent career pathways that connect classroom expertise to emerging roles in technology coaching, implementation strategy, and AI education leadership. Students graduating from systematically implemented AI programs would enter the workforce ready for AI-driven jobs, reducing skills gaps and strengthening U.S. competitiveness against global rivals.
In short, implementation is not a secondary concern; it is the primary determinant of whether AI in education strengthens learning or repeats the costly failures of past ed-tech investments. Embedding implementation capacity reviews before large-scale rollout—focused on educator preparation, infrastructure adequacy, and support systems—would help districts identify strengths and gaps early. Paired with outcomes-based vendor contracts and tiered implementation support that reflects district capacity, this approach would protect taxpayer dollars while positioning the United States as a global leader in responsible AI integration.
Plan of Action
AI education funding must shift to being both tool-focused and outcome-focused, reducing repeated implementation failures and ensuring that states and districts can successfully integrate AI tools in ways that strengthen teaching and learning. Federal guidance has made progress in identifying priority use cases for AI in education. With stronger alignment to state and local implementation capacity, investments can mitigate cycles of underutilized tools and wasted resources.
A hybrid approach is needed: federal agencies set clear expectations and provide resources for implementation, while states adapt and execute strategies tailored to local contexts. This model allows for consistency and accountability at the national level, while respecting state leadership.
Recommendation 1. Establish AI Education Implementation Standards Through Federal–State Partnership
To safeguard public investments and accelerate effective adoption, the Department of Education, working in partnership with state education agencies, should establish clear implementation standards that ensure readiness, capacity, and measurable outcomes.
- Implementation readiness benchmarks. Federal AI education funds should be distributed with expectations that recipients demonstrate the enabling systems necessary for effective implementation—including educator preparation, technical infrastructure, professional learning networks, and data governance protocols. ED should provide model benchmarks while allowing states to tailor them to local contexts.
- Dedicated implementation support. Funding streams should ensure AI education investments include not only tool procurement but also consistent, evidence-based professional development, technical assistance, and integration planning. Because these elements are often vendor-driven and uneven across states, embedding them in policy guidance helps SEAs and local education agencies (LEAs) build sustainable capacity and protect against ineffective or commodified approaches—ensuring schools have the human and organizational capacity to use AI responsibly and effectively.
- Joint oversight and accountability. ED and SEAs should collaborate to monitor and publicly share progress on AI education implementation and student outcomes. Metrics could be tied to observable indicators, such as completion of AI-focused professional development, integration of AI tools into instruction, and adherence to ethical and data governance standards (an illustrative sketch of such indicators follows this list). Transparent reporting builds public trust, highlights effective practices, and supports continuous improvement, while recognizing that measures of quality will evolve with new research and local contexts.
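The sketch below illustrates one way such readiness and reporting indicators might be encoded for consistent review before funds are distributed; the fields and thresholds are assumptions for illustration, not benchmarks that ED or any state has adopted.

```python
# Minimal sketch of district readiness indicators for AI education funding.
# Field names and thresholds are illustrative, not prescribed by ED or any SEA.
from dataclasses import dataclass

@dataclass
class DistrictReadiness:
    district_id: str
    pct_educators_trained: float      # AI-focused professional development completed
    devices_per_student: float        # technical infrastructure
    in_learning_network: bool         # participation in a professional learning network
    has_data_governance_policy: bool  # ethical and data governance standards

    def meets_benchmarks(self) -> bool:
        # Illustrative thresholds; actual benchmarks would be set jointly by ED and states.
        return (
            self.pct_educators_trained >= 60.0
            and self.devices_per_student >= 1.0
            and self.in_learning_network
            and self.has_data_governance_policy
        )

# Example: trained educators and adequate devices, but no governance policy yet,
# so this district would be flagged for tiered implementation support.
district = DistrictReadiness("D-001", 72.0, 1.2, True, False)
print(district.meets_benchmarks())  # False
```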
Recommendation 2. Develop a National AI Education Implementation Infrastructure
The U.S. Department of Education, in coordination with state agencies, should encourage a national infrastructure that helps and empowers states to build capacity, share promising practices, and align with national economic priorities.
- Regional implementation hubs. ED should partner with states to create regional AI education implementation centers that provide technical assistance, professional development, and peer learning networks. States would have flexibility to shape programming to their context while benefiting from shared expertise and federal support.
- Research and evaluation. ED, in coordination with the National Science Foundation (NSF), should conduct systematic research on AI education implementation effectiveness and share annual findings with states to inform evidence-based decision-making.
- Workforce alignment. Federal and state education agencies should continue to coordinate AI education implementation with existing workforce development initiatives (Department of Labor) and economic development programs (Department of Commerce) to ensure AI skills align with long-term economic and innovation priorities.
Recommendation 3. Adopt Outcomes-Based Contracting Standards for AI Education Procurement
The U.S. Department of Education should establish outcomes-based contracting (OBC) as a preferred procurement model for federally supported AI education initiatives. This approach ties vendor payment directly to demonstrated student success, with at least 40% of contract value contingent on achieving agreed-upon outcomes, ensuring federal investments deliver measurable results rather than unused tools.
- Performance-based payment structures. ED should support contracts that include a base payment for implementation support and contingent payments earned only as students achieve defined outcomes. Payment should be based on individual student achievement rather than aggregate measures, ensuring every learner benefits while protecting districts from paying full price for ineffective tools (a simple payment sketch follows this list).
- Clear outcomes and mutual accountability. Federal guidance should encourage contracts that specify student populations served, measurable success metrics tied to achievement and growth, and minimum service requirements for both districts and vendors (including educator professional learning, implementation support, and data sharing protocols).
- Vendor transparency and reporting. AI education vendors participating in federally supported programs should provide real-time implementation data, document effectiveness across participating sites, and report outcomes disaggregated by student subgroups to identify and address equity gaps.
- Continuous improvement over termination. Rather than automatic contract cancellation when challenges arise, ED should establish systems that prioritize joint problem-solving, technical assistance, and data-driven adjustments before considering more severe measures.
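To make the payment structure concrete, here is a minimal sketch of an outcomes-based contract in which a base payment covers implementation support and the remaining 40% of contract value is earned per student as agreed-upon outcomes are met; the contract value, student counts, and outcome counts are hypothetical.

```python
# Minimal sketch of an outcomes-based contract payment calculation.
# All dollar amounts and student counts are hypothetical.
CONTRACT_VALUE = 500_000.00   # total contract value
CONTINGENT_SHARE = 0.40       # at least 40% tied to outcomes, per the recommendation
STUDENTS_SERVED = 2_000       # students covered by the contract

base_payment = CONTRACT_VALUE * (1 - CONTINGENT_SHARE)                       # implementation support
per_student_payment = (CONTRACT_VALUE * CONTINGENT_SHARE) / STUDENTS_SERVED  # contingent pool per learner

def vendor_payment(students_meeting_outcome: int) -> float:
    """Base payment plus a per-student amount for each learner who meets the
    agreed-upon outcome (individual achievement, not an aggregate average)."""
    return base_payment + per_student_payment * students_meeting_outcome

# If 1,200 of 2,000 students meet the outcome, the vendor earns the base payment
# plus 60% of the contingent pool: 300,000 + 100 * 1,200 = 420,000.
print(vendor_payment(1_200))
```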
Recommendation 4. Pilot Before Scaling
To ensure responsible, scalable, and effective integration of AI in education, ED and SEAs should prioritize pilot testing before statewide adoption while building enabling conditions for long-term success.
- Pilot-to-scale strategy. Federal and state agencies could jointly identify pilot districts representing diverse contexts (rural, urban, and suburban) to test AI implementation models before large-scale rollout. Lessons learned would inform future funding decisions, minimize risk, and increase effectiveness for states and districts.
- Enabling conditions for sustainability. States could build ongoing professional learning systems, technical support networks, and student data protections to ensure tools are used effectively over time.
- Continuous improvement loop. ED could coordinate with states to develop feedback systems that translate implementation data into actionable improvements for policy, procurement, and instruction, ensuring educators, leaders, and students all benefit.
Recommendation 5. Build a National AI Education Research & Development Network
To promote evidence-based practice, federal and state agencies should co-develop a coordinated research and development infrastructure that connects implementation data and policy learning to practice and supports global collaboration.
- Implementation research partnerships. Federal agencies (ED, NSF) should partner with states and research institutions to fund systematic studies on effective AI education implementation, with emphasis on scalability and outcomes across diverse student populations. Rather than creating a new standalone program, this would coordinate existing ED and NSF investments while expanding state-level participation.
- Testbed site networks. States should designate urban, suburban, and rural AI education implementation labs or “sandboxes”, modeled on responsible AI testbed infrastructure, where funding supports rigorous evaluation, cross-district peer learning, and local adaptation.
- Evidence-to-policy pipeline. Federal agencies should integrate findings from these research-practice partnerships into national AI education guidance, while states embed lessons learned into local technical assistance and professional development.
- National leadership and evidence sharing. Federal and state agencies should establish mechanisms to share evidence-based approaches and emerging insights, positioning the U.S. as a leader in responsible AI education implementation. This collaboration should leverage continuous, practice-informed research, called living evidence, which integrates real-world implementation data, including responsibly shared vendor-generated insights, to inform policy, guide best practices, and support scalable improvements.
Conclusion
The Department’s guidance on AI in education marks a pivotal step toward modernizing teaching and learning nationwide. To realize the promise of AI in education, we must shift from funding tools alone to funding both the acquisition of tools and the strategies that ensure their effective implementation. Too often, technologies are purchased only to sit on the shelf while educators lack the support to integrate them meaningfully. International evidence shows that countries investing in teacher preparation and infrastructure before technology deployment achieve better outcomes and sustain them.
Early research also suggests that investments in professional development, infrastructure, and systems integration substantially increase the long-term impact of educational technology. Prioritizing these supports reduces waste and ensures federal dollars deliver measurable learning gains rather than unused tools. The choice before us is clear: continue the costly cycle of underused technologies or build the nation’s first sustainable model for AI in education—one that makes every dollar count, empowers educators, and delivers transformational improvements in student outcomes.
Clear implementation expectations don’t slow innovation—they make it sustainable. When systems know what effective implementation looks like, they can scale faster, reduce trial-and-error costs, and focus resources on what works to ultimately improve student outcomes.
Nor does implementation support disadvantage smaller or under-resourced districts; quite the opposite, it is designed to build capacity where it’s needed most. Embedding training, planning, and technical assistance ensures every district, regardless of size or resources, can participate in innovation on an equal footing.
AI education begins with people, not products. Implementation guidelines should help educators build on their existing skills to incorporate AI tools into instruction, give them access to relevant professional learning, and ensure they receive leadership support, so that AI enhances teaching and learning.
Implementation quality is multi-dimensional and may look different depending on local context. Common indicators could include: educator readiness and training, technical infrastructure, use of professional learning networks, integration of AI tools into instruction, and adherence to data governance protocols. While these metrics provide guidance, they are not exhaustive, and ED and SEAs will iteratively refine measures as research and best practices evolve. Transparent reporting on these indicators will help identify effective approaches, support continuous improvement, and build public trust.
Nor is implementation support too costly when you look at the return. Billions are already spent on tools that go underused or abandoned within a year; investing in implementation is how we protect those investments and get measurable results for students.
The goal isn’t to add red tape—it’s to create alignment. States can tailor standards to local priorities while still ensuring transparency and accountability. Early adopters can model success, helping others learn and adapt.
Getting ‘What Works’ in Education into the Hands of Teachers and Students
For more than twenty years, the Department of Education’s Institute of Education Sciences (IES) has served as a guiding light for U.S. education research. Its work has pushed the field forward, supporting high-quality, rigorous research in a field known for its reliance on word-of-mouth over science. The studies it funded have answered critical questions about “what works” in key areas like improving reading achievement and increasing associate’s degree attainment. It has served as a centralized, objective repository of data and research to inform educator practice, school district procurement, and state legislation. Even with these achievements, IES has room to grow to ensure that cutting-edge research makes it into the hands of those who need it, when they need it.
As part of IES’s ongoing leadership, it has been moving to adopt an approach called “living evidence.” Use of this emerging approach would keep decision-makers up-to-date about the solutions that have been rigorously tested and hold potential to address core challenges facing American schools: How do we respond to astonishingly low scores on NAEP tests? Are there evidence-based approaches we can employ in schools that will help end the national “epidemic” of social isolation? How do we ensure that both our college graduates and those who do not wish to attend college can obtain high-paying jobs?
Harnessing Innovation
This week, FAS and its partners at the Future Evidence Foundation (FEF) released a new report: Harnessing Innovation: Options for Implementing Living Evidence at the Institute of Education Sciences. It describes in great depth how IES could build on existing processes to produce living reviews as part of its What Works Clearinghouse.
Over time, many have come to believe that the agency moves too slowly to be responsive to practitioner needs, that its approach to sharing research findings does not make clear enough for users whether an intervention is backed by strong evidence, and that it needs to better support innovation. Living evidence is not the silver bullet that can address all of these issues in full. And yet, its proposed adoption could be the foundation for a sea change, allowing IES to more responsively share when new best practices and innovative approaches are identified in the academic literature through the What Works Clearinghouse. However, recent seismic shifts have severely damaged IES’s capability to move forward with using this innovative model: the Trump Administration’s cancellation of nearly all active contracts, including those that made the What Works Clearinghouse’s work possible, and a reduction-in-force (RIF) that led to the firing of nearly all of IES’s staff.
Report Insights and Key Takeaways
The report is the product of a year-long partnership in which FAS and FEF were granted the opportunity to engage deeply with leadership and staff from IES. Our team learned about their day-to-day processes and potential roadblocks on the path to change. The final ‘options memo’ was carefully constructed to give the IES team a set of realistic approaches to addressing some of their greatest challenges: discerning the topics where knowing ‘what works’ could be most beneficial to the more than 100 million people that the U.S. education system touches, finding the highest-quality academic research among the thousands of studies produced each year on education, and doing this work within a limited set of resources (IES funding made up less than 1% of the Department of Education’s overall spending in 2024). These recommendations were informed by conversations with many of those who knew IES’s work best, from its contractors to the readers of the What Works Clearinghouse’s reports. They offer feasible ways for IES to craft more efficient processes for developing its resources.
From this process, our team had two key takeaways:
- While the What Works Clearinghouse faces challenges in achieving its goal to make rigorous research accessible to policymakers and practitioners, the staff and contractors that led its work were steadfast in their dedication to improving its resources.
- Living evidence is not just the way of the future in academic circles beyond the U.S., but is a model that is feasible to implement in large federal agencies.
However, canceling contracts and firing experienced, dedicated staff has kneecapped IES’s ability to make the changes necessary to begin creating and deploying living reviews. An opportunity may now be missed to better align IES’s work not just to what their constituents need, but also to how a global community is moving forward in thinking about how to better connect evidence to policy and practice. While the administration has signaled its intent to rebuild IES in the future, it will take time to enact a new vision, and to fix what will inevitably break in the absence of staff to support key resources including the Regional Education Laboratories, the Education Resources Information Center (ERIC), and the What Works Clearinghouse. In that time, peer government R&D agencies such as UK Research and Innovation and major philanthropic organizations such as the Wellcome Trust will step up to lead the way on developing infrastructure that supports the use of emerging technology to build living reviews, moving the rest of the world forward while U.S. government agencies remain in the past.
Living Evidence Global Community of Practice
Living evidence still has a path forward in the U.S., with opportunities to continue to grow alongside the expanding global movement. Innovation in living evidence outside of government holds promise for U.S. education stakeholders, led by work at the HEDCO Institute at the University of Oregon and the clearinghouse Blueprints for Healthy Youth Development. The recommendations in the report will offer value for such organizations as they work to shift toward living systematic reviews, setting the tone for best practice in evidence synthesis while IES is in transition. FAS and FEF will continue to support this work through our convening of a Living Evidence Community of Practice.
Further, it is our hope that the learnings shared in the report will be considered in re-imagining future iterations of IES. In its short 23 years of existence, IES has raised the rigor of evidence that influences education and made strides in both generating and summarizing evidence that has the potential to inform practice. Even if the administration moves forward in its stated aim of “returning education back to the states”, state and local leaders will still need IES’s resources to understand how best to disburse their budgets. Employing a living approach to evidence synthesis, disseminated at a national level, is a streamlined way to enable evidence-based decision-making nationwide. If the administration genuinely prioritizes government efficiency, the report’s recommendations warrant serious consideration.
Tending Tomorrow’s Soil: Investing in Learning Ecosystems
“Tending soil.”
That’s how Fred Rogers described Mister Rogers’ Neighborhood, his beloved television program that aired from 1968 to 2001. Grounded in principles gleaned from top learning scientists, the Neighborhood offered a model for how “learning ecosystems” can work in tandem to tend the soil of learning.
Today, a growing body of evidence suggests not only that Rogers’ model was effective, but that real-life learning ecosystems – networks that include classrooms, living rooms, libraries, museums, and more – may be the most promising approach for preparing learners for tomorrow. As such, cities and regions around the world are constructing thoughtfully designed ecosystems that leverage and connect their communities’ assets, responding to the aptitudes, needs, and dreams of the learners they serve.
Efforts to study and scale these ecosystems at local, state, and federal levels would position the nation’s students as globally competitive, future-ready learners.
The Challenge
For decades, America’s primary tool for “tending soil” has been its public schools, which are (and will continue to be) the country’s best hope for fulfilling its promise of opportunity. At the same time, the nation’s industrial-era soil has shifted. From the way our communities function to the way our economy works, dramatic social and technological upheavals have remade modern society. This incongruity – between the world as it is and the world that schools were designed for – has blunted the effectiveness of education reforms; heaped systemic, society-wide problems on individual teachers; and shortchanged the students who need the most support.
“Public education in the United States is at a crossroads,” notes a report published by the Alliance for Learning Innovation, Education Reimagined, and Transcend: “to ensure future generations’ success in a globally competitive economy, it must move beyond a one-size-fits-all model towards a new paradigm that prioritizes innovation that holds promise to meet the needs, interests, and aspirations of each and every learner.”
What’s needed is the more holistic paradigm epitomized by Mister Rogers’ Neighborhood: a collaborative ecosystem that sparks engaged, motivated learners by providing the tools, resources, and relationships that every young person deserves.
The Opportunity
With components both public and private, virtual and natural, “learning ecosystems” found in communities around the world reflect today’s connected, interdependent society. These ecosystems are not replacements for schools – rather, they embrace and support all that schools can be, while also tending to the vital links between the many places where kids and families learn: parks, libraries, museums, afterschool programs, businesses, and beyond. The best of these ecosystems function as real-life versions of Mister Rogers’ Neighborhood: places where learning happens everywhere, both in and out of school. Where every learner can turn to people and programs that help them become, as Rogers used to say, “the best of whoever you are.”
Nearly every community contains the components of effective learning ecosystems. The partnerships forged within them can – when properly tended – spark and spread high-impact innovations; support collaboration among formal and informal educators; provide opportunities for young people to solve real-world problems; and create pathways to success in a fast-changing modern economy. By studying and investing in the mechanisms that connect these ecosystems, policymakers can build “neighborhoods” of learning that prepare students for citizenship, work, and life.
Plan of Action
Learning ecosystems can be cultivated at every level. Whether local, state, or federal, interested policymakers should:
Establish a commission on learning ecosystems. Tasked with studying learning ecosystems in the U.S. and abroad, the commission would identify best practices and recommend policy that 1) strengthens an area’s existing learning ecosystems and/or 2) nurtures new connections. Launched at the federal, state, or local level and led by someone with a track record of getting things done, the commission should include representatives from various sectors, including early childhood educators, K-12 teachers and administrators, librarians, researchers, CEOs and business leaders, artists, makers, and leaders from philanthropic and community-based organizations. The commission will help identify existing activities, research, and funding for learning ecosystems and will foster coordination and collaboration to maximize the effectiveness of each ecosystem’s resources.
A 2024 report by Knowledge to Power Catalysts notes that these cross-sector commissions are increasingly common at various levels of government, from county councils to city halls. As policymakers establish interagency working groups, departments of children and youth, and networks of human services providers, “such offices at the county or municipal level often play a role in cross-sector collaboratives that engage the nonprofit, faith, philanthropic, and business communities as well.”
Pittsburgh’s Remake Learning ecosystem, for example, is steered by the Remake Learning Council, a blue-ribbon commission of Southwestern Pennsylvania leaders from education, government, business, and the civic sector committed to “working together to support teaching, mentoring, and design – across formal and informal educational settings – that spark creativity in kids, activating them to acquire knowledge and skills necessary for navigating lifelong learning, the workforce, and citizenship.”
Establish a competitive grant program to support pilot projects. These grants could seed new ecosystems and/or support innovation among proven ecosystems. (Several promising ecosystems are operating throughout the country already; however, many are excluded from funding opportunities by narrowly focused RFPs.) This grant program can be administered by the commission to catalyze and strengthen learning ecosystems at the federal, state, or local levels. Such a program could be modeled after:
- The National Science Foundation’s efforts to nurture “an effective and inclusive STEM education ecosystem that prepares PreK-12 students for STEM careers, fosters entrepreneurship, and provides all people, particularly those from under-served and underrepresented populations, with access to excellent STEM education throughout their lifetimes.”
- The Pennsylvania Department of Education’s PAsmart program, a $30 million initiative designed to enhance the state’s education and workforce development efforts. PAsmart’s “Advancing Grants” of up to $500,000 each support cross-sector ecosystems that include educators from different districts and agencies. The South Fayette Township School District near Pittsburgh received an Advancing Grant to expand computer science not only for its own learners, but also for those in seven neighboring districts across four counties. Representing a microcosm of Pennsylvania, educators from these districts work side-by-side to identify crucial skills and design new ways to teach them, enhancing their collective impact.
- Remake Learning’s Moonshot grants, which award funds that encourage people to take risks, try new things, and explore the limits of what’s possible. These grants have been leveraged to strengthen ecosystem connections in cities and regions around the world.
Host a summit on learning ecosystems. Leveraging the gravitas of a government and/or civic institution such as the White House, a governor’s mansion, or a city hall, bring members of the commission together with learning ecosystem leaders and practitioners, along with cross-sector community leaders. A summit will underscore promising practices, share lessons learned, and highlight monetary and in-kind commitments to support ecosystems. The summit could leverage for learning ecosystems the philanthropic commitments model developed and used by previous presidential administrations to secure private and philanthropic support. Visit remakelearning.org/forge to see an example of one summit’s schedule, activities, and grantmaking opportunities.
Establish an ongoing learning ecosystem grant program for scaling and implementing lessons learned. This grant program could be administered at the federal, state, or local level – by a city government, for example, or by partnerships like the Appalachian Regional Commission. As new learning ecosystems form and existing ones evolve, policymakers should continue to provide grants that support learning ecosystem partnerships between communities that allow innovations in one city or region to take root in another.
Invest in research, publications, convenings, outreach, and engagement efforts that highlight local ecosystems and make their work more visible, especially for families. The ongoing grant program can include funding for opportunities that elevate the benefits of learning ecosystems. Events such as Remake Learning Days – an annual festival billed as “the world’s largest open house for teaching and learning” and drawing an estimated 300,000 attendees worldwide – build demand for learning ecosystems among parents, caregivers, and community leaders, ensuring grassroots buy-in and lasting change.
This memo was developed in partnership with the Alliance for Learning Innovation, a coalition dedicated to advocating for building a better research and development infrastructure in education for the benefit of all students.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.
Within a learning ecosystem, students aren’t limited to classrooms, schools, or even their own districts – nor do they have to travel far to find opportunities that light them up. By blurring the lines between “in school” and “out of school,” ecosystems make learning more engaging, more relevant, and even more joyful. Pittsburgh’s Remake Learning ecosystem, for example, connects robotics professionals with classroom teachers to teach coding and STEM. Librarians partner with teaching artists to offer weeklong deep dives into topics attractive to young people. A school district launches a program – say, a drone academy for girls – and opens it up to learners from neighboring districts.
As ecosystems expand to include more members, the partnerships formed within them spark exciting, ever-evolving opportunities for learners.
Within an ecosystem, learning isn’t just for young people. An ecosystem’s out-of-school components – businesses, universities, makerspaces, and more – bring real-world problems directly to learners, leading to tangible change in communities and a more talented, competitive future workforce. In greater Washington, D.C., for example, teachers partner with cultural institutions to develop curricula based on students’ suggestions for improving the city. In Kansas City, high schoolers partner with entrepreneurs and health care professionals to develop solutions for everything from salmonella poisoning to ectopic pregnancy. And in Pittsburgh, public school students are studying cybersecurity, training for aviation careers, conducting cutting-edge cancer research, and more.
Learning ecosystems also support educators. In Pittsburgh, educators involved in Remake Learning note that “they feel celebrated and validated in their work,” writes researcher Erin Gatz. Moreover, the ecosystem’s “shared learning and supportive environment were shown to help educators define or reinforce their professional identity.”
Learning ecosystems can aid local economies, too. In eastern Kentucky, an ecosystem of school districts, universities, and economic development organizations empowers students to reimagine former coal land for entrepreneurial purposes. And in West Virginia, an ecosystem of student-run companies has helped the state recover from natural disasters.
Since 2007, Pittsburgh’s Remake Learning has emerged as the most talked-about learning ecosystem in the world. Studied by scholars, recognized by heads of state, and expanding to include more than 700 schools, libraries, museums, and other sites of learning, Remake Learning has – through two decades of stewardship – inspired more than 40 additional learning ecosystems. Meanwhile, the network’s Moonshot Grants are seeding new ecosystems across the nation and around the world.
Global demand for learning ecosystems is growing. A 2020 report released by HundrED, a Finland-based nonprofit, profiles 16 of the most promising examples operating in the United States. Likewise, the World Innovation Summit for Education explores nine learning ecosystems operating worldwide: “Across the globe, there is a growing consensus that education demands radical transformation if we want all citizens to become future-ready in the face of a more digitally enabled, uncertain, and fast-changing world,” the summit notes. “Education has the potential to be the greatest enabler of preparing everyone, young and old, for the future, yet supporting learning too often remains an issue for schools alone.”
Learning ecosystems support collaboration and community among public schools, connecting classrooms, schools, and educators across diverse districts. Within Remake Learning, for example, a cohort of 42 school districts works together – and in partnership with afterschool programs, health care providers, universities, and others – to make Western Pennsylvania a model for the future of learning.
The cohort’s collaborative approach has led to a dazzling array of awards and opportunities for students: A traditional classroom becomes a futuristic flight simulator. A school district opens its doors to therapy dogs and farm animals. Students in dual-credit classes earn college degrees before they’ve even finished high school. Thanks in part to the ecosystem’s efforts, Western Pennsylvania is now recognized as home to the largest cluster of nationally celebrated school districts in the country.
As demand for learning ecosystems continues to gather momentum, several organizations have released playbooks and white papers designed to guide policymakers, practitioners, and other interested parties. Helpful resources include:
- Too Essential to Fail: Why Our Big Bet on Public Education Needs a Bold National Response (Knowledge to Power Catalysts, 2024)
- Ecosystems for the Future of Learning (Education Reimagined and The History Co:Lab, 2023)
- Playbook – Designing Learning Ecosystems (World Innovation Summit for Education, 2022)
In addition, Remake Learning has released three publications that draw on more than twenty years of “tending soil.” The publications share methods and mindsets for navigating some of the most critical questions that face ecosystems’ stewards.
Alaska Statewide Mentor Project is Reaching Rural Teachers
Abigail Swisher, Rural Impact Fellow at FAS, served in the Office of Elementary and Secondary Education, with a focus on STEM education. This post was originally published at HomeRoom, the official blog of the U.S. Department of Education.
Spanning 37,000 square miles across Alaska, the Northwest Arctic Borough School District has struggled to hire and retain enough new teachers. The eleven villages within the district – many of them above the Arctic Circle – are sparsely populated and remote. The winters are long, and without easy connection to roads, teachers new to the area often feel the isolation of remote village life.
Early-career and out-of-state teachers tend to be most heavily concentrated in Alaska’s rural schools, where they face a steep curve in adjusting to a new way of life while learning the ropes of teaching. As Northwest Arctic Borough Superintendent Terri Walker explains, “Our new teachers really have to learn everything: a new culture, sometimes a new language, new teaching skills, a new curriculum, customs and traditions of our kids, and the culture of our schools.”
But Northwest Arctic has found one approach to help its new teachers thrive in the classroom: a mentoring program that pairs new teachers with experienced educators from across Alaska.
The Alaska Statewide Mentor Project (ASMP) connects new teachers, who are often isolated by physical distance, with experienced mentor teachers who help them learn skills suited to their unique cultural context. Mentors and mentees connect virtually each week and in person several times per year, which usually requires long journeys involving travel by bush plane, boat, dog sled, and/or snowmobile.
Mentors help new teachers develop culturally responsive practice, building on Alaska’s statewide standards for culturally responsive teaching. Roughly seventy percent of new teachers in Alaska’s rural and isolated schools come from out of state, so the program focuses on helping teachers learn their students’ cultural context and work to integrate into their community.
Cultural knowledge is crucial for new teachers in Northwest Arctic Borough, whose student population is ninety percent Inupiaq. Superintendent Walker says that the district’s work is deeply centered in preservation of the unique heritage and values of Inupiaq culture; their motto is “Atautchikun Iñuuniałiptigun (Through Our Way of Life Together as One).”
In the 2023-24 school year, ASMP served roughly 140 new teachers across the state. Many schools share the cost of participation for their new teachers with ASMP; in previous years, Northwest Arctic Borough has used federal dollars through the Rural Education Achievement Program (REAP) to fund teachers’ participation. “It’s a very popular program with our new teachers, and one we try to continue even as our district is operating at a ten-million-dollar deficit,” said Superintendent Walker. “We continue to work to support the program because we believe in it.”
And the program is getting results: rigorous evaluation (funded by a grant from ED’s Education Innovation and Research (EIR) program) shows that new teachers who participate in the program make larger student learning gains in reading and math, and stay in the classroom longer than new teachers without a mentor.
The Alaska Statewide Mentor Project’s results are heartening against a larger backdrop of challenges in attracting and retaining new teachers in rural and geographically isolated schools across the United States and its territories. With an additional expansion grant from the EIR program, the mentoring program is broadening its reach to teachers in Montana and expanding within Alaska to serve all teachers who are new to the state, regardless of their years of experience.
Ensuring the Next Generation of STEM Talent through K–12 Research Programming
Labor shortages persist in the United States in a variety of STEM (science, technology, engineering, and mathematics) fields. To address these shortages, the next administration should establish a national, federally funded initiative involving the public and private sectors to develop a more robust and diverse pipeline of STEM talent. The Next Generation of STEM Talent Through K–12 Research Programming Initiative will remove significant barriers to participation in STEM careers through enhanced K–12 STEM programs such as science fairs and robotics competitions, as well as through strengthened federal support for teacher training to actively engage K–12 students in STEM research.
Challenge and Opportunity
Need for a Stronger STEM Pipeline in the United States
The 2024 Federal Strategic Plan for Advancing STEM Education and Cultivating STEM Talent from the National Science and Technology Council (NSTC) notes that the United States must “inspire, educate, train, and innovate in STEM fields and STEM careers, so that through unparalleled access and opportunity, the nation can leverage the full potential of its STEM talent and ensure the country’s national security, economic prosperity, and global competitiveness.” Indeed, a vigorous domestic STEM workforce that innovates quickly to confront national challenges is a central driver for economic growth. Yet while the number of degrees awarded in STEM fields has increased since 2000 in the United States, labor shortages persist in certain fields requiring STEM degrees. These fields include computer science, data science, electrical engineering, and software development.
Fostering STEM talent across the country “is critical both to enable all individuals to achieve their own aspirations in STEM fields and careers and to ready the nation to pursue new opportunities.” Yet the rest of the world is outpacing the United States when it comes to upper-level STEM education. The United States awarded nearly 800,000 first university degrees (i.e., associate’s and bachelor’s degrees) in science and engineering (S&E) in 2016. However, the top six European Union (EU) countries (France, Germany, Italy, Poland, Spain, and the U.K., then part of the EU) produced more than 700,000 equivalent degrees—and China 1.7 million (in 2015)—around the same timeframe. In 2020, the United States ranked third in the number of first university degrees awarded in science and engineering (900,000), lagging behind India (2.5 million) and China (2 million).
The data are more complex but equally worrisome at the doctorate level. As of 2019, the United States no longer awards the largest number of science and engineering (S&E) doctoral degrees of any country. It was surpassed by China, with the United States awarding 42,000 and China awarding 43,000 that year. Comparisons of doctoral-degree production in the United States with doctoral-degree production in other nations need to account for the fact that a substantial number of U.S. S&E doctorate recipients are students on temporary visas. However, many of these doctorate recipients stay in the United States for jobs after obtaining their degrees. Moreover, the United States also lags peer nations when it comes to the percentage of S&E doctorates awarded out of all doctorates awarded. This figure is 44% for the United States, behind China (nearly 60%), Sweden (55%), Taiwan (53%), India (50%), and the U.K. (48%).
We as a nation must prepare by strengthening the STEM pipeline and closing the gap between demand for and supply of STEM talent. This effort must also focus on creating a diverse and inclusive STEM talent pool. Only by drawing on the talents of all its citizens can the United States effectively maintain and grow the national innovation base that supports key economic sectors. This broader participation in STEM “fosters closer alignment between societal needs and research, enhances public understanding and trust in science, facilitates uptake of research results throughout society, and supports evidence-based policymaking.”
If the United States is to keep pace and ensure continued innovation and prosperity, it must up its game on STEM education and training. Because of the time and training required to become a scientist or engineer, this effort must begin without delay. The COVID-19 pandemic emphasized the need for a robust STEM workforce. Scientists raced to discover more about the virus itself and its impact, as well as to develop vaccines and treatments safely and in record time. Engineers designed new equipment and ways to manufacture needed personal protective equipment (PPE) and ventilators. Computer scientists, statisticians, epidemiologists, and big-data scientists collaborated to make sense of pandemic data and model outcomes to inform public-health policies. Similar crises will inevitably arise in the future.
Engaging Learning Experiences with Well-Trained Educators Are Even More Important Because of Pandemic Learning Losses
The coronavirus pandemic led to a significant disruption in K-12 education. Even with students back in classrooms, the negative impact of this disruption is clear and will have myriad effects on the STEM talent pipeline into the future.
Chronic absenteeism nationwide (defined as students missing at least 10% of a school year) surged from 15% in 2018 to 28% in 2022. Attendance is instrumental to student success: as absenteeism increased, test scores declined.
This decline in standardized test scores is seen across the globe, where middle and high school students are still struggling academically in the years since the start of the pandemic. The Program for International Student Assessment, taken by 15-year-olds, found record drops in scores between 2018 and 2022: math scores fell by 15 points and reading scores by 10 points. Weaker math skills reduce the number of students likely to pursue advanced STEM study, narrowing the pool of future scientists and engineers.
Students experienced years of learning loss, along with disruption to their social and emotional development. Compared with peers in other nations, U.S. children lack the high-level reading, math, and digital problem-solving skills needed for the fastest-growing jobs in a highly competitive global economy. The most vulnerable students are also the most negatively impacted: gaps between high-poverty and higher-income school districts that were already present in 2019 widened during the pandemic and have not closed.
Launching the Next Generation of STEM Talent Through K–12 Research Programming Initiative
The next administration should launch the Next Generation of STEM Talent Through K–12 Research Programming Initiative, coordinated by the White House Office of Science and Technology Policy (OSTP) through a working group of federal agency representatives, to strengthen the STEM pipeline in the United States. The initiative would provide an additional $25 million per year for 10 years to select agencies to support K–12 research programs (such as science fairs and robotics competitions) that inspire critical thinking and encourage young people to pursue STEM careers. The new funds would also be used to train educators and community-based scientists to become K–12 research mentors, expand research programs at the local and national levels, and build an interagency tracking mechanism to coordinate and evaluate the success of these programs. These activities directly support the five interdependent pillars outlined in the Committee on STEM Education (CoSTEM) 2024 Federal Strategic Plan for Advancing STEM Education and Cultivating STEM Talent:
- STEM Engagement: Foster youth, community, and public engagement that supports inspiration and belonging, connects research and practice, and builds STEM literacy and lifelong learning.
- STEM Teaching and Learning: Improve the opportunities and outcomes for learners and educators in and across all STEM disciplines.
- STEM Workforce: Support the training and recruitment of the nation’s federal and national STEM workforce while cultivating global talent mobility and opportunity.
- STEM Research and Innovation Capacity: Drive cutting-edge STEM education research and innovation, build and advance STEM research capacity, and cultivate innovation and entrepreneurial talent development.
- STEM Environments: Remove barriers to participation and retention in STEM learning, working, and research environments.
Since almost 16% of the 2.1 million federal employees in the United States occupy a STEM position, this initiative would directly benefit the Federal Government—and, by extension, U.S. civil society. Students and educators involved with this initiative would increase their awareness of Federal Government STEM occupations and develop a mental contract with participating U.S. agencies that will impact future career choices. This initiative should also involve the private sector, as many companies and their trade associations are also in need of STEM talent and lead programs that the initiative could leverage. In 2021, out of 146.4 million people ages 18 to 74 working in the United States, 34.9 million (24%) were in STEM occupations. Only the federal government has the resources and infrastructure to undertake and coordinate this public-private partnership.
Inclusivity is an indispensable aspect and opportunity of this new initiative. To foster development of STEM skills, the 2023 Progress Report on the Implementation of the Federal Science, Technology, Engineering, and Mathematics (STEM) Education Strategic Plan emphasized that “the nation must engage in a collaborative effort to ensure that everyone has access to high quality STEM education throughout their lifetimes.” Access to STEM education and representation in STEM fields is unequally distributed in the United States. Women, differently abled persons, and three ethnic or racial groups—Blacks or African Americans, Hispanics or Latinos, and American Indians or Alaska Natives—are significantly underrepresented in science and engineering education and employment. In 2021, a greater share of men (29%) than women (18%) worked in STEM occupations, even though men and women represented similar proportions of the total workforce (52% men and 48% women). Similarly, Blacks/African Americans and Hispanics/Latinos make up about 28% of the overall population but only 13% of the STEM workforce. Research suggests there are many individuals—especially women, minorities, and children from low-income families—who would have developed highly impactful inventions had they been exposed to innovation in childhood. The Next Generation of STEM Talent Through K–12 Research Programming Initiative is designed to help find those “lost Einsteins”.
The initiative will also place an emphasis on rural students, who often lack adequate mentors and educational support systems. Studies have shown that underserved minority and rural communities often do not have access to the same educational opportunities as more affluent white communities, and this limits the careers they pursue. The pandemic exposed the enormous gaps between the country’s poorest and wealthiest schools in access to basic technology and live remote instruction, as well as in the share of students who, teachers reported, were not logging in or making contact.
The Federal Strategic Plan for Advancing STEM Education and Cultivating STEM Talent identifies STEM Research and Innovation Capacity as one of its pillars. Informal learning, especially participation in research programs such as science fairs or robotics competitions, is one way to inspire critical thinking in young people and foster long-term interest in STEM. Research funded by the National Science Foundation shows that participating in a science research project increases student interest in STEM careers. These competitions give students opportunities to create solutions to real-life problems, encouraging innovation, a critical component of economic growth and entrepreneurial talent development.
There is flexibility in how opportunities are delivered to students. When schools were shut down in 2020-2021, the Society for Science converted its STEM Research Grants program, an opportunity for teachers to receive up to $5,000 for classroom resources and/or transportation to research sites, to include STEM Research kits full of resources that students were able to bring home to complete STEM research outside of school. The Society for Science has continued to provide home and school options for the resources teachers receive from this program. Relatedly, the Society for Science launched a new Research at Home website to support this work.
Whether educators deliver materials for use at home or at school, success in this area requires training teachers to be effective research mentors. In line with CoSTEM’s Federal Objective for Training STEM Educators, an excellent prototype for such training is the Research Teachers Conferences run by the Society for Science. These conferences convene middle school and high school STEM research teachers annually to share best practices, troubleshoot challenges, and establish a network of mutual support. Nearly 2,000 teachers each year request the opportunity to attend, but funding for 2024 was available for only 275, highlighting the pent-up demand for STEM research training. Professional scientists likewise need training to become effective research mentors for K–12 students. The Next Generation of STEM Talent Through K–12 Research Programming Initiative is designed to train a collaborative community of K–12 research mentors working throughout the United States.
There are already many hands-on programs designed to increase the STEM talent pool by providing research-based and problem-solving learning opportunities to K–12 students, especially underrepresented minorities, girls, and students from rural communities. These programs range in size from small to large and in scope from local to federal. They are run by institutions such as nonprofit organizations, colleges and universities, scientific societies, and even industry trade associations. For example, the American Chemical Society has provided economically disadvantaged high school students with paid summer research internships for more than 50 years; participating students work under the guidance of professional scientists who have been trained as research mentors. The Society for Science’s Advocate Program provides mentors to support underserved students in submitting research projects to science competitions. Funding for these types of K–12 STEM programs comes from myriad sources, including philanthropic foundations and individuals, companies, and local, state, and federal governments. But there is currently no widespread coordination among these programs or sharing of best practices, and there is little rigorous evaluation to determine program success. The Next Generation of STEM Talent Through K–12 Research Programming Initiative will provide leadership to align complementary efforts and additional funding to support assessment and scale-up of practices proven effective.
Only the federal government has the ability to accomplish the objectives outlined above. But as the 2024 STEM Plan states, “the federal government alone cannot produce the STEM talent needed for the entire country. Multi-agency and multi-sector partnerships and ecosystem development, including with international counterparts, are necessary to achieve a vision for STEM in America.”
Plan of Action
The Next Generation of STEM Talent Through K–12 Research Programming Initiative should have four major components:
Component 1. White House leadership, coordination, tracking, and evaluation
The next president should sign an Executive Order (EO) launching a national Next Generation of STEM Talent Through K–12 Research Programming Initiative led by the White House Office of Science and Technology Policy (OSTP). The initiative would oversee and strengthen federal support for teacher training and program development designed to actively engage students in STEM research and problem-solving.
The EO should also establish an OSTP-led working group modeled on the Committee on STEM Education (CoSTEM), the NSTC group that wrote Charting a Course for Success: America’s Strategy for STEM Education. CoSTEM, with its mandate to review STEM education programs, investments, and activities, and the respective assessments of each, in federal agencies to ensure that they are effective, serves as a model for this initiative. CoSTEM coordinates interagency working groups focused on different aspects of STEM, particularly the Interagency Working Group to Engage Students where Disciplines Converge (IWGC) and the Interagency Working Group to Develop and Enrich Strategic Partnerships (SP-IWG). The new working group, by contrast, would coordinate relevant activities across federal agencies and their subunits and gather the leading scientists, administrators, and educators doing this work outside of federal agencies, leveraging the organizational power of the federal government to provide the resources and infrastructure for this public-private partnership.
While some federal agencies already have directly relevant programs in place, other agencies could help identify offices and programs essential to the initiative’s success. The working group should issue an open call for nonprofit organizations with expertise in research-based STEM learning and teacher/mentor training to participate as advisors to the working group. The working group could also include representatives from existing programs that help expand research-based and problem-solving STEM experiences at the K–12 level. The EO should task the working group with developing a strategic national action plan that includes metrics to monitor the initiative’s success, as well as with creating a centralized database that can track, monitor, and evaluate programs funded by the initiative. The working group should periodically report to the Executive Office of the President on the initiative’s progress.
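To make the centralized tracking-and-evaluation database concrete, the sketch below shows one minimal way a program record and an overdue-evaluation check might be structured. It is illustrative only: the field names, the evaluation interval, and the record layout are assumptions for discussion, not specifications from the initiative.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProgramRecord:
    """Hypothetical entry in the initiative's central tracking database (illustrative fields)."""
    program_name: str            # e.g., a regional science fair or robotics competition
    sponsoring_agency: str       # federal unit providing or coordinating funds
    partner_orgs: list[str]      # nonprofits, universities, or companies involved
    annual_funding_usd: float
    students_served: int
    mentors_trained: int
    last_evaluation: date | None = None
    findings: list[str] = field(default_factory=list)

def overdue_for_evaluation(records: list[ProgramRecord], as_of: date, interval_years: int = 3) -> list[ProgramRecord]:
    """Flag programs that have never been evaluated, or not within the assumed interval."""
    cutoff_days = interval_years * 365
    return [
        r for r in records
        if r.last_evaluation is None or (as_of - r.last_evaluation).days > cutoff_days
    ]
```

In practice the working group, not this memo, would define the actual fields, metrics, and evaluation cadence; the point of the sketch is simply that tracking and evaluation can be mechanized once programs report to a common schema.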
Overall goals of the initiative would be to:
- Ensure an abundance of qualified applicants from a variety of backgrounds—including variety in gender, race, or socioeconomic status—for all STEM jobs in the United States.
- Train teachers to provide students with research-based STEM education opportunities throughout their K–12 education experiences.
- Create a comprehensive database to track programs (and their participants) aligned with and/or funded by the initiative.
- Rigorously and fairly evaluate programs aligned with and/or funded by the initiative, quickly communicating evaluation findings in ways that help programs adjust to best serve students and educators.
Quantitative targets to assess progress towards these goals include:
- Improving the extent to which the demographics of applicants to STEM jobs in the United States reflect the demographics of the country as a whole (a simple illustrative metric for this target follows this list).
- Expanding the availability of project-based STEM learning at publicly funded K–12 schools, as well as student access to opportunities (e.g., science fairs) where they can share the results of their work.
- Growing the pool of qualified STEM research educators to 100,000 over the next 10 years, so that all schools have access to the needed number of trained educators.
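One way to read the first target is as a representation ratio: each group’s share of STEM applicants divided by its share of the overall population, with values near 1.0 indicating that applicants roughly mirror the nation. The sketch below is a minimal illustration; the group labels and figures are placeholders, not data from this memo.

```python
def representation_ratio(applicant_share: dict[str, float], population_share: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's share of STEM applicants to its share of the population.
    Values near 1.0 mean the applicant pool roughly mirrors the population."""
    return {
        group: applicant_share[group] / population_share[group]
        for group in population_share
        if group in applicant_share
    }

# Placeholder figures for illustration only (not official statistics):
applicants = {"women": 0.30, "rural": 0.10}
population = {"women": 0.48, "rural": 0.20}
print(representation_ratio(applicants, population))  # {'women': 0.625, 'rural': 0.5}
```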
Component 2. Federal budget commitments
A few agencies—such as the National Aeronautics and Space Administration (NASA), the National Security Agency (NSA), and the Department of Defense (DoD)—currently directly support aspects of this initiative. Yet at least 20 federal agencies (full list found in FAQ) and their subunits have a clear stake in developing the STEM workforce and hiring STEM graduates.
Each of these federal units will need dedicated funding to support the initiative, including by:
- Partnering with established nonprofit organizations, community colleges and universities to offer training and grants to support teachers and community scientists in becoming effective K–12 research and project mentors.
- Providing monetary awards and increased recognition opportunities for students who participate in STEM competitions.
- Executing an annual gathering on developing STEM talent through K–12 research programming, where federal agencies and non-federal organizations make commitments and, starting in year two, report progress toward meeting those commitments and goals.
- Building an agency-wide mechanism to track outcomes of, and build alumni networks for, students who participate in initiative programs, and coordinating communication and additional opportunities for these alumni so they remain connected and can lend their leadership to the initiative.
- Maintaining a central database to track metrics of initiative programs.
- Recruiting exceptional STEM students into the professional federal STEM pipeline.
We estimate that an average allocation of $25 million per year for 10 years per relevant federal unit would be sufficient to get the initiative off the ground. These funds alone are not enough to develop the STEM workforce to the level needed in the United States. However, consistent federal funding for K–12 research programming (and associated teacher training) would provide a solid foundation for addressing the shortfalls outlined at the beginning of this memo. To maximize the initiative’s impact, additional funding should be allocated specifically for coordination and evaluation. Evaluations should be carried out every few years, and findings used to inform funding priorities and program structure as needed. Emphasis should be placed on allocating funds to expand access to high-quality STEM experiences for underserved and underrepresented students.
Component 3. Meaningful agency participation
The working group will identify existing federal programs that could be expanded to achieve the initiative’s goal. The working group will also identify agencies that have relevant missions but currently lack relevant programs.
Component 4. Partnership with non-federal organizations to provide programmatic content and complementary actions
The working group should partner with third-party organizations that already offer programs and resources (financial and in-kind) relevant to the initiative. These include but are not limited to:
- School-based programs.
- STEM nonprofit organizations that deliver curricula for teacher training.
- Scientific societies and trade associations.
- Post-secondary institutions such as community colleges, colleges, and universities.
- Private-sector companies.
- State and local governments.
- Philanthropic foundations and individuals.
The working group itself should include members of groups underrepresented in STEM to ensure that a wide variety of voices shape the leadership of this initiative.
The next administration can use the power of the federal government to help such third-party organizations scale up and strengthen programs that have already proven effective, resulting in more teachers and scientists trained and more K–12 students able to participate in science and engineering research projects.
Precedents
The initiative outlined in this memo can and should build on multiple outstanding federal precedents. One example is the DoD’s Defense STEM Education Consortium (DSEC). The DSEC is a collaborative partnership among academia, industry, nonprofit organizations, and government that aims to broaden STEM literacy and develop a diverse and agile workforce with the technical excellence to defend our nation. Many smaller federal programs provide teacher training in various STEM fields. The next administration should leverage and potentially refocus such existing programs to emphasize critical-thinking skills and research-based programs at the K–12 level.
Conclusion
The next administration should seize the opportunity to reinvigorate the STEM talent pool in the United States by creating the Next Generation of STEM Talent Through K–12 Research Programming Initiative. The initiative will motivate participation in STEM careers by making participation in hands-on STEM research and problem-solving opportunities a standard component of K–12 education. Failure to replenish and grow our domestic STEM talent pool will lead to a decline in national innovation and economic progress and an inability to meet the moment in future times of challenge, such as a next pandemic. Only the federal government can address this need at the scale and pace needed.
Inclusivity is an indispensable aspect of this initiative. Building a robust STEM workforce in the United States requires us as a nation to draw on the talent of all Americans. The Next Generation of STEM Talent Through K–12 Research Programming Initiative will rediscover our country’s “lost Einsteins”: the underrepresented minorities, women, and underserved students from rural communities who could make transformative contributions to STEM if only they were provided the opportunity to do so.
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for, whoever takes office in 2025 and beyond.
PLEASE NOTE (February 2025): Since publication several government websites have been taken offline. We apologize for any broken links to once accessible public data.
The current and previous administrations have taken multiple actions that serve as a foundation for achieving the goals of Ensuring the Next Generation of STEM Talent Through K–12 Research Programming. “Agencies across the federal government are united in their commitment to developing STEM talent so that all individuals and communities can grow, aspire, and thrive, allowing the United States to reach its full potential.” Groups within the federal government are already doing related work, notably the Committee on STEM Education (CoSTEM), with its mandate to review science, technology, engineering, and mathematics (STEM) education programs, investments, and activities, and the respective assessments of each, in federal agencies to ensure that they are effective. This initiative is unique in that its main leadership would be a coalition of leaders and organizations outside of government, with a federal agency such as OSTP coordinating, rather than CoSTEM’s interagency working groups (IWGs). CoSTEM nonetheless serves as a model, with the goal of gathering the best of those doing this work outside of federal agencies and leveraging the organizational power of the government to provide the resources and infrastructure to coordinate this public-private partnership.
An outstanding example of a federal initiative that works is DoD STEM’s Defense STEM Education Consortium (DSEC). Aligned to the Federal STEM Education Strategic Plan, DSEC is a collaborative partnership among academia, industry, not-for-profit organizations, and government that aims to broaden STEM literacy and develop a diverse and agile workforce with the technical excellence to defend our nation. By addressing and prioritizing critical STEM challenges, DoD is investing in evidence-based approaches to inspire and develop the nation’s science and technology workforce.
This multi-year effort includes elements focused on STEM enrichment programs for students and educators, STEM workforce engagement, program evaluation, and public outreach. These efforts will allow DoD to improve access for students to pursue STEM careers and consider Defense laboratories as a place of employment. Through strategic investment in STEM education and outreach activities, the effort will provide students with more exposure to educational and career opportunities, as well as DoD research. The program includes scholarships, internships/apprenticeships, teacher training, and conferences.
One example that could be adapted for K–12 is the set of existing programs funded by the National Science Foundation that provide summer research experiences, nationally and internationally, to college students. Those programs involve partnerships with universities and not-for-profit scientific societies.
In 2022, the Department of Education launched its YOU Belong in STEM initiative to strengthen STEM education nationwide by implementing and scaling high-quality PreK-through-university STEM education for all, which aligns well with the goals of this proposal. Similar programs exist in many other federal agencies, but they are neither coordinated nor specifically directed toward building the STEM pipeline.
Although OSTP is the obvious coordinating group, in the event it could not undertake a project of this size, the National Science Foundation in concert with the Department of Education, given its YOU Belong in STEM initiative, would be an appropriate coordinator.
A few agencies—such as the National Aeronautics and Space Administration (NASA), the National Security Agency (NSA), and the Department of Defense (DoD)—currently support aspects of this initiative directly. Yet at least 20 federal agencies and their subunits have a clear stake in developing the STEM workforce and hiring STEM graduates.
While there are dozens if not hundreds of organizations doing similar types of programs, they are underfunded, uncoordinated, and under-evaluated. They are not uniformly distributed throughout the U.S., and their goals are also diffuse. Only the federal government is positioned to create the umbrella to coordinate such programs, track them, and evaluate them.
The OSTP working group would need to prioritize among federal agencies and choose those with the most at stake in this proposal—agencies that specifically need a STEM workforce to carry out their missions. Narrowed in this way, the number of participating units might drop by 50%, from 20 to 10. The working group could also focus on agencies that already have robust programs in this space and build on those; for example, more money might be directed to the DSEC program.
In addition, funding should be continued and increased for the National Science Foundation to rigorously evaluate these programs and determine whether they are succeeding. Continued funding would be dependent on the results of these evaluations.
Establishing a Cyber Workforce Action Plan
The next presidential administration should establish a comprehensive Cyber Workforce Action Plan to address the critical shortage of cybersecurity professionals and bolster national security. This plan encompasses innovative educational approaches, including micro-credentials, stackable certifications, digital badges, and more, to create flexible and accessible pathways for individuals at all career stages to acquire and demonstrate cybersecurity competencies.
The initiative will be led by the White House Office of the National Cyber Director (ONCD) in collaboration with key agencies such as the Department of Education (DoE), Department of Homeland Security (DHS), National Institute of Standards and Technology (NIST), and National Security Agency (NSA). It will prioritize enhancing and expanding existing initiatives—such as the CyberCorps: Scholarship for Service program that recruits and places talent in federal agencies—while also spearheading new engagements with the private sector to address its critical infrastructure vulnerabilities. To ensure alignment with industry needs, the Action Plan will foster strong partnerships between government, educational institutions, and the private sector, particularly focusing on real-world learning opportunities.
This Action Plan also emphasizes the importance of diversity and inclusion by actively recruiting individuals from underrepresented groups, including women, people of color, veterans, and neurodivergent individuals, into the cybersecurity workforce. In addition, the plan will promote international cooperation, with programs to facilitate cybersecurity workforce development globally. Together, these efforts aim to close the cybersecurity skills gap, enhance national defense against evolving cyber threats, and position the United States as a global leader in cybersecurity education and workforce development.
Challenge and Opportunity
The United States and its allies face a critical shortage of cybersecurity professionals, in both the public and private sectors. This shortage poses significant risks to our national security and economic competitiveness in an increasingly digital world.
In the federal government, the cybersecurity workforce is aging rapidly, with only about 3% of information technology (IT) specialists under 30 years old. Meanwhile, nearly 15% of the federal cyber workforce is eligible for retirement. This demographic imbalance threatens the government’s ability to defend against sophisticated and evolving cyber threats.
The private sector faces similar challenges. According to recent estimates, there are nearly half a million unfilled cybersecurity positions in the United States. This gap is expected to grow as cyber threats become more complex and pervasive across all industries. Small and medium-sized businesses are particularly vulnerable, often lacking the resources to compete for scarce cyber talent.
The cybersecurity talent shortage extends beyond our borders, affecting our allies as well. As cyber threats from adversarial nation states become increasingly global in nature, our international partners’ ability to defend against these threats directly impacts U.S. national security. Many of our allies, particularly in Eastern Europe and Southeast Asia, lack robust cybersecurity education and training programs, further exacerbating the global skills gap.
A key factor contributing to this shortage is the lack of accessible, flexible pathways into cybersecurity careers. Traditional education and training programs often fail to keep pace with rapidly evolving technology and threat landscapes. Moreover, they frequently overlook the potential of career changers and nontraditional students who could bring valuable diverse perspectives to the field.
However, this challenge presents a unique opportunity to revolutionize cybersecurity education and workforce development. By leveraging innovative approaches such as apprenticeships, micro-credentials, stackable certifications, peer-to-peer learning platforms, digital badges, and competition-based assessments, we can create more agile and responsive training programs. These methods can provide learners with immediately applicable skills while allowing for continuous upskilling as the field evolves.
Furthermore, there’s an opportunity to enhance cybersecurity awareness and basic skills among all American workers, not just those in dedicated cyber roles. As digital technologies permeate every aspect of modern work, a baseline level of cyber hygiene and security consciousness is becoming essential across all sectors.
By addressing these challenges through a comprehensive Cyber Workforce Action Plan, we can not only strengthen our national cybersecurity posture but also create new pathways to well-paying, high-demand jobs for Americans from all backgrounds. This initiative has the potential to position the United States as a global leader in cyber workforce development, enhancing both our national security and our economic competitiveness in the digital age.
Evidence of Existing Initiatives
While numerous excellent cybersecurity workforce development initiatives exist, they often operate in isolation, lacking cohesion and coordination. ONCD is positioned to leverage its whole-of-government approach and the groundwork laid by its National Cyber Workforce and Education Strategy (NCWES) to unite these disparate efforts. By bringing together the strengths of various initiatives and their stakeholders, ONCD can transform high-level strategies into concrete, actionable steps. This coordinated approach will maximize the impact of existing resources, reduce duplication of efforts, and create a more robust and adaptable cybersecurity workforce development ecosystem. This proposed Action Plan is the vehicle to turn these collective workforce-minded strategies into tangible, measurable outcomes.
At the foundation of this plan lies the NICE Cybersecurity Workforce Framework, developed by NIST. This common lexicon for cybersecurity work roles and competencies provides the essential structure upon which we can build. The Cyber Workforce Action Plan seeks to expand on this foundation by creating standardized assessments and implementation guidelines that can be adopted across both public and private sectors.
Micro-credentials, stackable certifications, digital badges, and other innovations in accessible education—as demonstrated by programs like SANS Institute’s GIAC certifications and CompTIA’s offerings—form a core component of the proposed plan. These modular, skills-based learning approaches allow for rapid validation of specific competencies—a crucial feature in the fast-evolving cybersecurity landscape. The Action Plan aims to standardize and coordinate these and similar efforts, ensuring widespread recognition and adoption of accessible credentials across industries.
The array of gamification and competition-based learning approaches—including but not limited to National Cyber League, SANS NetWars, and CyberPatriot—are also exemplary starting points that would benefit from greater federal engagement and coordination. By formalizing these methods within education and workforce development programs, the government can harness their power to simulate real-world scenarios and drive engagement at a national scale.
Incorporating lessons learned from the federal government’s previous DoE CTE CyberNet program, the National Science Foundation’s (NSF) Scholarship for Service Program (SFS), and the National Security Agency’s (NSA) GenCyber camps, the Action Plan emphasizes the importance of early engagement (the middle grades and early high school years) and practical, hands-on learning experiences. By extending these principles across all levels of education and professional development, we can create a continuous pathway from high school through to advanced career stages.
A Cyber Workforce Action Plan would provide a unifying praxis to standardize competency assessments, create clear pathways for career progression, and adapt to the evolving needs of both the public and private sectors. By building on the successes of existing initiatives and introducing innovative solutions to fill critical gaps in the cybersecurity talent pipeline, we can create a more robust, diverse, and skilled cybersecurity workforce capable of meeting the complex challenges of our digital future.
Plan of Action
Recommendation 1. Create a Cyber Workforce Action Plan.
ONCD will develop and oversee the plan, in close collaboration with DoE, NIST, NSA, and other relevant agencies. The plan has three distinct components:
1. Develop standardized assessments aligned with the NICE framework. ONCD will work with NIST to create a suite of standardized assessments to evaluate cybersecurity competencies that:
- Cover the full range of knowledge, skills, and abilities defined in the NICE framework.
- Include both theoretical knowledge tests and practical, scenario-based evaluations.
- Are regularly updated to reflect evolving cybersecurity threats and technologies.
- Are designed with input from both government and industry cybersecurity professionals to ensure relevance and applicability.
2. Establish a system of stackable and portable micro-credentials. To provide flexible and accessible pathways into cybersecurity careers, ONCD will work with DoE, NIST, and the private sector to help develop and support systems of micro-credentials (a minimal illustrative sketch of such a credential record follows this list) that are:
- Aligned with specific competencies in the NICE framework: NIST, as the national standards-setting body, will issue these credentials to ensure alignment with the NICE framework. This will provide legitimacy and broad recognition across industries.
- Stackable, allowing learners to build towards larger certifications or degrees: These credentials will be designed to allow individuals to accumulate certifications over time, ultimately leading to more comprehensive qualifications or degrees.
- Portable across different sectors and organizations: The micro-credentials will be recognized by both government agencies and private-sector employers, ensuring they have value regardless of where an individual seeks employment.
- Recognized and valued by both government agencies and private-sector employers: By working closely with the private sector—where credentialing systems like those from CompTIA and Google are already advanced—the ONCD will help ensure government-issued credentials are not duplicative but complementary to existing industry standards. NIST’s involvement, combined with input from private-sector leaders, will provide confidence that these credentials are relevant and accepted in both public and private sectors.
- Designed to facilitate rapid upskilling and reskilling in response to evolving cybersecurity needs: Given the rapidly changing landscape of cybersecurity threats, these micro-credentials will be regularly updated to reflect the most current technologies and skills, enabling professionals to remain agile and competitive.
3. Integrate more closely with existing federal initiatives. The Action Plan will be integrated with existing federal cybersecurity programs and initiatives, including:
- DHS’s Cybersecurity Talent Management System
- DoD’s Cyber Excepted Service
- NIST’s NICE framework
- NSF’s CyberCorps SFS program
- NSA’s GenCyber camps
This proposal emphasizes stronger integration with existing federal initiatives and greater collaboration with the private sector. Instead of creating entirely new credentialing standards, ONCD will explore opportunities to leverage widely adopted commercial certifications, such as those from Google, CompTIA, and other private-sector leaders. By selecting and promoting recognized commercial standards where applicable, ONCD can streamline efforts, avoiding duplication and ensuring the cybersecurity workforce development approach is aligned with what is already successful in industry. Where necessary, ONCD will work with NIST and industry professionals to ensure these commercial certifications meet federal needs, creating a more cohesive and efficient approach across both government and industry. This integrated public-private strategy will allow ONCD to offer a clear leadership structure and accountability mechanism while respecting and utilizing commercial technology and standards to address the scale and complexity of the cybersecurity workforce challenge.
The Cyber Workforce Action Plan will emphasize strong collaborations with the private sector, including the establishment of a Federal Cybersecurity Curriculum Advisory Board composed of experts from relevant federal agencies and leading private-sector companies. This board will work directly with universities to develop model curricula that incorporate the latest cybersecurity tools, techniques, and threat landscapes, ensuring that graduates are well-prepared for the specific challenges faced by both federal and private-sector cybersecurity professionals.
To provide hands-on learning opportunities, the Action Plan will include a new National Cyber Internship Program. Managed by the Department of Labor in partnership with DHS’s Cybersecurity and Infrastructure Security Agency (CISA) and leading technology companies, the program will match students with government agencies and private-sector companies. An online platform will be developed, modeled after successful programs like Hacking for Defense, where industry partners can propose real-world cybersecurity projects for student teams.
To incentivize industry participation, the General Services Administration (GSA) and DoD will update federal procurement guidelines to require companies bidding on cybersecurity-related contracts to certify that they offer internship or early-career opportunities for cybersecurity professionals. Additionally, CISA will launch a “Cybersecurity Employer of Excellence” certification program, which will be a prerequisite for companies bidding on certain cybersecurity-related federal contracts.
The Action Plan will also address the global nature of cybersecurity challenges by incorporating international cooperation elements. This includes adapting the plan for international use in strategically important regions, facilitating joint training programs and professional exchanges with allied nations, and promoting global standardization of cybersecurity education through collaboration with international standards organizations.
Ultimately, this effort intends to implement a national standard for cybersecurity competencies—providing clear, accessible pathways for career progression and enabling more agile and responsive workforce development in this critical field.
Recommendation 2. Implement an enhanced CyberCorps fellowship program.
ONCD should expand the NSF’s CyberCorps Scholarship for Service program as an immediate, high-impact initiative. Key features of the expanded CyberCorps fellowship program include:
1. Comprehensive talent pipeline: While maintaining the current SFS focus on students, the enhanced CyberCorps will also target recent graduates and early-career professionals with 1–5 years of work experience. This expansion addresses immediate workforce needs while continuing to invest in future talent. The program will offer competitive salaries, benefits, and loan forgiveness options to attract top talent from both academic and private-sector backgrounds.
2. Multiagency exposure and optional rotations: While cross-sector exposure remains valuable for building a holistic understanding of cybersecurity challenges, the rotational model will be optional or limited based on specific agency needs. Fellows may be offered the opportunity to rotate between agencies or sectors only if their skill set and the hosting agency’s environment are conducive to short-term placements. For fellows placed in agencies or sectors where longer ramp-up times are expected, a deeper, longer-term placement may be more effective. Drawing on lessons from the Presidential Innovation Fellows and the U.S. Digital Corps, the program will emphasize flexibility to ensure that fellows can make meaningful contributions within the time frame and that knowledge transfer between sectors remains a core objective.
3. Advanced mentorship and leadership development: Building on the SFS model, the expanded program will foster a strong community of cyber professionals through cohort activities and mentorship pairings with senior leaders across government and industry. A new emphasis on leadership training will prepare fellows for senior roles in government cybersecurity.
4. Focus on emerging technologies: Complementing the SFS program’s core cybersecurity curriculum, the expanded CyberCorps will emphasize cutting-edge areas such as artificial intelligence in cybersecurity, quantum computing, and advanced threat detection. This focus will prepare fellows to address future cybersecurity challenges.
5. Extended impact through education and mentorship: The program will encourage fellows to become cybersecurity educators and mentors in their communities after their service, extending the program’s impact beyond government service and strengthening America’s overall cyber workforce.
By implementing these enhancements to the CyberCorps program as a first step and quick win, the Action Plan will initiate a more comprehensive approach to federal cybersecurity workforce development. The enhanced CyberCorps fellowship program will also emphasize diversity and inclusion to address the critical shortage of cybersecurity professionals and bring fresh perspectives to cyber challenges. The program will actively recruit individuals from underrepresented groups, including women, people of color, veterans, and neurodivergent individuals.
To achieve this, the program will partner with organizations like Girls Who Code and the Hispanic IT Executive Council to promote cybersecurity careers and expand the applicant pool. The Department of Labor, in conjunction with the NSF, will establish a Cyber Opportunity Fund to provide additional scholarships and grants for individuals from underrepresented groups pursuing cybersecurity education through the CyberCorps program.
In addition, the program will develop standardized apprenticeship components that provide on-the-job training and clear pathways to full-time employment, with a focus on recruiting from diverse industries and backgrounds. Furthermore, partnerships with Historically Black Colleges and Universities, Hispanic-Serving Institutions, and Tribal Colleges and Universities will be strengthened to enhance their cybersecurity programs and create a pipeline of diverse talent for the CyberCorps program.
The CyberCorps program will expand its scope to include an international component, allowing for exchanges with allied nations’ cybersecurity agencies and bringing international students to U.S. universities for advanced studies. This will help position the United States as a global leader in cybersecurity education and training while fostering a worldwide community of professionals capable of responding effectively to evolving cyber threats.
By incorporating these elements, the enhanced CyberCorps fellowship program will not only address immediate federal cybersecurity needs but also contribute to building a diverse, skilled, and globally aware cybersecurity workforce for the future.
Implementation Considerations
To successfully establish and execute the comprehensive Action Plan and its associated initiatives, careful planning and coordination across multiple agencies and stakeholders will be essential. Below are some of the key timeline and funding considerations the ONCD should factor into its implementation.
Key milestones and actions for the first two years
Months 1–6:
- Create the Cyber Workforce Action Plan as a roadmap to implementing ONCD’s NCWES.
- Form interagency working group and private-sector advisory board.
- NIST’s Information Technology Laboratory, in collaboration with industry partners, will begin the development of the standardized assessment system and micro-credentials framework.
- Initiate the Federal Cybersecurity Curriculum Advisory Board.
- Launch the expanded CyberCorps fellowship program recruitment.
Months 7–12:
- Implement pilot programs for standardized assessments and micro-credentials.
- Begin first cohort of expanded CyberCorps fellows.
- Launch diversity and inclusion initiatives, including the “Cyber for All” awareness campaign.
- Initiate the National Cyber Internship Program.
- Begin development of the Cybersecurity Employer of Excellence recognition program.
Months 13–18:
- Pilot standardized assessments and micro-credentials system in select agencies and educational institutions, with full rollout anticipated after evaluation and adjustments based on feedback.
- Expand CyberCorps program and university partnerships.
- Implement private-sector internship and project-based learning programs.
- Launch the International Cybersecurity Workforce Alliance.
Months 19–24:
- Implement tax incentives for industry participation in workforce development.
- Establish the Cybersecurity Development Fund for international capacity building.
- Conduct first annual review of diversity and inclusion metrics in federal cyber workforce.
Program evaluation and quality assurance
Beyond these key milestones, the Action Plan must establish clear evaluation frameworks to ensure program quality and effectiveness, particularly for integrating non-federal education programs into federal hiring pathways. For example, to address the Office of Personnel Management’s (OPM) need to evaluate non-federal technical and career education programs under the Recent Graduates Program, the Action Plan will implement the following evaluation framework (a simple illustrative check against these criteria follows the list):
- Alignment with NICE framework competencies (minimum 80% coverage of core competencies)
- Completion of NIST-approved standardized technical assessments
- Documentation of supervised practical experience (minimum 400 hours)
- Evidence of quality assurance processes comparable to registered apprenticeship programs
- Regular curriculum updates (minimum annually) to reflect current security threats
- Industry partnership validation through the Cybersecurity Employer of Excellence program
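The sketch below shows one way the criteria above could be turned into a mechanical screening check for program submissions. The thresholds mirror the list; the field names and report structure are assumptions for illustration, not an OPM or ONCD data standard.

```python
from dataclasses import dataclass

@dataclass
class ProgramReport:
    """Hypothetical self-report from a non-federal cybersecurity education program."""
    nice_core_coverage: float            # fraction of core NICE competencies covered
    passed_standardized_assessments: bool
    supervised_hours: int
    has_qa_process: bool                 # comparable to a registered apprenticeship program
    months_since_curriculum_update: int
    employer_of_excellence_partner: bool

def meets_framework(report: ProgramReport) -> bool:
    """Apply the thresholds from the evaluation framework listed above."""
    return (
        report.nice_core_coverage >= 0.80
        and report.passed_standardized_assessments
        and report.supervised_hours >= 400
        and report.has_qa_process
        and report.months_since_curriculum_update <= 12
        and report.employer_of_excellence_partner
    )

print(meets_framework(ProgramReport(0.85, True, 450, True, 6, True)))   # True
print(meets_framework(ProgramReport(0.85, True, 300, True, 6, True)))   # False: too few supervised hours
```

A simple pass/fail gate like this would be a first screen; the advisory board described below would still review borderline cases and the quality of the underlying evidence.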
The implementation of these criteria will be overseen by the same advisory board established in Months 1–6, expanding its scope to include program evaluation and certification. This approach leverages existing governance structures while providing OPM with quantifiable metrics to evaluate non-federal program graduates.
Budgetary, resource, and personnel needs
The estimated annual budget for the proposed initiative ranges from $125 million to $200 million. This range considers cost-effective resource allocation strategies, including the integration of existing platforms and focused partnerships. Key components of the program include:
- Staffing: A core team of 15–20 full-time employees will oversee the centralized program office, focusing on high-level coordination and oversight. Specialized tasks such as curriculum development and assessment design will be contracted to external partners, reducing the need for a larger in-house team.
- IT infrastructure: Rather than building new systems from scratch, the initiative will use existing platforms and credentialing technologies from private-sector providers (e.g., CompTIA, Coursera). This significantly reduces upfront development costs while ensuring a robust system for managing assessments and credentials.
- Marketing and outreach: A smaller but targeted budget will be allocated for domestic and international outreach to raise awareness of the program. Partnerships with industry and educational institutions will help amplify these efforts, reducing the need for a large marketing budget.
- Grants and partnerships: The program will provide modest grants to universities to support curriculum development, with a focus on fostering partnerships rather than large-scale financial commitments. This allows for more cost-effective collaboration with educational institutions.
- Fellowship programs and international exchanges: The expanded CyberCorps fellowship will begin with a smaller cohort, scaling up based on available funding and demonstrated success. International exchanges will be limited to strategic, high-impact partnerships to ensure cost efficiency while still addressing global cybersecurity needs.
Potential funding sources
Funding for this initiative can be sourced through a variety of channels. First, congressional appropriations via the annual budget process are expected to provide a significant portion of the financial support. Additionally, reallocating existing funds from cybersecurity and workforce development programs could account for approximately 25–35% of the overall budget. This reallocation could include funding from current programs like NICE, SFS, and other workforce development grants, which can be repurposed to support this broader initiative without requiring entirely new appropriations.
Public-private partnerships will also be explored, with potential contributions from industry players who recognize the value of a robust cybersecurity workforce. Grants from federal entities such as DHS, DoD, and NSF are viable options to supplement the program’s financial needs. To offset costs, fees collected from credentialing and training programs could serve as an additional revenue stream.
Finally, the Action Plan and its initiatives will seek contributions from international development funds aimed at capacity-building, as well as financial support from allied nations to aid in the establishment of joint international programs.
Conclusion
Establishing a comprehensive Cyber Workforce Action Plan represents a pivotal move toward securing America’s digital future. By creating flexible, accessible career pathways into cybersecurity, fostering innovative education and training models, and promoting both domestic diversity and international cooperation, this initiative addresses the urgent need for a skilled and resilient cybersecurity workforce.
The impact of this proposal is wide-ranging. It will not only reinforce national security by strengthening the nation’s cyber defenses but also contribute to economic growth by creating high-paying jobs and advancing U.S. leadership in cybersecurity on the global stage. By expanding access to cybersecurity careers and engaging previously underutilized talent pools, this initiative will ensure the workforce reflects the diversity of the nation and is prepared to meet future cybersecurity challenges.
The next administration must make the implementation of this plan a national priority. As cyber threats grow more complex and sophisticated, the nation’s ability to defend itself depends on developing a robust, adaptable, and highly skilled cybersecurity workforce. Acting swiftly to establish this strategy will build a stronger, more resilient digital infrastructure, ensuring both national security and economic prosperity in the 21st century. We urge the administration to allocate the necessary resources and lead the transformation of cybersecurity workforce development. Our digital future—and our national security—demand immediate action.
Teacher Education Clearinghouse for AI and Data Science
The next presidential administration should develop a teacher education and resource center that includes vetted, free, self-guided professional learning modules, resources to support data-based classroom activities, and instructional guides pertaining to different learning disciplines. This would provide critical support to teachers to better understand and implement data science education and use of AI tools in their classroom. Initial resource topics would be:
- An Introduction to AI, Data Literacy, and Data Science
- AI & Data Science Pedagogy
- AI and Data Science for Curriculum Development & Improvement
- Using AI Tools for Differentiation, Assessment & Feedback
- Data Science for Ethical AI Use
In addition, this resource center would develop and host free, pre-recorded, virtual training sessions to help educators and district professionals better understand these resources and practices so they can bring them back to their own contexts. This work would improve teacher practice and cut administrative burdens. A teacher education resource would lessen the digital divide and ensure that our educators are prepared to support their students in understanding how to use AI tools, so that each and every student can be college and career ready and competitive at the global level. This resource center would be developed using a process similar to the What Works Clearinghouse, such that it does not endorse a particular system or curriculum but instead provides a quality rating based on the evidence provided.
Challenge and Opportunity
AI is an incredible technology that has the power to revolutionize many areas, especially how educators teach and prepare the next generation to be competitive in higher education and the workforce. A recent RAND study found that education leaders see promise in using AI to adapt instructional content to the level of their students and to generate instructional materials and lesson plans. While this technology holds a wealth of promise, the field has developed so rapidly that people across the workforce do not understand how best to take advantage of AI-based technologies. One of the most crucial areas is education. AI-enabled tools have the potential to improve instruction, curriculum development, and assessment, but most educators have not received adequate training to feel confident using them in their pedagogy. In a Spring 2024 pilot study (Beiting-Parrish & Melville, in preparation), initial results indicated that 64.3% of educators surveyed had not had any professional development or training in how to use AI tools. More than 70% of educators surveyed felt they did not know how to pick AI tools that are safe for classroom use or how to detect biased tools. The RAND study likewise indicated that only 18% of educators reported using AI tools for classroom purposes, and approximately half of those did so because they had been specifically recommended or directly provided a tool for classroom use. This suggests that educators need substantial support in choosing and deploying tools for classroom use. Providing guidance and resources to support vetting tools for safe, ethical, appropriate, and effective instruction is one of the cornerstone missions of the Department of Education. This responsibility should not rest on the shoulders of individual educators, who have varying levels of technical and curricular knowledge; this is especially true for veteran teachers who have been teaching for more than a decade.
If teachers themselves lack the professional development or expertise to select and teach new technology, they cannot be expected to thoroughly prepare their students to understand emerging technologies such as AI, nor the underpinning concepts needed to understand them, most notably data science and statistics. As a result, students’ futures are being put at risk by a lack of emphasis on data literacy that is apparent across the nation. Recent results from the National Assessment of Educational Progress (NAEP) show a steep decline in student performance in data literacy, probability, and statistics, outpacing declines in other content areas. In 2019, the NAEP High School Transcript Study (HSTS) revealed that only 17% of students completed a course in statistics and probability, and less than 10% of high school students completed AP Statistics. Furthermore, the HSTS showed that less than 1% of students completed a dedicated course in modern data science or applied data analytics in high school. Students are graduating with record-low proficiency in data, statistics, and probability, and without exposure to modern data science techniques. At the same time that students’ data and digital literacy are faltering, AI-generated content is proliferating online; students are not building the critical thinking skills and discerning eye needed to distinguish what is real from what has been AI-generated, and they are not prepared to enter booming sectors of the workforce. The future the nation’s students will inherit is one in which experience with AI tools and Big Data will be expected of anyone who wants to be competitive in the workforce.
Whether students are missing this content because it is not given due priority or because teachers are not comfortable teaching it, AI and Big Data are here, and our educators do not have the tools to help students get ready for a world in the midst of a data revolution. Veteran educators and preservice education programs alike may lack the grounding in statistics, data literacy, and data science needed to feel comfortable teaching about and using AI tools in their classes. Additionally, many standard assessment and practice tools are no longer fit for purpose in a world where any student with proper prompting can generate an A-quality paper in seconds. The rise of AI-generated content has created a new frontier in information literacy: students need to learn to question the output of publicly available LLM-based tools such as ChatGPT and to be more critical of what they see online, given the rise of AI-generated deepfakes, and educators need to understand how to incorporate these tools into their classrooms or teach about them effectively. Whether educators are ready or not, the existing Digital Divide could widen depending on whether they know how to help students use AI safely and effectively and have access to the resources and training to do so.
The United States finds itself at a crossroads in the global data boom. Demand in the economic marketplace, along with threats to national security from artificial intelligence and mal-, mis-, and disinformation, leave educators facing an urgent problem in need of an immediate solution. In August 1958, 66 years ago, Congress passed the National Defense Education Act (NDEA), emphasizing teaching and learning in science and mathematics. Passed in direct response to the launch of Sputnik, the law supplied massive funding to “insure trained manpower of sufficient quality and quantity to meet the national defense needs of the United States.” The U.S. Department of Education, in partnership with the White House Office of Science and Technology Policy, must make bold moves now to create such a solution, as Congress did once before.
Plan of Action
In the years since the Space Race, one problem with STEM education persists: K-12 classrooms still teach students largely the same content; the progression of high school mathematics through algebra, geometry, and trigonometry, for example, is essentially unchanged. We are no longer in a race to space; we are now racing to keep pace with data. Data security, artificial intelligence, machine learning, and other mechanisms of our new information economy are all connected to national security, yet we do not have educators with the capacity to properly equip today’s students with the skills to meet current challenges on a global scale. Without a resource center to house the professional development and classroom activities America’s educators are urgently calling for, U.S. progress and leadership in fields that rely on AI and Big Data will continue to dwindle, and our national security will remain at risk. It is past time for a new take on the NDEA, one that emphasizes modern topics in the teaching and learning of mathematics and science by way of data science, data literacy, and artificial intelligence.
Previously, the Department of Education has created resource repositories to support the dissemination of information to the larger educational practice and research community. One such example is the What Works Clearinghouse (WWC), a federally vetted library of resources on educational products and empirical research. The WWC was created to help cut through the noise of competing educational product claims and ensure that only high-quality tools and research were being shared. A similar problem is now emerging with AI and data science resources: there is a great deal of material online, but much of it is of dubious quality or even spreads erroneous information.
To address this, we suggest creating something similar to the WWC, focused on vetted materials for educator and student learning around AI and data science. We propose the creation of a Teacher Education Clearinghouse (TEC) under the Institute of Education Sciences, in partnership with the Office of Educational Technology. The WWC currently costs approximately $2,500,000 to run, so we anticipate a similar budget for the TEC website. The resource vetting process would begin with a Request for Information (RFI) encouraging educators and administrators across the field to submit high-quality materials, which would then be vetted against an evaluation framework designed to identify high-quality resources.
For example, the RFI might request example materials or lesson goals for the following subjects:
- An Introduction to AI, Data Literacy, and Data Science
- Introduction to AI & Data Science Literacy & Vocabulary
- Foundational AI Principles
- Cross-Functional Data Literacy and Data Science
- LLMs and How to Use Them
- Critical Thinking and Safety Around AI Tools
- AI & Data Science Pedagogy
- AI and Data Science for Curriculum Development & Improvement
- Using AI Tools for Differentiation, Assessment & Feedback
- Data Science for Safe and Ethical AI Use
- Characteristics of Potentially Biased Algorithms and Their Shortcomings
A framework for evaluating how useful these contributions might be for the Teacher Education Clearinghouse would consider the following principles:
- Accuracy and relevance to subject matter
- Availability of existing resources vs. creation of new resources
- Ease of instructor use
- Likely classroom efficacy
- Safety, responsible use, and fairness of proposed tool/application/lesson
This would also include a series of quick-start guidebooks, broken down by topic, with resources on foundational subjects such as “Introduction to AI” and “Foundational Data Science Vocabulary”.
When complete, this process would result in a national resource library housing a free series of asynchronous professional learning opportunities and classroom materials, activities, and datasets. This work could be promoted through the larger Department of Education as well as through the Regional Educational Laboratory program and state-level stakeholders. The professional learning would consist of prerecorded virtual trainings and related materials (e.g., slide decks, videos, and interactive lesson components). The materials would include educator-facing resources to support professional development in Big Data and AI, alongside student-facing AI literacy lessons that teachers could use with their students. All materials would be publicly available for download on an ED-owned website, allowing educators from any district, and of any experience level, to access materials that improve their understanding and pedagogy. This especially benefits educators in less-resourced environments, who could still access the training they need to support their students regardless of local capacity for potentially expensive training and resource acquisition. Now is the time to create such a resource center: no set of vetted, reliable resources is currently available and accessible to the larger educator community, and teachers urgently need these resources to support themselves and their students in using these tools thoughtfully and safely. Successful development of this resource center would increase educator understanding of AI and data science, raising the standing of U.S. students on international measures such as the International Computer and Information Literacy Study (ICILS) and increasing participation in STEAM fields that rely on these skills.
Conclusion
The field of education is at a turning point. Advances in AI and Big Data necessitate an increased focus on these areas in the K-12 classroom, yet most educators lack the preparation needed to teach these topics well enough to fully prepare their students. For the United States to remain a competitive global power in technology and innovation, we need a workforce that understands how to use, apply, and develop new innovations with AI and data science. This proposal for a library of high-quality, open-source, vetted materials would democratize professional development for all educators and, through them, their students.
Modernizing AI Analysis in Education Contexts
The 2022 release of ChatGPT and subsequent foundation models sparked a generative AI (GenAI) explosion in American society, driving rapid adoption of AI-powered tools in schools, colleges, and universities nationwide. Education technology was among the first real-world contexts in which ChatGPT was developed and tested. A recent national survey indicated that nearly 50% of teachers, students, and parents use GenAI chatbots in school, and over 66% of parents and teachers believe that GenAI chatbots can help students learn more and faster. While this innovation is exciting and holds tremendous promise to personalize education, educators, families, and researchers are concerned that AI-powered solutions may not be equally useful, accurate, and effective for all students, particularly students from minoritized populations. Bias may be addressed as the technology matures; however, to ensure that students are not harmed as these tools become more widespread, it is critical for the Department of Education to provide guidance that helps education decision-makers evaluate AI solutions during procurement, supports EdTech developers in detecting and mitigating bias in their applications, and develops new fairness methods so that these solutions serve the students with the most to gain from our educational systems. Creating this guidance will require the Department of Education to declare the issue a priority and to resource an independent organization with the expertise needed to deliver these services.
Challenge and Opportunity
Known Bias and Potential Harm
There are many examples of AI-based systems introducing more bias into an already-biased system. One example with widely varying results across student groups is the use of GenAI tools to detect AI-generated text as a form of plagiarism. Liang et al. found that several GPT-based plagiarism checkers frequently flagged the writing of students for whom English is not their first language as AI-generated, even though their work was written before ChatGPT was available; the same errors did not occur for text written by native English speakers. By contrast, Jiang (2024) found no bias against non-native English speakers when distinguishing human-authored essays from ChatGPT-generated essays written in response to GRE analytical writing prompts, an example of how thoughtful AI tool design and representative sampling in the training set can achieve fairer outcomes and mitigate bias.
Beyond bias, researchers have raised broader concerns about the efficacy of these tools for all students; still, understanding how results differ across subpopulations, and where bias may arise, is a critical part of deciding whether teachers should use these tools in classrooms. For AI-based tools to be usable in high-stakes educational contexts such as testing, detecting and mitigating bias is essential, particularly when the consequences of an error are severe, as they can be for students from minoritized populations who may not have the resources to recover from one (e.g., failing a course or being prevented from graduating).
Another example of algorithmic bias, predating the widespread emergence of GenAI, illustrates the potential harms: the Wisconsin Dropout Early Warning System. This AI-based tool was designed to flag students who may be at risk of dropping out of school; however, an analysis of its predictions found that the system disproportionately flagged African American and Hispanic students as likely to drop out when most of these students were not actually at risk. When a teacher learns that a student has been flagged as at risk, it may change how they approach that student, leading to further negative treatment and consequences, creating a self-fulfilling prophecy and denying the student the educational opportunities and confidence they deserve. These are only two of many examples of the consequences of using systems with underlying bias, and they demonstrate how critical it is to conduct fairness analysis before these systems are used with actual students.
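The kind of disparity described in the Wisconsin analysis can be examined post hoc with a very simple calculation: among students who did not actually drop out, compare how often each group was flagged as at risk. The sketch below assumes a pandas DataFrame with illustrative column names; it is not the actual analysis performed on that system.

```python
# Minimal sketch (illustrative, not the Wisconsin system's actual analysis) of a
# post hoc disparity check for an early-warning system: among students who did
# NOT ultimately drop out, compare how often each group was flagged as at risk
# (the false-positive rate). Column names are assumptions.
import pandas as pd

def false_positive_rates(predictions: pd.DataFrame) -> pd.Series:
    """Share of non-dropouts flagged as at risk, by demographic group."""
    non_dropouts = predictions[predictions["dropped_out"] == 0]
    return non_dropouts.groupby("group")["flagged_at_risk"].mean()

# Toy usage: a fair system would show similar rates across groups.
example = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "flagged_at_risk": [1, 0, 1, 1, 0, 0],
    "dropped_out": [0, 0, 0, 0, 0, 1],
})
print(false_positive_rates(example))
```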
Existing Guidance on Fair AI & Standards for Education Technology Applications
Guidance for Education Technology Applications
Given the harms that algorithmic bias can cause in educational settings, there is an opportunity to provide national guidelines and best practices that help educators avoid them. The Department of Education is already responsible for protecting student privacy and provides guidelines via the Every Student Succeeds Act (ESSA) evidence levels for evaluating the quality of EdTech solution evidence. The Office of Educational Technology, with support from a private nonprofit organization (Digital Promise), has developed guidance documents for teachers and administrators and for education technology developers (U.S. Department of Education, 2023, 2024). In particular, “Designing for Education with Artificial Intelligence” includes guidance for EdTech developers, including an entire section called “Advancing Equity and Protecting Civil Rights” that describes algorithmic bias and suggests that “Developers should proactively and continuously test AI products or services in education to mitigate the risk of algorithmic discrimination” (p. 28). While this is a good overall guideline, the document does not give developers the specifics they need to actually conduct these tests.
Similarly, the National Institute of Standards and Technology has released a publication on identifying and managing bias in AI. While this publication highlights several stages of the development process and several fairness metrics, it does not provide specific guidelines for applying those metrics, nor is it exhaustive. Finally, demonstrating the interest of industry partners, the EDSAFE AI Alliance, a philanthropically funded alliance representing a diverse group of educational technology companies, has also created guidance in the form of the 2024 SAFE (Safety, Accountability, Fairness, and Efficacy) Framework. Within the Fairness section of the framework, the authors highlight the importance of using fair training data, monitoring for bias, and ensuring accessibility of any AI-based tool. But again, this framework does not provide specific actions that education administrators, teachers, or EdTech developers can take to ensure these tools are fair and not biased against specific populations. The risk to these populations, together with the limits of existing efforts, demonstrates the need for further work to develop new approaches that can be used in the field.
Fairness in Education Measurement
As AI is increasingly used in education, the field of educational measurement has begun creating a set of analytic approaches for detecting algorithmic bias, many of them based on existing approaches to uncovering bias in educational testing. One common tool is Differential Item Functioning (DIF) analysis, which checks that test questions are fair for all students regardless of their background; for example, it checks that native English speakers and students learning English have an equal chance of answering a question correctly when they have the same level of knowledge. When differences are found, it indicates that performance on that question depends on something other than students’ knowledge of the content.
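To make the idea concrete, one common DIF screen regresses success on an item against an ability proxy (such as total test score) and tests whether adding group membership improves the model. The sketch below, using Python and statsmodels with simulated data and illustrative column names, shows this general approach rather than any specific testing program's procedure.

```python
# Minimal sketch of a logistic-regression DIF screen (one common approach among
# several; not a specific testing program's procedure). Assumes one row per
# student with illustrative columns: item_correct (0/1), total_score (a proxy
# for ability), and group (0 = reference, 1 = focal).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def logistic_dif_screen(df: pd.DataFrame) -> dict:
    """Test whether group membership predicts item success beyond ability."""
    # Model 1: item success explained by ability alone.
    base = sm.Logit(df["item_correct"], sm.add_constant(df[["total_score"]])).fit(disp=False)
    # Model 2: ability plus group membership.
    full = sm.Logit(df["item_correct"], sm.add_constant(df[["total_score", "group"]])).fit(disp=False)
    # Likelihood-ratio test: a significant improvement suggests uniform DIF.
    lr_stat = 2 * (full.llf - base.llf)
    return {
        "group_coefficient": full.params["group"],
        "lr_statistic": lr_stat,
        "p_value": chi2.sf(lr_stat, df=1),
    }

# Toy usage with simulated data in which the item is harder for the focal group
# even at equal ability, so the screen should flag it.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)
ability = rng.normal(size=n)
total_score = np.round(ability * 5 + 25 + rng.normal(scale=2, size=n))
prob_correct = 1 / (1 + np.exp(-(ability - 0.6 * group)))
item_correct = rng.binomial(1, prob_correct)
df = pd.DataFrame({"item_correct": item_correct, "total_score": total_score, "group": group})
print(logistic_dif_screen(df))
```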
While DIF checks have been used for several decades as a best practice in standardized testing, a comparable process in the use of AI for assessment purposes does not yet exist. There also is little historical precedent indicating that for-profit educational companies will self-govern and self-regulate without a larger set of guidelines and expectations from a governing body, such as the federal government.
We are at a critical juncture as school districts begin adopting AI tools with minimal guidance or guardrails, and all signs point to an increase of AI in education. The U.S. Department of Education has an opportunity to take a proactive approach to AI fairness through strategic programs of support for school leadership, educational technology developers, and experts in the field. The federal government should bring all educational stakeholders under a common vision for AI fairness while adoption of the technology in education is still in its early stages.
Plan of Action
To address this situation, the Department of Education’s Office of the Chief Data Officer should lead development of a national resource that provides direct technical assistance to school leadership, supports software developers and vendors of AI tools in creating quality tech, and invests resources to create solutions that can be used by both school leaders and application developers. This office is already responsible for data management and asset policies, and provides resources on grants and artificial intelligence for the field. The implementation of these resources would likely be carried out via grants to external actors with sufficient technical expertise, given the rapid pace of innovation in the private and academic research sectors. Leading the effort from this office ensures that these advances are answering the most important questions and can integrate them into policy standards and requirements for education solutions. Congress should allocate additional funding to the Department of Education to support the development of a technical assistance program for school districts, establish new grants for fairness evaluation tools that span the full development lifecycle, and pursue an R&D agenda for AI fairness in education. While it is hard to provide an exact estimate, similar existing programs currently cost the Department of Education between $4 and $30 million a year.
Action 1. The Department of Education Should Provide Independent Support for School Leadership Through a Fair AI Technical Assistance Center (FAIR-AI-TAC)
School administrators are hearing about the promise and concerns of AI solutions in the popular press, from parents, and from students. They are also being bombarded by education technology providers with new applications of AI within existing tools and through new solutions.
These busy school leaders do not have time to learn the details of AI and bias analysis, nor do they have the technical background required to conduct deep technical evaluations of fairness within AI applications. Leaders are forced to either reject these innovations or implement them and expose their students to significant potential risk with the promise of improved learning. This is not an acceptable status quo.
To address these issues, the Department of Education should create an AI Technical Assistance Center (the Center) that is tasked with providing direct guidance to state and local education leaders who want to incorporate AI tools fairly and effectively. The Center should be staffed by a team of professionals with expertise in data science, data safety, ethics, education, and AI system evaluation. Additionally, the Center should operate independently of AI tool vendors to maintain objectivity.
There is precedent for this type of technical support. The U.S. Department of Education’s Privacy Technical Assistance Center (PTAC) provides guidance on data privacy and security procedures and processes to meet FERPA requirements; it operates a help desk via phone or email, develops training materials for broad use, and provides targeted training and technical assistance for leaders. A similar center could be stood up to support education leaders who need help evaluating proposed policy or procurement decisions.
This Center should provide a structured consulting service with levels of support matched to each stakeholder’s needs and to the potential impact on learners of the system or tool being evaluated, ranging from basic AI literacy to active support in choosing technological solutions for educational purposes. The Center should partner with external organizations to develop a certification system for high-quality AI educational tools that have passed a series of fairness checks. A fairness certification, operationalized by third-party evaluators, would make it much easier for school leaders to recognize and adopt fair AI solutions that meet student needs.
Action 2. The Department of Education Should Provide Expert Services, Data, and Grants for EdTech Developers
There are many educational technology developers with AI-powered innovations. Even when well-intentioned, some of these tools do not achieve their desired impacts or may be unintentionally unsafe due to a lack of processes and tests for fairness and safety.
Educational Technology developers generally operate under significant constraints when incorporating AI models into their tools and applications. Student data is often highly detailed and deeply personal, potentially containing financial, disability, and educational status information that is currently protected by FERPA, which makes it unavailable for use in AI model training or testing.
Developers need safe, legal, high-quality datasets they can use to test for bias, as well as appropriate bias evaluation tools. There are several promising examples of such applications and new approaches to data security, such as the recently awarded NSF SafeInsights project, which allows analysis without disclosing the underlying data. In addition, philanthropically funded organizations such as the Allen Institute for AI have released LLM evaluation tools that could be adapted and provided to education technology developers for testing. A vetted set of evaluation tools, along with more detailed technical resources and instructions for using them, would encourage developers to incorporate bias evaluations early and often. Currently, there are very few market incentives or existing requirements pushing developers to invest the necessary time or resources in this type of fairness analysis. Thus, the government has a key role to play here.
The Department of Education should also fund a new grant program that tasks grantees with developing a robust and independently validated third-party evaluation system that checks for fairness violations and biases throughout the model development process from pre-processing of data, to the actual AI use, to testing after AI results are created. This approach would support developers in ensuring that the tools they are publishing meet an agreed-upon minimum threshold for safe and fair use and could provide additional justification for the adoption of AI tools by school administrators.
Action 3. The Department of Education Should Develop Better Fairness R&D Tools with Researchers
There is still no consensus on best practices for how to ensure that AI tools are fair. As AI capabilities evolve, the field needs an ongoing vetted set of analyses and approaches that will ensure that any tools being used in an educational context are safe and fair for use with no unintended consequences.
The Department of Education should lead the creation of a working group or task force composed of subject matter experts from education, educational technology, educational measurement, and the larger AI field to identify the state of the art in existing fairness approaches for education technology and assessment applications, with a focus on modernized conceptions of identity. This task force would be an inter-organizational group including representatives from several federal offices, such as the Office of Educational Technology and the Office of the Chief Data Officer, as well as prominent experts from industry and academia. An initial convening could be held alongside leading national conferences that already attract thousands of attendees conducting cutting-edge education research, such as the annual meetings of the American Educational Research Association and the National Council on Measurement in Education.
The working group’s mandate should include creating a set of recommendations for federal funding to advance research on evaluating AI educational tools for fairness and efficacy. This research agenda would likely span multiple agencies, including NIST, the Institute of Education Sciences at the U.S. Department of Education, and the National Science Foundation. Existing models for funding early-stage research and development with applied approaches include the IES “Accelerate, Transform, Scale” programs, which integrate learning sciences theory with efforts to scale that theory through applied education technology, and generative AI research centers that have the infrastructure and mandates to conduct this type of applied research.
Additionally, the working group should recommend the selection of a specialized group of researchers to contribute ongoing research into new empirically based approaches to AI fairness for use by the larger field. This work might involve developing new datasets that deliberately probe for instances of bias and stereotypes, such as the CrowS-Pairs dataset. It might build on current cutting-edge research into the specific variables and elements of LLMs that directly contribute to biased AI scores, such as work being done by the AI company Anthropic. It might compare different foundation LLMs and identify specific areas of bias in their output. It might also take the form of collaborative efforts between organizations, such as the development of the RSM-Tool, which screens for biased scoring, or an improved auditing tool for any portion of the model development pipeline. The field does not yet have a set of universally agreed-upon, actionable tools and approaches that can be used across contexts and applications; this research team would help create them.
Finally, the working group should recommend policies and standards that would incentivize vendors and developers working on AI education tools to adopt fairness evaluations and share their results.
Conclusion
As AI-based tools continue to be used for educational purposes, there is an urgent need for new approaches to evaluating these solutions for fairness that reflect modern conceptions of student belonging and identity. This effort should be led by the Department of Education, through the Office of the Chief Data Officer, given the technical nature of the services and the office’s relationship with sensitive data sources. While the Chief Data Officer should provide direction and leadership, partnering with external organizations through federal grant processes would provide the capacity needed to fulfill the mandate described in this memo. As we move into an age of widespread AI adoption, AI tools for education will be used increasingly in classrooms and in homes. It is therefore imperative that robust fairness approaches be deployed before a new tool is used, both to protect our students and to protect developers and administrators from potential litigation, reputational loss, and other negative outcomes.
When AI is used to grade student work, fairness is evaluated by comparing the scores assigned by the AI to those assigned by human graders across different demographic groups. This is often done using statistical metrics, such as the standardized mean difference (SMD), to detect any additional bias introduced by the AI. A common benchmark is an SMD of 0.15; values above this threshold suggest potential machine bias relative to human scores. However, more guidance is needed on how to address cases where SMD values exceed this threshold.
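For concreteness, a simple version of this check might look like the sketch below, which computes a within-group SMD between machine and human scores and flags values above 0.15. The pooled-standard-deviation denominator and the column names are assumptions, since conventions vary across scoring programs.

```python
# Minimal sketch of a group-level SMD check for machine scoring. Conventions for
# the denominator vary; here the pooled standard deviation of the human and
# machine score distributions is assumed. Column names ("human_score",
# "machine_score", "group") and the 0.15 flag are illustrative.
import numpy as np
import pandas as pd

def smd_by_group(scores: pd.DataFrame, threshold: float = 0.15) -> pd.DataFrame:
    rows = []
    for group, sub in scores.groupby("group"):
        mean_diff = sub["machine_score"].mean() - sub["human_score"].mean()
        pooled_sd = np.sqrt(
            (sub["machine_score"].var(ddof=1) + sub["human_score"].var(ddof=1)) / 2
        )
        smd = mean_diff / pooled_sd if pooled_sd > 0 else 0.0
        rows.append({
            "group": group,
            "n": len(sub),
            "smd": smd,
            # Flag groups where machine scores drift from human scores.
            "exceeds_benchmark": abs(smd) > threshold,
        })
    return pd.DataFrame(rows)
```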
In addition to SMD, other metrics like exact agreement, exact + adjacent agreement, correlation, and Quadratic Weighted Kappa are often used to assess the consistency and alignment between human and AI-generated scores. While these methods provide valuable insights, further research is needed to ensure these metrics are robust, resistant to manipulation, and appropriately tailored to specific use cases, data types, and varying levels of importance.
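The agreement metrics mentioned above can be computed directly; the sketch below uses scipy and scikit-learn, with toy scores on a 0-4 rubric for illustration.

```python
# Minimal sketch of common human-machine agreement metrics for integer rubric
# scores: exact agreement, exact-plus-adjacent agreement, Pearson correlation,
# and quadratic weighted kappa. The toy scores are illustrative.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

def agreement_report(human: np.ndarray, machine: np.ndarray) -> dict:
    return {
        "exact_agreement": float(np.mean(human == machine)),
        "exact_plus_adjacent": float(np.mean(np.abs(human - machine) <= 1)),
        "pearson_r": float(pearsonr(human, machine)[0]),
        "quadratic_weighted_kappa": float(cohen_kappa_score(human, machine, weights="quadratic")),
    }

# Toy usage on a 0-4 rubric.
human = np.array([0, 1, 2, 3, 4, 2, 3, 1, 4, 2])
machine = np.array([0, 1, 2, 4, 4, 2, 2, 1, 3, 2])
print(agreement_report(human, machine))
```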
Existing approaches to post hoc demographic fairness analysis assume two discrete populations that can be compared: for example, students from African-American families versus those who are not, students from English language learner family backgrounds versus those who are not, and other known family characteristics. In practice, however, people do not experience identity as discrete categories. Since at least the 1980s, contemporary sociological theories have emphasized that a person’s identity is contextual, hybrid, and fluid. One current approach that integrates equity concerns and has been applied to AI is intersectional identity theory. This approach has begun to yield promising new methods that bring contemporary conceptions of identity into automated evaluations of AI fairness. Measuring every possible interaction between demographic variables quickly produces subgroups too small to analyze; instead, interactions can be prioritized using theory, design principles, or more advanced statistical techniques (e.g., dimensionality reduction).
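As a rough illustration of what an intersectional extension might look like in practice, the sketch below generalizes the group-level SMD check to combinations of demographic attributes and marks subgroups that are too small to interpret. It is a simplification of the intersectional methods cited above, and the column names and minimum-n rule are assumptions.

```python
# Minimal sketch of extending the group-level SMD check to intersections of
# demographic attributes, with a minimum subgroup size so that tiny cells are
# reported but not over-interpreted. A simplification of the intersectional
# methods discussed above; column names and the min_n rule are illustrative.
import numpy as np
import pandas as pd

def intersectional_smd(scores: pd.DataFrame, attrs: list, min_n: int = 30) -> pd.DataFrame:
    rows = []
    for keys, sub in scores.groupby(attrs):
        keys = keys if isinstance(keys, tuple) else (keys,)
        mean_diff = sub["machine_score"].mean() - sub["human_score"].mean()
        pooled_sd = np.sqrt(
            (sub["machine_score"].var(ddof=1) + sub["human_score"].var(ddof=1)) / 2
        )
        smd = mean_diff / pooled_sd if pooled_sd > 0 else float("nan")
        row = dict(zip(attrs, keys))
        row.update({"n": len(sub), "smd": smd, "large_enough": len(sub) >= min_n})
        rows.append(row)
    return pd.DataFrame(rows)

# Example call (columns are assumptions): intersections of English learner
# status and disability status rather than one attribute at a time.
# intersectional_smd(scores, attrs=["english_learner", "has_iep"])
```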