
AI Implementation is Essential Education Infrastructure

10.22.25 | 9 min read | Text by Dominique Dallas

State education agencies (SEAs) are poised to deploy federal funding for artificial intelligence (AI) tools in K–12 schools. Yet the nation risks repeating familiar implementation failures that have limited educational technology for more than a decade. The July 2025 Dear Colleague Letter from the U.S. Department of Education (ED) establishes a clear foundation for responsible AI use, and the next step is ensuring these investments translate into measurable learning gains. The challenge is not defining innovation; it is implementing it effectively. To strengthen federal–state alignment, upcoming AI initiatives should include three practical measures: readiness assessments before fund distribution, outcomes-based contracting tied to student progress, and tiered implementation support reflecting district capacity. Embedding these standards within federal guidance, while allowing states bounded flexibility to adapt, will protect taxpayer investments, support educator success, and ensure AI tools deliver meaningful, scalable impact for all students.

Challenge and Opportunity

For more than a decade, education technology investments have failed to deliver meaningful results—not because of technological limitations, but because of poor implementation. Despite billions of dollars in federal and local spending on devices, software, and networks, student outcomes have shown only minimal improvement. In 2020 alone, K–12 districts spent over $35 billion on hardware, software, curriculum resources, and connectivity—a 25 percent increase from 2019, driven largely by pandemic-related remote learning needs. While these emergency investments were critical to maintaining access, they also set the stage for continued growth in educational technology spending in subsequent years. 

Districts that invest in professional development, technical assistance, and thoughtful integration planning consistently see stronger results, while those that approach technology as a one-time purchase do not. As the University of Washington notes, “strategic implementation can often be the difference between programs that fail and programs that create sustainable change.” Yet despite billions spent on educational technology over the past decade, student outcomes have remained largely unchanged—a reflection of systems investing in tools without building the capacity to understand their value, integrate them effectively, and use them to enhance learning. The result is telling: an estimated 65 percent of education software licenses go unused, and as Sarah Johnson pointed out in an EdWeek article, “edtech products are used by 5% of students at the dosage required to get an impact.”

Evaluation practices compound the problem. Too often, federal agencies measure adoption rates instead of student learning, leaving educators confused and taxpayers with little evidence of impact. As the CEO of the EdTech Evidence Exchange put it, poorly implemented programs “waste teacher time and energy and rob students of learning opportunities.” By tracking usage without outcomes, we perpetuate cycles of ineffective adoption, where the same mistakes resurface with each new wave of innovation.

Implementation Capacity is Foundational

A clear solution entails making implementation capacity the foundation of federal AI education funding initiatives. Other countries show the power of this approach. Singapore, Estonia, and Finland all require systematic teacher preparation, infrastructure equity, and outcome tracking before deploying new technologies, recognizing, as a Swedish edtech implementation study found, that access is necessary but not sufficient to achieve sustained use. These nations treat implementation preparation as essential infrastructure, not an optional add-on, and as a result, they achieve far better outcomes than market-driven, fragmented adoption models.

The United States can do the same. With only half of states currently offering AI literacy guidance, federal leadership can set guardrails while leaving states free to tailor solutions locally. Implementation-first policies would allow federal agencies to automate much of program evaluation by linking implementation data with existing student outcome measures, reducing administrative burden and ensuring taxpayer investments translate into sustained learning improvements.
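As a purely illustrative sketch of what that linkage could look like, the Python fragment below joins a hypothetical district-level implementation-indicator extract to an existing student outcome measure and flags districts where tool usage is high but learning growth is flat. Every file name, column name, and threshold is an assumption for illustration, not an existing federal or state data schema.

```python
# Illustrative sketch only: all file names, fields, and thresholds are
# hypothetical. It assumes districts report implementation indicators
# (e.g., training coverage, license usage) that can be joined to existing
# student outcome measures by a shared district identifier.
import pandas as pd

# Hypothetical district-level extracts.
implementation = pd.read_csv("implementation_indicators.csv")  # district_id, pct_teachers_trained, license_usage_rate
outcomes = pd.read_csv("student_outcomes.csv")                 # district_id, assessment_growth

merged = implementation.merge(outcomes, on="district_id", how="inner")

# Flag districts where tools are widely adopted but learning gains are flat --
# the pattern that usage-only metrics would miss.
flagged = merged[
    (merged["license_usage_rate"] >= 0.60) & (merged["assessment_growth"] <= 0.0)
]

print(flagged[["district_id", "pct_teachers_trained", "license_usage_rate", "assessment_growth"]])
```

A report like this could be generated automatically from data states already collect, which is the point of pairing implementation indicators with outcome measures rather than tracking adoption alone.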

The benefits would be transformational. Implementation is not a secondary concern; it is the primary determinant of whether AI in education strengthens learning or repeats the costly failures of past ed-tech investments. Embedding implementation capacity reviews before large-scale rollout—focused on educator preparation, infrastructure adequacy, and support systems—would help districts identify strengths and gaps early. Paired with outcomes-based vendor contracts and tiered implementation support that reflects district capacity, this approach would protect taxpayer dollars while positioning the United States as a global leader in responsible AI integration.

Plan of Action

AI education funding must become outcome-focused as well as tool-focused, reducing repeated implementation failures and ensuring that states and districts can successfully integrate AI tools in ways that strengthen teaching and learning. Federal guidance has made progress in identifying priority use cases for AI in education. With stronger alignment to state and local implementation capacity, investments can break cycles of underutilized tools and wasted resources.

A hybrid approach is needed: federal agencies set clear expectations and provide resources for implementation, while states adapt and execute strategies tailored to local contexts. This model allows for consistency and accountability at the national level, while respecting state leadership.

Recommendation 1. Establish AI Education Implementation Standards Through Federal–State Partnership

To safeguard public investments and accelerate effective adoption, the Department of Education, working in partnership with state education agencies, should establish clear implementation standards that ensure readiness, capacity, and measurable outcomes. 

Recommendation 2. Develop a National AI Education Implementation Infrastructure

The U.S. Department of Education, in coordination with state agencies, should encourage a national infrastructure that helps and empowers states to build capacity, share promising practices, and align with national economic priorities.

Recommendation 3. Adopt Outcomes-Based Contracting Standards for AI Education Procurement

The U.S. Department of Education should establish outcomes-based contracting (OBC) as a preferred procurement model for federally supported AI education initiatives. This approach ties vendor payment directly to demonstrated student success, with at least 40% of contract value contingent on achieving agreed-upon outcomes, ensuring federal investments deliver measurable results rather than unused tools.
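To make the payment structure concrete, the sketch below works through the arithmetic of a hypothetical $1 million contract under the 40% contingency floor. The contract size, outcome measures, weights, and results are illustrative assumptions, not terms from any existing procurement.

```python
# Illustrative sketch only: the contract value, outcome names, weights, and
# results below are hypothetical, not a prescribed federal payment schedule.
# It shows the basic arithmetic of an outcomes-based contract in which at
# least 40% of total value is contingent on agreed-upon outcomes.
TOTAL_CONTRACT_VALUE = 1_000_000   # example contract size in dollars
CONTINGENT_SHARE = 0.40            # minimum share tied to outcomes per the recommendation

base_payment = TOTAL_CONTRACT_VALUE * (1 - CONTINGENT_SHARE)

# Hypothetical agreed-upon outcomes: (weight within the contingent share, achieved?).
outcomes = {
    "reading_growth_target_met": (0.5, True),
    "usage_at_effective_dosage": (0.3, True),
    "teacher_proficiency_target_met": (0.2, False),
}

earned_share = sum(weight for weight, achieved in outcomes.values() if achieved)
outcome_payment = TOTAL_CONTRACT_VALUE * CONTINGENT_SHARE * earned_share

print(f"Base payment:    ${base_payment:,.0f}")
print(f"Outcome payment: ${outcome_payment:,.0f}")
print(f"Total paid:      ${base_payment + outcome_payment:,.0f}")
```

In this hypothetical, the vendor receives the $600,000 base plus $320,000 for the two outcomes achieved; the remaining $80,000 is withheld, which is the mechanism that shifts risk for unused or ineffective tools from districts to vendors.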

Recommendation 4. Pilot Before Scaling

To ensure responsible, scalable, and effective integration of AI in education, ED and SEAs should prioritize pilot testing before statewide adoption while building enabling conditions for long-term success.

Recommendation 5. Build a National AI Education Research & Development Network

To promote evidence-based practice, federal and state agencies should co-develop a coordinated research and development infrastructure that connects implementation data, links policy learning to practice, and supports global collaboration.

Conclusion

The Department’s guidance on AI in education marks a pivotal step toward modernizing teaching and learning nationwide. To realize that promise, funding must support not only the acquisition of tools but also the strategies that ensure their effective implementation. Too often, technologies are purchased only to sit on the shelf while educators lack the support to integrate them meaningfully. International evidence shows that countries investing in teacher preparation and infrastructure before technology deployment achieve better outcomes and sustain them.

Early research also suggests that investments in professional development, infrastructure, and systems integration substantially increase the long-term impact of educational technology. Prioritizing these supports reduces waste and ensures federal dollars deliver measurable learning gains rather than unused tools. The choice before us is clear: continue the costly cycle of underused technologies or build the nation’s first sustainable model for AI in education—one that makes every dollar count, empowers educators, and delivers transformational improvements in student outcomes.

Frequently Asked Questions

Won’t implementation guidelines slow innovation and create more bureaucracy?

Clear implementation expectations don’t slow innovation—they make it sustainable. When systems know what effective implementation looks like, they can scale faster, reduce trial-and-error costs, and focus resources on what works to ultimately improve student outcomes.

Will these guidelines disadvantage high-need districts that lack infrastructure?

Quite the opposite. Implementation support is designed to build capacity where it’s needed most. Embedding training, planning, and technical assistance ensures every district, regardless of size or resources, can participate in innovation on an equal footing.

How do we ensure educators and school leaders actually use AI tools effectively?

AI education begins with people, not products. Implementation guidelines should help educators strengthen their existing skills for incorporating AI tools into instruction, access relevant professional learning, and receive leadership support, so that AI enhances teaching and learning.

How will implementation quality be measured across different states and districts?

Implementation quality is multi-dimensional and may look different depending on local context. Common indicators could include: educator readiness and training, technical infrastructure, use of professional learning networks, integration of AI tools into instruction, and adherence to data governance protocols. While these metrics provide guidance, they are not exhaustive, and ED and SEAs will iteratively refine measures as research and best practices evolve. Transparent reporting on these indicators will help identify effective approaches, support continuous improvement, and build public trust.

Isn’t comprehensive implementation support too expensive?

Not when you look at the return. Billions are spent on tools that go underused or abandoned within a year. Investing in implementation is how we protect those investments and get measurable results for students.

What if states or districts resist these guidelines?

The goal isn’t to add red tape—it’s to create alignment. States can tailor standards to local priorities while still ensuring transparency and accountability. Early adopters can model success, helping others learn and adapt.