A National Training Program for AI-Ready Students

In crafting future legislation on artificial intelligence (AI), Congress should introduce a Digital Frontier and AI Readiness Act of 2025 to create educator training sites in emerging technology to ensure our students can graduate AI-ready. Computing, data, and AI basics will be critical for every student, yet our education system does not have the capacity to impart them. A national mobilization for the education workforce would ensure U.S. leadership in the global AI talent race, address mounting challenges in teacher shortages and retention, and fill critical workforce preparedness gaps not addressed by the CHIPS and Science Act. The legislation would include three components: (1) a prestigious national fellowship program for classroom educators with extended summer pay; (2) an evidence-based national network of training sites for peer-based learning; and (3) a modernization competition for teacher college programs to sustain long-term improvement in our education workforce. 

Investing in effective educators has a significant impact: a single high-quality teacher can measurably boost lifetime incomes, degree attainment, and other life-satisfaction measures for many classrooms of students. These programs would be administered through the National Science Foundation (NSF), using simplified application procedures, expanded eligibility, and centralized evaluation approaches.

Challenge and Opportunity 

If AI is positioned to dramatically transform our economy, from the production line to the c-suite, then everyone must be prepared to leverage its power. AI alone may add an estimated $2.6 trillion to $4.4 trillion annually to the global economy and may automate 60% to 70% of task time within existing jobs rather than replacing those jobs outright. Earlier studies estimated that emerging technologies will increase the technology intensity of existing careers across all sectors. A report by the Burning Glass Institute found that 22% of all current open jobs in the U.S. economy include at least one “data science skill,” with the highest share of data-skill job postings in utilities, manufacturing, and agriculture. Not every worker will build the next AI algorithm or become a data scientist, but nearly every American will need to leverage data and AI to maintain a competitive edge in their sector or risk losing entire industries to other countries that do the same. This unprecedented economic growth will only be captured by countries whose workers are prepared in data and AI basics.

U.S. educators are largely unsupported in teaching students about AI and other emerging technologies. An analysis of math educators nationally found that teachers are least confident teaching data and statistics, as well as technology integration, compared to other content categories. Computer science was the least popular credential for K-12 educators to pursue as recently as the 2018–2019 school year. These gaps show up in student opportunities and outcomes. As of 2023, only 5.8% of our high school students are enrolled in foundational computer science courses. Introductory basics in data or AI are typically not taught even where they appear in state standards. Nationally, students’ foundational data literacy has declined steadily by one to three grade levels over the past decade, varying disproportionately by race and geography, with losses only accelerated by the pandemic.

Moreover, our teacher workforce capacity is declining. Teacher entry, preparation, and retention rates remain at historical lows across the country and have not meaningfully recovered since the pandemic. Over the past decade, the number of individuals completing a teacher preparation program has fallen 25%, with only modest recovery since the pandemic; shortages stand at 55,000 or more unfilled positions this year, and long-term forecasts reach at least 100,000 unfilled positions annually. Factors including low pay, low prestige, and difficult environments create a perception challenge for the profession: fewer than 1 in 5 Americans would encourage a young person to become a teacher. These challenges compound over time, as more graduate schools of education close or cut their programming. In 2022, Harvard discontinued its Undergraduate Teacher Program completely, citing low interest and enrollment numbers, one closure among many.

What if the concurrent challenges of digital upskilling and teacher shortages could help solve one another? The teaching profession is facing a perception problem just as AI has made education more important than ever before. In the global information age, U.S. worker skills and talent are our greatest weapons. The expectations of teachers and teaching must change. Major U.S. economic peers, including Canada, Germany, China, India, New Zealand, and the United Kingdom, have all announced similar national efforts to make robust investments in teacher upskilling in high-value technology areas. In our new AI era, U.S. policymakers now have the opportunity to develop the infrastructure, 21st-century training, and prestigious social recognition to properly value education as an economic and national security priority. A recent report from Goldman Sachs identified “a narrow window of opportunity – what we call the inter-AI years,” in which policymaker “decisions made today will determine what is possible in the future. A generative world order will emerge.” Inaction today risks the United States falling quickly behind tomorrow.

Figure. Teacher preparation program enrollment by program and year, 2010–2018 (via CAP, 2019)

Plan of Action 

A Digital Frontier Teaching Corps (DFT Corps) would mobilize a new generation of teachers who are fluent in, adaptive to, and resilient to fast-changing technology, and equipped to help our students become the same. The DFT Corps would re-norm the job of teaching as a full-year profession, making the summer months an essential part of adaptive 21st-century teaching through regular training intensives. Currently, most educators work and are paid for only nine months of the year.

Upon acceptance by application, selected teachers would enter a three-year fellowship program to participate in training intensives facilitated at local institutes of higher education, nonprofits, educational service agencies, or industry partners. Scholarships facilitated through the National Science Foundation would extend educator pay and hours from nine months to a full annualized salary. DFT Corps members would also be eligible for substantial federal loan forgiveness in return for their additional time investment. 

After three rotations, members would become eligible to serve as DFT Corps site leaders, responsible for program design at new or existing training sites. These roles would bring greater compensation, prestige, and retention through leadership pathways, concurrently addressing systemic talent challenges in education at their root and creating an adaptive mechanism for faster upskilling. Additional program components, including licensure incentives and teacher college innovation grants, would further sustain long-term impacts. By year three of the program, 50,000 educators would be on the path to preparing our students for the future of work, 500 inaugural Corps members would become state or local site leaders to expand the mobilization, and the perception of teaching would further shift from childcare to a critical and respected national service.

To accomplish this vision, Congress should authorize the National Science Foundation to create: 

1. A national Digital Frontier Teaching Corps, a three-year “talent surge” fellowship opportunity covering summertime pay for high-potential educators to conduct intensive study in AI, data science, and computing foundations. The DFT Corps would be a prestigious and materially meaningful program to both impart digital technical skills and transform the social perception of the teaching profession. The DFT Corps would include:

2. DFT Corps training sites, a national network of university-based, locally led professional development sites in collaboration with local education agencies, based on the evidence-based model of the National Writing Project. Competitive five-year grants would support the creation of Corps sites, one per state, with the opportunity for renewal. DFT Corps training sites would:

3. Teacher College Innovation Grants, a competitive NSF grant program for modernizing teacher preparation programs and teacher licensure models. Teacher College Innovation Grants would provide research funding and capacity to evaluate DFT Corps training sites and ensure lessons learned are quickly integrated back into teacher preparation programs. Competitive priorities would be made for:

Year | Number of teachers in training via DFT | Number of Corps sites | Number of teacher site leaders
1    | 500    | 5 states  |
2    | 1,000  | 10 states |
3    | 2,000  | 20 states |
4    | 3,500  | 35 states | 35
5    | 5,000  | 50 states | 50
Sum  | 12,000 | 50        | 50

The DFT Corps program is intended to be catalytic. Should the program find success in early scaling, state and local funding could support further adoption of the model over time, so that teaching becomes an annualized profession across subject areas and grade levels.

Conclusion 

In the new era of AI, education is a national security issue. Advancing our population’s ability to effectively deploy AI and other emerging technology will uniquely determine U.S. leadership and economic competitiveness in the coming years and decades. Education investments made by states within the next few years will all but determine local long-term economic trajectories. 

In the 1950s and 1960s, education and competitiveness were one and the same. One year after the Soviets launched Sputnik, Congress took action and passed the National Defense Education Act, a $1 billion spending package to advance teaching and learning in science, mathematics, and foreign languages. At one time, we respected teachers as critical to the national mission, leading the charge to prepare our next generation to lead, and we took swift action to support their mission. We must take the same bold action now.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
Why is federal legislation needed to enact this program?

The scale of this national challenge requires meaningful appropriations to raise teacher pay, ensure high-quality training opportunities with sufficient expertise, and sustain a long-term strategy to address deeply-rooted sector challenges. A short-term, one-shot approach will simply waste money and generate minimal impact.


Moreover, the program’s creation necessitates a significant simplification of National Science Foundation application processes to reduce grant application length, burden, and paperwork. It also creates a targeted exception for the NSF to support broader nonresearch activities that are otherwise sector-critical for national scientific and educational endeavors. If enacted, this legislation could help reduce overhead for program administration and redirect more resources toward supporting quality state and local implementation rather than program compliance.

How much will this program cost?

Once scaled to all 50 states, the recurring annual costs of the proposed legislation would be $250 million:



  • $150 million for DFT Corps member scholarships (5,000 teachers per year)

  • $50 million for DFT Corps training sites (one site per state at $1 million each)

  • $50 million for Teacher College Innovation Grants (one site per state at $1 million each)

In the first five years, the cost would slowly increase to the total amount, starting at a base $25 million for five states ($15 million for 500 scholarships, $5 million for training sites, $5 million for Innovation grants).
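A minimal worked check of the scaling arithmetic, assuming the per-unit figures implied by the totals above (roughly $30,000 per scholarship supplement and $1 million per training site or innovation grant):

\begin{align*}
\text{Year 1 (5 states):}\quad & 500 \times \$30\text{k} + 5 \times \$1\text{M} + 5 \times \$1\text{M} = \$25\text{M} \\
\text{At scale (50 states):}\quad & 5{,}000 \times \$30\text{k} + 50 \times \$1\text{M} + 50 \times \$1\text{M} = \$250\text{M}
\end{align*}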


What return on investment should the federal government expect from this program?

Creating an AI-ready workforce is a critical national priority to maintain U.S. economic competitiveness, mitigate the risk of ceding AI primacy, and ensure the next generation can successfully navigate the complex technology landscape they will graduate into. McKinsey projects that successful integration of AI across more than 63 business use cases would add between $13.6 trillion and $22.1 trillion to the global economy. A recent National Institutes of Health analysis suggests that, for any country to successfully specialize in AI, there must be general preexisting technological capabilities and a strong scientific knowledge base. AI readiness must be a population-wide goal. Given that 60% of Americans do not complete a bachelor’s degree, AI readiness must begin early, in K-12 education and in community colleges.


Estimated return on investment: $250 million represents less than 1.5% of the last annual appropriation (2020) for the Every Student Succeeds Act, the nation’s primary education funding mechanism. If this legislation helps the United States capture even five percent more of the economic growth forecast from effectively harnessing AI, it would conservatively add $171 billion to the U.S. economy each year.
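One way to reproduce a figure of roughly this size, purely as an illustration and assuming the base is the midpoint of the $2.6–$4.4 trillion annual estimate cited earlier (the source of the $171 billion figure may use a slightly different base):

\begin{equation*}
0.05 \times \frac{\$2.6\text{T} + \$4.4\text{T}}{2} = 0.05 \times \$3.5\text{T} = \$175\text{B}
\end{equation*}

which is on the same order as the $171 billion cited above.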

Why is three years the right amount of time for the DFT Corps?

The majority of educator training programs are too short, only given during the busy school year, and do not have the opportunity to improve over multiple years within a given school. Early iterations of the National Writing Project, on which this program is based, determined that “although schools may see results from C3WP in a single school year, a longer-term investment may produce a greater impact.” Even if sustained during a school year, researchers have found that “absent a surrounding context that is highly supportive of teacher learning and change, 1 year of PD cannot sufficiently alter instructional practices enough to impact student outcomes.” While earlier evaluation studies saw no impact on student achievement, the National Writing Project is now one of the most lauded and effective educator training models trialed in the United States, made possible by a long-term and consistent investment in professional learning.


A three-year program will allow educators to advance from novice (year 1) to intermediate (year 2) to mentor or facilitator (year 3). By year 4, graduating educators would be prepared to serve as site leaders, dramatically increasing the available talent pool for sustaining and growing DFT Corps sites nationally. Additional time will also enable a local site to improve its own programming and align tightly with multi-year school and district planning.

How is the DFT Corps different from other federal education programs?

The DFT Corps is an accelerated investment in the creation of locally led professional development sites, uniquely designed with (1) direct support for current classroom educators to participate; (2) a replicated network model for summer-based, in-service training; and (3) innovation grants to research aligned training improvements and best practices. No current federal program does all three at once for current classroom educators.


Existing teacher training grant programs, such as Teacher Quality Partnership or Supporting Effective Educator Development grants, carry strong evidence requirements or incompatible competitive preferences. Given that AI is new and little research exists on effective teaching practices, these requirements significantly limit proposals on emerging topics. Grants also vary widely by institution.


Existing educator scholarship programs, such as the Robert Noyce Teacher Scholarship Program, focus mostly on recruiting new teachers and provide only limited support for existing teachers pursuing or already holding a master’s degree; 40% of U.S. teachers do not have a master’s degree. A targeted national focus on AI readiness would also require several higher-education institutions across states to organically propose training programs to the Noyce program at the same time, with the same model.

How would the proposed legislation simplify application burden and enable the NSF to administer the program?

AI technology development is moving faster than the education sector can respond. In order to accelerate site creation, reduce application burden, and modernize grant distribution, the DFT Corps program would direct the NSF to:



  • Allow nonresearch activities to be funded under the program, including educator salary support

  • Remove program evaluation requirements from individual grantees and centralize them, reallocating evaluation activities to external researchers working across sites

  • Centrally manage disbursement of DFT salary supplements, potentially via tax credits

  • Modernize data management plan requirements for present-day technology

  • Limit total grant application length to 10 pages or fewer. In other fields, NSF grant applications take investigators over 171 hours to prepare, despite little relation between time invested and actual funding outcomes in some cases. Another study found that 42% of investigators’ time is spent on administrative and reporting tasks to support the execution of an NSF grant.

Is there an executive action version of this proposal?

Yes, with appropriations. Under new 2023 guidance, the Robert Noyce Teacher Scholarship Program has expanded salary supplement options and enabled two-summer support. An executive action version of this proposal would expand the Robert Noyce Teacher Scholarship Program by (1) increasing support for Track 3 with lower degree requirements (i.e., a bachelor’s instead of a master’s degree); (2) stipulating a competitive priority for AI readiness and emerging technology education (defined as computer science, computational thinking, data science, and artificial intelligence literacy across the curriculum); and (3) directing the White House Office of Science & Technology Policy to launch a multi-agency, public-facing communications and recruitment effort for the DFT Corps program, in collaboration with the 50 largest teacher colleges and other participating Noyce program institutions.

What evidence exists for the proposed training model?

The proposed DFT Corps mirrors a long-running evidence-based model, the National Writing Project (NWP), which has trained over 95,000 teachers in high-quality writing instruction across 2,000 school districts since 1974. Three independent evaluation studies over multiple years across 20 states found “positive and statistically significant effects on student achievement” across all measured components of writing. The evidence base supporting NWP is “unusually robust” for education research, employing randomized controlled trials and meeting ESSA Tier 1 evidence criteria. A recent replication study in 2023 focusing on rural schools, a similar priority for the proposed DFT Corps program, found positive results “on all attributes measured.”

With fast-changing technology, what will guarantee the quality and responsiveness of professional training?
DFT Corps sites would directly involve researchers in computer science, data science, artificial intelligence, or other technology-focused departments, in collaboration with schools of education, which otherwise rarely collaborate. DFT Corps programs would also include eligibility to fund industry advisors to aid design and updates to training curriculum. External evaluations from cross-state research teams would support content reviews and reduce administrative burden for otherwise duplicated in-house evaluation work.
What mechanisms will ensure retention for DFT Corps members beyond the three-year training period?

Similar to the Robert Noyce Scholarship program, the DFT Corps program would waive tuition costs and provide scholarship funds in exchange for a multi-year teaching commitment. Each year’s participation in the program would extend an educator’s teaching commitment by two additional years. A 2013 evaluation of the Noyce program found this model worked, with participants remaining in teaching longer than new teachers graduating from the same institutions.

How will the DFT Corps address root causes of talent shortages in education?

Recodes a nine-month profession to annual pay, and annual expectations: A primary change advanced by the DFT Corps is converting the typical teacher job from a nine-month term to an annual salary, similar to lawyers, doctors, and other high-prestige professions. In a recent RAND report on why teachers wanted to leave the profession, salary was the #2 reason, hours worked outside the school day was #3, and total hours worked was #4. Teachers are promised a flexible, part-year job on paper, when the reality is very different. Nine-month pay challenges are so acute that several U.S. banks host articles on “surviving the summer paycheck gap,” many teachers take second (non-academic) jobs, and the popular #NoSummersOff hashtag gained a significant following amongst educators pre-pandemic. Concurrently, the pace of technology and curriculum change demands more professional learning time than schools and districts typically provide. Summer professional learning is often optional and highly variable across states. Our expectations are far too low for one of our most critical knowledge jobs. DFT Corps members would be paid during the summer for intensive study to update curriculum, plan content, and incorporate new education research on how students learn. Full-time summer work would remove pressure for administrators to “squeeze in” short, one-day professional development sessions during the school year, which study after study has demonstrated are a waste of time and money. Many, from current classroom educators to the former U.S. Secretary of Education, continue to question these existing PD approaches.


Creates a leadership ladder: Leadership opportunities for classroom educators are few and far between. Teaching is often described as a “flat” profession, and nearly half of educators leaving the field point to a perceived lack of leadership or decision-making opportunities as contributing factors. Concurrently, new teachers who have the opportunity to collaborate with teacher-leaders within their own school generate stronger academic gains for their students. The DFT Corps would create state-wide leadership opportunities at Corps summer sites that do not disrupt school-year teaching, allowing educators to remain in the classroom during the other nine months of the year but still access visible leadership and mentor roles during the summer.


Leverages peer-based learning: Beyond the opportunity to positively impact students and student learning, 63% of educators report that strong relationships with other teachers are a top reason for staying in the classroom. The DFT Corps would leverage peer-based professional development over multiple years, reallocating the summer months to joint study and creating stronger educator networks statewide. One of the DFT Corps’ precedent peer-based models, the National Writing Project, “has a legacy as being the best professional development model for K-12 teachers” precisely due to a targeted focus on peer exchange. In post-training interviews, researchers found that educators “immediately changed several of their teaching practices and felt a renewed sense of enthusiasm towards the teaching of writing after participating in the NWP… a renewed sense of authority that quickly transferred to agency, these teachers possessed the self-efficacy to share what they knew and had learned with other teachers, administrators, district leaders, fellow graduate students, and most importantly, the students who would enter their classrooms in the fall.”


Builds needed prestige for the profession: The DFT Corps program advances a reinvigorated national prioritization of the education field. In the information economy, teaching is one of our most critical professions and a greater determinant of gross domestic product than any individual semiconductor or algorithm. Under a DFT Corps communications rollout, teaching would be separated from any prior stereotypes of “caretakers,” positioned instead as essential to the economic, technology, and security fabric that advances societal progress. Research consistently suggests that the low prestige of the profession pushes high achievers away from teaching, is closely correlated with both falling preparation and retention, and may even directly affect student achievement. In China, where educators have long enjoyed high prestige, researchers found that an expansion of the country’s Free Teacher Education program helped to increase application competitiveness, extend retention rates, and enhance self-identity for program participants in a pre-publication evaluation study. In the 2018 “Global Teacher Status Index,” China was the only country to score 100 while the United States scored under 40 points. The United States is falling behind in our education culture, and we have little time to make up lost ground.

How does this proposal relate to the Cantwell-Moran NSF AI Education Act of 2024?
This proposal builds upon and suggests specifications for multiple sections of the NSF AI Education Act, introduced by Senators Cantwell and Moran, with additional detail and focus on the teacher workforce. Specifically, this proposal provides suggested priority areas, research goals, and expanded eligibility for K-12 education grants stipulated in Section 10 (“Award Program for Research on AI in Education”); stipulates an alternative mechanism, implementation plan, and authorization amount for Section 11 (“National Science Foundation National STEM Teacher Corps”), with critical directives to NSF to enable program administration and reduce application burden; and modifies Section 8 (“NSF Outreach Campaign”) to include public mobilization of the educator workforce.

The long-term vision for this proposal also extends beyond the NSF AI Education Act and suggests a new mechanism for federal education support in the Every Student Succeeds Act.

National Security AI Entrepreneur Visa: Creating a New Pathway for Elite Dual-Use Technology Founders to Build in America

NVIDIA, Anthropic, OpenAI, HuggingFace, and scores of other American startups helping cement America’s leadership in the race for artificial intelligence (AI) dominance all have one thing in common: they have at least one immigrant co-founder. In fact, in 2023, the National Foundation for American Policy released a policy analysis on the role of immigrants in the top American AI companies. According to their research, 65% of the companies appearing on the Forbes AI 50 list were founded or co-founded by at least one immigrant. Immigrant entrepreneurs are critical to America’s economic success, and as the private sector takes an increasing role in developing critical dual-use technologies like AI, they will be critical to America’s defense. 

According to a Brookings Institution report, “China sees talent as central to its technological advancement; President Xi Jinping has repeatedly called talent ‘the first resource’ in China’s push for ‘independent innovation.’” It’s easy to understand why the CCP sees talent as critical in its efforts to dominate key dual-use technologies relevant to national and economic security – in today’s knowledge economy, those who can innovate faster win. A company like SpaceX, which almost single-handedly reinvigorated America’s spacefaring economy, would likely not exist without Elon Musk. The list of companies and dual-use technologies critical to American national and economic security that would likely not have been created without the right founders behind them is a long one. America needs these entrepreneurs more than ever as competition with China for global leadership in key fields like AI heats up.

Given increased competition for talent – from allies like the United Kingdom to competitors and adversaries like China – in critical technology areas like AI, Congress must act to support high-skilled entrepreneurs by creating a National Security Startup Visa specifically targeted at founders of AI firms whose technology is inherently dual-use and critical for America’s economic leadership and national security. To maximize the potential economic benefits of such a visa for all Americans, it can be narrowly tailored, focusing only on entrepreneurs who (1) have raised significant capital from accredited American investors and venture capitalists (VCs), (2) are willing to physically reside and start their business in an Opportunity Zone, and (3) will hire at least five Americans within the first year of operation. Immigration may be a complex issue, but there is no doubt that immigrant founders are the not-so-secret ingredient that has helped fuel America’s rise as a tech superpower. Developing a narrowly scoped visa targeted at a critical technology segment means that America can ensure its continued dominance in AI, a technology that the CEO of Google has said may be as profound as fire or electricity.

Challenge and Opportunity

While the United States has long been the preferred destination for immigrant entrepreneurs, America has never had more competition for global talent. Countries like Canada, Germany, and Estonia have created visas to attract entrepreneurs, and they appear to be working. After the introduction of a Canadian startup visa in 2013, the program increased the likelihood of previously U.S.-based immigrants creating a startup in Canada by 69%. These are immigrants who were already in America to study or work, and it should have been an obvious choice for them to stay and build their companies in the United States. This means the United States is losing out on hundreds of new companies and likely thousands of high-paying jobs that would come with them. The fact that Canada, thanks to a streamlined immigration process for founders, was able to attract so many founders who were already in the United States should serve as a serious warning about how the competition for talent is heating up.

Figure. Canada demonstrates how a start-up visa enhances immigrant entrepreneurship (via the National Bureau of Economic Research)

Historically, the United States—and Silicon Valley in particular—was the undisputed leader for venture capital fundraising and the place to start a potential unicorn (a company valued at over $1 billion). However, America’s dominance has shrunk, and VC dollars along with unicorns are increasingly found across the world in tech hotspots from China to India to the United Kingdom, showing it is increasingly easy for entrepreneurs to build a successful startup elsewhere. This is critical, because when America was the only place to build a leading company, entrepreneurs had little choice but to wade through the labyrinth that is the American immigration system. Now, top talent have many choices, and the United States must compete to become not just the premier destination to build a company and raise capital but one that is accessible to startup founders who can’t afford high-priced immigration lawyers or to wait for years until their visa is granted.

While America’s largest geopolitical competitor may suffer from extreme difficulties in attracting foreign entrepreneurs to its shores, China has a massive population advantage. This can be seen directly in the STEM space and AI in particular. According to a CSIS report, “By 2025, Chinese universities are projected to produce more than 77,000 STEM PhD graduates per year, more than double the 2010 level of about 34,000 STEM PhD graduates. In comparison, the United States is projected to graduate only approximately 40,000 STEM PhD students in 2025, a figure that includes over 16,000 international students.” 

China has already outpaced the United States in the number of AI-related research articles published, and its domestic tech champions are global leaders in AI-enabled technology like facial recognition. Given this strong domestic showing from Chinese researchers and entrepreneurs, with local AI startups raising billions of dollars in 2023 despite a broader slowdown in Chinese VC funding, China presents a strategic threat to America’s leadership in the AI space. America is on the cusp of losing its leadership in AI to China, but this policy creates a clear opportunity to expeditiously regain lost ground by bringing in AI entrepreneurs who have already raised venture funding and can immediately hire American workers.

However daunting the challenge China presents, America has long had a superpower: attracting the best and brightest to our shores to build innovative global businesses. And while many leading American AI startups have an immigrant co-founder, for every entrepreneur coming to the United States today, many more are turned away or dissuaded from applying. Take Erdal Arikan, a Turkish MIT and CalTech graduate who had difficulty staying in America to continue his research and returned to Turkey. According to Graham Allison and Eric Schmidt, “It turned out that Arikan’s insight was the breakthrough needed to leap from 4G telecommunications networks to much faster 5G mobile internet services. Four years later, China’s national telecommunications champion, Huawei, was using Arikan’s discovery to invent some of the first 5G technologies. Today, Huawei holds over two-thirds of the patents related to Arikan’s solution… Had the United States been able to retain Arikan—simply by allowing him to stay in the country instead of making his visa contingent on immediately finding a sponsor for his work—this history might well have been different.”

By creating a narrowly tailored AI National Security Entrepreneur Visa, the United States has a unique opportunity to recruit founders in a field deemed “critical and emerging” by the White House and help the nation maintain both its economic and national security competitiveness. And while many are concerned about the potential economic dislocation from AI, one way to mitigate such a risk is by helping entrepreneurship flourish in the United States, especially in underserved communities like those found in Opportunity Zones across every state. With hundreds or thousands of new businesses creating high-paid jobs in rural and underserved communities, Americans outside existing tech hubs of New York City and San Francisco could finally see real economic benefits of the tech boom. 

The economic potential for such a visa is tremendous. According to a 2024 report from the Center for Growth and Opportunity at Utah State University, a startup visa could have a significant impact: “Data collected at the state level suggests that when the population’s share of immigrant college graduates increases by 1 percent, patents per capita increase by 9 to 18 percent” with the report going on to say that (depending on the number of entrepreneurs brought in) “Census and industrial data predict an increase of 500,000 to 1.6 million new jobs from young start-up visa companies in the United States after 10 years of operation.”

The time for an AI startup visa is now. It will help create American jobs and revitalize local economies, cement American global leadership, and ensure that we beat China in the AI race.

Plan of Action

Create a 10-year pilot AI Entrepreneur Visa program for a select group of countries to demonstrate the potential efficacy of the visa.

The AI National Security Entrepreneur Visa will be narrowly tailored to founders from friendly nations who have already raised significant capital for their companies from accredited American investors and who are willing to physically reside in an Opportunity Zone. This will minimize the risks of visa overstays and espionage while maximizing the potential economic benefits by bringing companies that have capital ready to deploy to the United States.

Visa Characteristics

Initial Visa Application Requirements

Visa Extension Requirements

Recommended Timeline

Miscellaneous Recommendations

Conclusion

America is in a race for global talent, especially when it comes to AI. The data shows that the majority of leading AI companies in America were created with at least one immigrant founder—but our immigration system makes it incredibly difficult for experts to come and build their companies in America, a serious strategic disadvantage compared to China, which produces dramatically more STEM graduates. By creating an AI National Security Entrepreneur Visa targeting high-skill founders who have already raised funds, Congress can quickly close the gap with China, bringing the best and brightest from around the world to America to build their companies. Not only will this help create jobs across the United States, it will make America the undisputed superpower in AI, allowing us to set standards and control the development of a technology whose impact may surpass those of all other innovations in recent decades.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
Why are existing visa programs like EB-5, H1-B, or the J-1 insufficient for AI startups?
Existing visas are a poor fit for startup founders: guidelines for what counts as a significant investment are unclear, many founders lack significant personal funds, ownership requirements are out of sync with the norms of venture-backed startups, founders often lack the employer-employee relationship these visas assume, and a host of other issues arise. The National Security AI Visa allows entrepreneurs to move regardless of personal wealth, as long as they have raised funding from accredited American investors; provides a pathway to citizenship so founders know they can continue building their companies in America; and presents a more streamlined pathway for startup founders to move to the United States, making the visa more accessible and attractive. Given the economic and national security importance of AI, creating a standalone visa will have a disproportionate impact on attracting talent from the field to America at a critical time, likely with significant economic and national security benefits.
Is the United States really at serious risk of missing out on top talent?

Yes. Take it from the founder of Yahoo and naturalized American citizen, Jerry Yang, who said “If I had to worry about a visa, maybe Yahoo wouldn’t have gotten started,” and that “There are more places around the world where entrepreneurship has taken off… so founders have more choices. And to the extent that our immigration policies are not so welcoming, people don’t want to come.”

How does this compare to other legislative proposals, such as the 2021 LIKE Act or The Startup Act?
The AI Entrepreneur Act is significantly narrower in scope than other proposals, which generally have not been restricted by nation or industry and often had additional requirements related to entrepreneurship and research unrelated to the visa itself. Additionally, the AI National Security Entrepreneur Visa only supports entrepreneurs who have already raised funding and who agree to reside and build their business in an Opportunity Zone, ensuring that jobs for Americans are created and spread outside of existing tech hubs.
What is an Opportunity Zone, and why should entrepreneurs be required to reside in one?

Created under President Trump’s Tax Cuts and Jobs Act, Opportunity Zones are designated areas across all 50 states deemed economically distressed by the Internal Revenue Service. Many previous technology booms have created outsized benefits for existing wealthy tech hubs like San Francisco and New York City thanks to positive agglomeration and network effects. By pushing entrepreneurs to found their businesses in an Opportunity Zone, which by its nature is an economically distressed area, the visa will help bring new jobs and opportunities to areas that previously had a difficult time attracting tech entrepreneurs and high-growth startups.

Is there another way to provide more power to the states and local jurisdictions for immigration rather than creating another federally administered program?

The Economic Innovation Group has written extensively about the concept of a “heartland visa,” which would allow counties to decide on specific new immigration pathways based on their distinct needs. The AI Entrepreneur Visa could be structured similarly, with states or localities opting in to the program and deciding the number and type of AI entrepreneurs they would like to bring to their communities.

Can the visa be further narrowed? If so, what options are there?

Yes. Some options to further narrow the visa:



  • Decrease the number of countries eligible for the pilot visa program.

  • Create a cap for the number of potential founders per year (recommended minimum of 10,000 to create a sample size large enough for an economic impact assessment).

  • Create a mandatory sunset for the program, requiring it to be renewed after five or 10 years.

  • Increase equity ownership requirements or implement a maximum number of applicants per company.

  • Allow individual states or counties to opt in to the program rather than it being available for the entire nation’s Opportunity Zones at the start.

Can the visa be further expanded? If so, what options are there?

Yes. Some options to further expand the visa:



  • Increase the number of countries eligible to apply for the visa.

  • Expand the technologies/industries eligible for the visa.

  • Decrease or eliminate the threshold for the amount of funds raised to be eligible.

  • Decrease or eliminate equity ownership requirements.

  • Require only that the company’s primary physical place of business be within an Opportunity Zone, rather than requiring founders to personally reside there.

Is there a cost to implementing the visa program?
No. The program can be set up as a fee-based application process where applicants pay a fee large enough to offset operating costs, meaning that no costs will be incurred by taxpayers.
Are there ways to offset the number of net new high-skilled immigrants coming into the country?
Yes. One could consider lowering the cap of visas for existing programs like the EB-5 Investment Visa as an offset.
Any wild high-impact ideas that could be added to the visa?
One option: add an “Operation Paperclip”-style initiative to the visa that gives the Secretaries of Defense and Commerce the authority to proactively create a list each year of the top ~1,000 people from around the world they believe would be most impactful for U.S. national and economic security and offer them a green card (assuming they pass a background check after accepting the offer). This could be used for scientists, executives, and even top workers in critical industries like semiconductor fabrication and design.

Improving Health Equity Through AI

Clinical decision support (CDS) artificial intelligence (AI) refers to systems and tools that utilize AI to assist healthcare professionals in making more informed clinical decisions. These systems can alert clinicians to potential drug interactions, suggest preventive measures, and recommend diagnostic tests based on patient data. Inequities in CDS AI pose a significant challenge to healthcare systems and individuals, potentially exacerbating health disparities and perpetuating an already inequitable healthcare system. However, efforts to establish equitable AI in healthcare are gaining momentum, with support from various governmental agencies and organizations. These efforts include substantial investments, regulatory initiatives, and proposed revisions to existing laws to ensure fairness, transparency, and inclusivity in AI development and deployment. 

Policymakers have a critical opportunity to enact change through legislation, implementing standards in AI governance, auditing, and regulation. We need regulatory frameworks, investment in AI accessibility, incentives for data collection and collaboration, and rules for auditing and governance of the AI systems used in CDS tools. By addressing these challenges and implementing proactive measures, policymakers can harness AI’s potential to enhance healthcare delivery and reduce disparities, ultimately promoting equitable access to quality care for everyone.

Challenge and Opportunity 

AI has the potential to revolutionize healthcare, but its misuse and unequal access can lead to unintended dire consequences. For instance, algorithms may inadvertently favor certain demographic groups, allocating resources disproportionately and deepening disparities. Efforts to establish equitable AI in healthcare have seen significant momentum and support from various governmental agencies and organizations, particularly regarding medical devices. The White House recently announced substantial investments, including $140 million for the National Science Foundation (NSF) to establish institutes dedicated to assessing existing generative AI (GenAI) systems. While not specific to healthcare, President Biden’s blueprint for an “AI Bill of Rights” outlines principles to guide AI design, use, and deployment, aiming to protect individuals from its potential harms. The Food and Drug Administration (FDA) has also taken steps by releasing a beta version of its regulatory framework for medical device AI used in healthcare. The Department of Health and Human Services (HHS) has proposed revisions to Section 1557 of the Patient Protection and Affordable Care Act, which would explicitly prohibit discrimination in the use of clinical algorithms to support decision-making in covered entities.

How Inequities in CDS AI Hurt Healthcare Delivery

Exacerbate and Perpetuate Health Disparities

The inequitable use of AI has the potential to exacerbate health disparities. Studies have revealed how population health management algorithms, which proxy healthcare needs with costs, allocate more care to white patients than to Black patients, even when health needs are accounted for. This disparity arises because the proxy target, correlated with access to and use of healthcare services, tends to identify frequent users of healthcare services, who are disproportionately less likely to be Black patients due to existing inequities in healthcare access. Inequitable AI perpetuates data bias when trained on skewed or incomplete datasets, inheriting and reinforcing the biases through algorithmic decisions, thereby deepening existing disparities and hindering efforts to achieve fairness and equity in healthcare delivery.
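The mechanism can be seen in a minimal synthetic sketch (not drawn from the cited study): if two groups have identical underlying need but different historical access to care, a model that ranks patients by a cost proxy will systematically under-select the lower-access group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true health need.
group = rng.integers(0, 2, n)              # 0 = Group A, 1 = Group B
need = rng.normal(loc=50, scale=10, size=n)

# Historical access differs: Group B incurs lower cost for the same need
# (fewer visits, less billing), so cost is a biased proxy for need.
access = np.where(group == 0, 1.0, 0.6)
cost = need * access + rng.normal(0, 5, n)

# A population health program enrolls the top 10% by (proxy) cost.
threshold = np.quantile(cost, 0.90)
enrolled = cost >= threshold

for g, name in [(0, "Group A"), (1, "Group B")]:
    mask = group == g
    print(f"{name}: mean need {need[mask].mean():.1f}, "
          f"enrollment rate {enrolled[mask].mean():.1%}")
# Despite equal need, the lower-access group is enrolled far less often.
```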

Increased Costs

Algorithms trained on biased datasets may exacerbate disparities by misdiagnosing or overlooking conditions prevalent in marginalized communities, leading to unnecessary tests, treatments, and hospitalizations and driving up costs. Health disparities, estimated to contribute $320 billion in excess healthcare spending, are compounded by the uneven adoption of AI in healthcare. The unequal access to AI-driven services widens gaps in healthcare spending, with affluent communities and resource-rich health systems often pioneering AI technologies, leaving underserved areas behind. Consequently, delayed diagnoses and suboptimal treatments escalate healthcare spending due to preventable complications and advanced disease stages. 

Decreased Trust

The unequal distribution of AI-driven healthcare services breeds skepticism within marginalized communities. For instance, in one study, an algorithm demonstrated statistical fairness in predicting healthcare costs for Black and white patients, but disparities emerged in service allocation, with more white patients receiving referrals despite similar sickness levels. This disparity undermines trust in AI-driven decision-making processes, ultimately adding to mistrust in healthcare systems and providers.

How Bias Infiltrates CDS AI

Lack of Data Diversity and Inclusion

The datasets used to train AI models often mirror societal and healthcare inequities, propagating biases present in the data. For instance, if a model is trained on data from a healthcare system where certain demographic groups receive inferior care, it will internalize and perpetuate those biases. Compounding the issue, limited access to healthcare data leads AI researchers to rely on a handful of public databases, contributing to dataset homogeneity and lacking diversity. Additionally, while many clinical factors have evidence-based definitions and data collection standards, attributes that often account for variance in healthcare outcomes are less defined and more sparsely collected. As such, efforts to define and collect these attributes and promote diversity in training datasets are crucial to ensure the effectiveness and fairness of AI-driven healthcare interventions.
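As a minimal illustration of the kind of dataset diversity audit described above (column names and reference shares are hypothetical, not drawn from any specific system), subgroup representation in a training set can be compared against a reference population before model development begins:

```python
import pandas as pd

def representation_report(df, column, reference_shares):
    """Compare subgroup shares in a training dataset against reference
    population shares and flag under-represented groups."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, ref_share in reference_shares.items():
        obs_share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(obs_share, 3),
            "reference_share": ref_share,
            # Simple flag: observed share below 80% of the reference share.
            "under_represented": obs_share < 0.8 * ref_share,
        })
    return pd.DataFrame(rows)

# Hypothetical training data and census-style reference shares.
train = pd.DataFrame({"race_ethnicity": ["White"] * 700 + ["Black"] * 80 +
                      ["Hispanic"] * 120 + ["Asian"] * 100})
reference = {"White": 0.59, "Black": 0.14, "Hispanic": 0.19, "Asian": 0.06}

print(representation_report(train, "race_ethnicity", reference))
```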

Lack of Transparency and Accountability

While AI systems are designed to streamline processes and enhance decision-making across healthcare, they also run the risk of inadvertently inheriting discrimination from their human creators and the environments from which they draw data. Many AI decision support technologies also struggle with a lack of transparency, making it challenging to fully comprehend and appropriately use their insights in a complex, clinical setting. By gaining clear visibility into how AI systems reach conclusions and establishing accountability measures for their decisions, the potential for harm can be mitigated and fairness promoted in their application. Transparency allows for the identification and remedy of any inherited biases, while accountability incentivizes careful consideration of how these systems may negatively or disproportionately impact certain groups. Both are necessary to build public trust that AI is developed and used responsibly.

Algorithmic Biases

The potential for algorithmic bias to permeate healthcare AI is significant and multifaceted. Algorithms and heuristics used in AI models can inadvertently encode biases that further disadvantage marginalized groups. For instance, an algorithm that assigns greater importance to variables like income or education levels may systematically disadvantage individuals from socioeconomically disadvantaged backgrounds. 

Data scientists can adjust algorithms to reduce AI bias by tuning hyperparameters that optimize decision thresholds. These thresholds for flagging high-risk patients may need adjustment for specific groups to balance accuracy. Regular monitoring ensures thresholds address emerging biases over time. In addition, fairness-aware algorithms can apply statistical parity, where protected attributes like race or gender do not predict outcomes. 
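A minimal sketch of one simple operationalization of the ideas above, assuming an upstream CDS model that outputs risk scores (all variable names are hypothetical): per-group decision thresholds are chosen so each group is flagged at the same rate, i.e., statistical (demographic) parity on the flag decision, and can be re-estimated during routine monitoring.

```python
import numpy as np

def parity_thresholds(scores, groups, flag_rate=0.10):
    """Choose a per-group score threshold so each group is flagged at the
    same rate (statistical/demographic parity on the flag decision)."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # Flag the top `flag_rate` fraction within each group.
        thresholds[g] = np.quantile(group_scores, 1.0 - flag_rate)
    return thresholds

def flag(scores, groups, thresholds):
    """Apply the group-specific thresholds to produce flag decisions."""
    cutoffs = np.array([thresholds[g] for g in groups])
    return scores >= cutoffs

# Example with synthetic risk scores from an upstream model.
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=5_000)
scores = rng.beta(2, 5, size=5_000) + np.where(groups == "B", -0.05, 0.0)

thresholds = parity_thresholds(scores, groups, flag_rate=0.10)
flags = flag(scores, groups, thresholds)
for g in ["A", "B"]:
    print(g, f"flag rate: {flags[groups == g].mean():.1%}")
# Both groups are flagged at roughly 10%; thresholds can be re-estimated
# periodically as part of monitoring for drift and emerging bias.
```

Whether parity on flag rates is the right fairness target is itself a clinical and policy judgment; the sketch only shows that thresholds are an adjustable, auditable lever.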

Unequal Access

Unequal access to AI technology exacerbates existing disparities and subjects the entire healthcare system to heightened bias. Even if an AI model itself is developed without inherent bias, the unequal distribution of access to its insights and recommendations can perpetuate inequities. When only healthcare organizations that can afford advanced AI for CDS leverage these tools, their patients enjoy the advantages of improved care that remain inaccessible to disadvantaged groups. Federal policy initiatives must prioritize equitable access to AI by implementing targeted investments, incentives, and partnerships for underserved populations. By ensuring that all healthcare entities, regardless of financial resources, have access to AI technologies, policymakers can help mitigate biases and promote fairness in healthcare delivery.

Misuse

The potential for bias in healthcare through the misuse of AI extends beyond the composition of training datasets to encompass the broader context of AI application and utilization. Ensuring the generalizability of AI predictions across diverse healthcare settings is as imperative as equity in the development of algorithms. It necessitates a comprehensive understanding of how AI applications will be deployed and whether the predictions derived from training data will effectively translate to various healthcare contexts. Failure to consider these factors may lead to improper use or abuse of AI insights. 

Opportunity

Urgent policy action is essential to address bias, promote diversity, increase transparency, and enforce accountability in CDS AI systems. By implementing responsible oversight and governance, policymakers can harness the potential of AI to enhance healthcare delivery and reduce costs, while also ensuring fairness and inclusion. Regulations mandating the auditing of AI systems for bias and requiring explainability, auditing, and validation processes can hold organizations accountable for the ethical development and deployment of healthcare technologies. Furthermore, policymakers can establish guidelines and allocate funding to maximize the benefits of AI technology while safeguarding vulnerable groups. With lives at stake, eliminating bias and ensuring equitable access must be a top priority, and policymakers must seize this opportunity to enact meaningful change. The time for action is now.

Plan of Action

The federal government should establish and implement standards in AI governance and auditing for algorithms that directly influence patients’ diagnosis, treatment, and access to care. These efforts should address and measure issues such as bias, transparency, accountability, and fairness. They should be flexible enough to accommodate advancements in AI technology while ensuring that ethical considerations remain paramount.

Regulate Auditing and Governance of AI

The federal government should implement a detailed auditing framework for AI in healthcare, beginning with stringent pre-deployment evaluations that require rigorous testing and validation against established industry benchmarks. These evaluations should thoroughly examine data privacy protocols to ensure patient information is securely handled and protected. Algorithmic transparency must be prioritized, requiring developers to provide clear documentation of AI decision-making processes to facilitate understanding and accountability. Bias mitigation strategies should be scrutinized to ensure AI systems do not perpetuate or exacerbate existing healthcare disparities. Performance reliability should be continuously monitored through real-time data analysis and periodic reviews, ensuring AI systems maintain accuracy and effectiveness over time. Regular audits should be mandated to verify ongoing compliance, with a focus on adapting to evolving standards and incorporating feedback from healthcare professionals and patients. AI algorithms evolve due to shifts in the underlying data, model degradation, and changes to application protocols; therefore, routine audits should occur at least annually.

With nearly 40% of Americans receiving benefits under a Medicare or Medicaid program, and the tremendous growth and focus on value-based care, the Centers for Medicare & Medicaid Services (CMS) is positioned to provide the catalyst to measure and govern equitable AI. Since many health systems and payers leverage models across multiple other populations, this could positively affect the majority of patient care. Both the companies making critical decisions and those developing the technology should be obliged to assess the impact of decision processes and submit select impact-assessment documentation to CMS. 

For healthcare facilities participating in CMS programs, this mandate should be included as a Condition of Participation. Through this same auditing process, the federal government can capture insight into the performance and responsibility of AI systems. These insights should be made available to healthcare organizations throughout the country to increase transparency and quality between AI partners and decision-makers. This will help the Department of Health and Human Services (HHS) meet the “Promote Trustworthy AI Use and Development” pillar of its AI strategy (Figure 1).

Figure 1. HHS AI Strategy

Congress must enforce these systems of accountability for advanced algorithms. Such work could be done by amending and passing the 2023 Algorithmic Accountability Act. This proposal mandates that companies evaluate the effects of automating critical decision-making processes, including those already automated. However, it fails to make these results visible to the organizations that leverage these tools. An extension should be added to make results available to governing bodies and member organizations, such as the American Hospital Association (AHA).

Invest in AI Accessibility and Improvement

AI that integrates the social and clinical risk factors that influence preventive care could be beneficial in managing health outcomes and resource allocation, particularly for facilities serving predominantly rural areas and patients. While organizations serving large proportions of marginalized patients may have access to nascent AI tools, those tools are likely inadequate because they were not trained on data that adequately represents these populations. Therefore, the federal government should allocate funding to support AI access for healthcare organizations serving higher percentages of vulnerable populations. Initial support should come through subsidies to AI service providers that support safety-net and rural health providers.

The Health Resources and Services Administration should deploy strategic innovation funding to federally qualified health centers and rural health providers to contribute to and consume equitable AI. This could include funding for academic institutions, research organizations, and private-sector partnerships focused on developing AI algorithms that are fair, transparent, and unbiased specifically for these populations.

Large language model (LLM) and generative AI (GenAI) solutions are being rapidly adopted in CDS tooling, providing clinicians with an instant second opinion in diagnostic and treatment scenarios. While these tools are powerful, they are not infallible and pose a risk without the ability to evolve. Therefore, research regarding AI self-correction should be a focus of future policy. Self-correction is the ability of an LLM or GenAI system to identify and rectify errors without external or human intervention (a minimal sketch of such a correction loop appears after this list). Mastering the ability of these complex engines to recognize possibly life-threatening errors would be crucial to their adoption and application. Healthcare agencies, such as the Agency for Healthcare Research and Quality (AHRQ) and the Office of the National Coordinator for Health Information Technology, should fund and oversee research on AI self-correction specifically leveraging clinical and administrative claims data. This should be an extension of either of the following efforts:

Much like the Breakthrough Device Program, AI that can prove it decreases health disparities and/or increases accessibility can be fast-tracked through the audit process and highlighted as “best-in-class.”
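To illustrate one line of the self-correction research described above, the sketch below shows a minimal generate-critique-revise loop. It is a hypothetical illustration: `call_llm` stands in for whatever model interface a research team uses, and the prompts and retry limit are assumptions rather than a validated clinical protocol.

```python
# Minimal sketch of an LLM self-correction loop for research purposes only.
# `call_llm` is a hypothetical stand-in for a model API; the prompts and retry
# limit are illustrative assumptions, not a validated clinical protocol.
from typing import Callable

def self_correct(call_llm: Callable[[str], str], question: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Question from a clinician:\n{question}\n\nDraft an answer.")
    for _ in range(max_rounds):
        critique = call_llm(
            "Review the draft answer below for factual or clinical errors. "
            "Reply 'NO ISSUES' if none are found.\n\n"
            f"Question: {question}\nDraft: {draft}"
        )
        if "NO ISSUES" in critique.upper():
            break  # the model found no errors to correct
        draft = call_llm(
            "Revise the draft to address the critique.\n\n"
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}"
        )
    return draft
```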

Incentivize Data Collection and Collaboration

The newly released “Driving U.S. Innovation in Artificial Intelligence” roadmap considers healthcare a high-impact area for AI and makes specific recommendations for future “legislation that supports further deployment of AI in health care and implements appropriate guardrails and safety measures to protect patients,… and promoting the usage of accurate and representative data.” While auditing and enabling accessibility in healthcare AI, the government must ensure that the path to building equity into AI solutions does not remain obstructed. This entails improved data collection and data sharing so that AI algorithms are trained on diverse and representative datasets. As the roadmap declares, there must be “support [for] the NIH in the development and improvement of AI technologies…with an emphasis on making health care and biomedical data available for machine learning and data science research while carefully addressing the privacy issues raised by the use of AI in this area.”

These data exist across the healthcare ecosystem, and therefore decentralized collaboration can enable a more diverse corpus of data to be available to train AI. This may involve incentivizing healthcare organizations to share anonymized patient data for research purposes while ensuring patient privacy and data security. This incentive could come in the form of increased reimbursement from CMS for particular services or conditions that involve collaborating parties.

To ensure that diverse perspectives are considered during the design and implementation of AI systems, any regulation handed down from the federal government should not only encourage but also evaluate the diversity and inclusivity of AI development teams. This can help mitigate biases and ensure that AI algorithms are more representative of the diverse patient populations they serve. This should be evaluated by accrediting parties such as The Joint Commission (a CMS-approved accrediting organization) through its Healthcare Equity Certification.

Conclusion

Achieving health equity through AI in CDS requires concerted efforts from policymakers, healthcare organizations, researchers, and technology developers. AI’s immense potential to transform healthcare delivery and improve outcomes can only be realized if accompanied by measures to address biases, ensure transparency, and promote inclusivity. As we navigate the evolving landscape of healthcare technology, we must remain vigilant in our commitment to fairness and equity so that AI can serve as a tool for empowerment rather than perpetuating disparities. Through collective action and awareness, we can build a healthcare system that truly leaves no one behind.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
What are some challenges in auditing AI systems for bias and accountability?
AI systems often function as black boxes with intricate algorithms, making them complex and opaque to the end user. Establishing guidelines that respect the proprietary nature and complexity of these capabilities will be necessary. Privacy-preserving evaluation methods and secure reporting will help build trust with the developers of these CDS AI systems.
How can healthcare organizations be incentivized to share anonymized patient data for AI research while ensuring patient privacy?
A multifaceted approach will be essential. Regulatory frameworks and clear guidelines can build trust among developers and users of CDS AI, while financial incentives like funding, grants, and revenue sharing can motivate participation. Advanced anonymization techniques and secure data platforms should be required to ensure privacy.
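As one concrete example of such an anonymization check, the sketch below tests whether a dataset's quasi-identifiers satisfy k-anonymity before release. The column names and the value of k are hypothetical, and real data-sharing programs would pair checks like this with formal de-identification standards and secure platforms.

```python
# Illustrative sketch: a k-anonymity check on quasi-identifiers before a dataset
# is shared for AI research. Column names and k are hypothetical assumptions.
import pandas as pd

def k_anonymity_violations(df: pd.DataFrame, quasi_identifiers: list, k: int = 5) -> pd.DataFrame:
    """Return quasi-identifier combinations shared by fewer than k patients."""
    counts = df.groupby(quasi_identifiers).size().reset_index(name="n_patients")
    return counts[counts["n_patients"] < k]

records = pd.DataFrame({
    "zip3":       ["902", "902", "913", "913", "913"],
    "birth_year": [1950, 1950, 1987, 1987, 1987],
    "diagnosis":  ["I10", "I10", "E11", "E11", "E11"],
})
# Any rows returned indicate groups too small to release without further generalization.
print(k_anonymity_violations(records, ["zip3", "birth_year"], k=3))
```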
What specific measures can policymakers implement to ensure that AI technology and proposed auditing systems are being leveraged accordingly?
Mandatory reporting and transparency requirements will be key, as will establishing independent oversight bodies. Enforcing compliance, including penalties for noncompliance, will keep practices current. Additionally, investing in training programs and resources for policymakers, auditors, and industry professionals will bolster the auditing infrastructure.

Supporting States in Balanced Approaches to AI in K-12 Education

Congress must ensure that state education agencies (SEAs) and local education agencies (LEAs) are provided a gold-standard policy framework, critical funding, and federal technical assistance that supports how they govern, map, measure, and manage the deployment of accessible and inclusive artificial intelligence (AI) in educational technology across all K-12 educational settings. Legislation designed to promote access to an industry-designed and accepted policy framework will help guide SEAs and LEAs in their selection and use of innovative and accessible AI designed to align with the National Educational Technology Plan’s (NETP) goals and reduce current and potential divides in AI.

Although the AI revolution is definitively underway across all sectors of U.S. society, questions still remain about AI’s accuracy, accessibility, how its broad application can influence how students are represented within datasets, and how educators use AI in K-12 classrooms. There is both need and capacity for policymakers to support and promote thoughtful and ethical integration of AI in education and to ensure that its use complements and enhances inclusive teaching and learning while also protecting student privacy and preventing bias and discrimination. Because no federal legislation currently exists that aligns with and accomplishes these goals, Congress should develop a bill that targets grant funds and technical assistance to states and districts so they can create policy that is backed by industry and designed by educators and community stakeholders.

Challenge and Opportunity

With direction provided by Congress, the U.S. Department of Commerce, through the National Institute of Standards and Technology (NIST), has developed the Artificial Intelligence Risk Management Framework (NIST Framework). Given that some states and school districts are in the early stages of determining what type of policy is needed to comprehensively integrate AI into education while also addressing both known and potential risks, this landmark guidance can serve as the impetus for developing legislation and directed funding designed to help.

A new bill focused on applying the NIST Framework to K-12 education could create both a new federally funded grant program and a technical assistance center designed to help states and districts infuse AI into accessible education systems and technology, and also prevent discrimination and/or data security breaches in teaching and learning. As noted in the NIST Framework:

AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development, and uses with intended aim and values. Core concepts in responsible AI emphasize human centricity, social responsibility, and sustainability. AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts. Understanding and managing the risks of AI systems will help to enhance trustworthiness, and in turn, cultivate public trust.

In a recent national convening hosted by the U.S. Department of Education, Office of Special Education Programs, national leaders in education technology and special education discussed several key themes and questions, including: 

Participants emphasized the importance of addressing the digital divide associated with AI and leveraging AI to help improve accessibility for students, addressing AI design principles to help educators use AI as a tool to improve student engagement and performance, and assuring guidelines and policies are in use to protect student confidentiality and privacy. Stakeholders also specifically and consistently noted “the need for policy and guidance on the use of AI in education and, overall, the convening emphasized the need for thoughtful and ethical integration of AI in education, ensuring that it complements and enhances the learning experience,” according to notes from participants.

Given the rapid advancement of innovation in education tools, states and districts are urgently looking for ways to invest in AI that can support teaching and learning. As reported in fall 2023:

Just two states—California and Oregon—have offered official guidance to school districts on using AI [in Fall 2023]. Another 11 states are in the process of developing guidance, and the other 21 states who have provided details on their approach do not plan to provide guidance on AI for the foreseeable future. The remaining states—17, or one-third—did not respond [to requests for information] and do not have official guidance publicly available.

While states and school districts are in various stages of developing policies around the use of AI in K-12 classrooms, to date there is no federally supported option that would help them make cohesive plans to invest in and use AI in evidence-based teaching and to support the administrative and other tasks educators have outside of instructional time. A major investment for education could leverage the expertise of state and local experts and encourage collaboration around breakthrough innovations to address both the opportunities and challenges. There is general agreement that investments in and support for AI within K-12 classrooms will spur educators, students, parents, and policymakers to come together to consider what skills both educators and students need to navigate and thrive in a changing educational landscape and changing economy. Federal investments in AI – through the application and use of the NIST Framework – can help ensure that educators have the tools to teach and support the learning of all U.S. learners. To that end, any federal policy initiative must also ensure that state, federal, and local investments in AI do not overlook the lessons learned by leading researchers who have spent years studying ways to infuse AI into America’s classrooms. As noted by Satya Nitta, former head researcher at IBM, 

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can represents a profound misunderstanding of what AI is actually capable of… We missed something important. At the heart of education, at the heart of any learning, is [human] engagement.

Additionally, while current work led by Kristen DiCerbo at Khan Academy shows promise in the use of ChatGPT in Khanmigo, DiCerbo admits that their online 30-minute tutoring program, which utilizes AI, “is a tool in your toolbox” and is “not a solution to replacing humans” in the classroom. “In one-to-one teaching, there is an element of humanity that we have not been able to replicate—and probably should not try to replicate—in artificial intelligence. AI cannot respond to emotion or become your friend.”

With these data in mind, there is a great need and timely opportunity to support states and districts in developing flexible standards based on quality evidence. The NIST Framework – which was designed as a voluntary guide – is also “intended to be practical and adaptable.” State and district educators would benefit from targeted federal legislation that would elevate the Framework’s availability and applicability to current and future investments in AI in K-12 educational settings and to help ensure AI is used in a way that is equitable, fair, safe, and supportive of educators as they seek to improve student outcomes. Educators need access to industry-approved guidance, targeted grant funding, and technical assistance to support their efforts, especially as AI technologies continue to develop. Such state- and district-led guidance will help AI be operationalized in flexible ways to support thoughtful development of policies and best practices that will ensure school communities can benefit from AI, while also protecting students from potential harms.

Plan of Action

Federal legislation would provide funding for grants and technical assistance to states and districts in planning and implementing comprehensive AI policy-to-practice plans utilizing the NIST Framework to build a locally designed plan to support and promote thoughtful and ethical integration of AI in education and to ensure that its use complements and enhances inclusive teaching, accessible learning, and an innovation-driven future for all.

Legislative Specifications

Sec. I: Grant Program to States

Purposes: 

(A) To provide grants to State Education Agencies (SEA/State) to guide and support local education agencies (LEA/district) in the planning, development, and investment in AI in K-12 educational settings; ensuring AI is used in a way that is equitable, fair, safe, and can support educators and help improve student outcomes. 

(B) To provide federal technical assistance (TA) to States and districts in the planning, development, and investments in AI in K-12 education and to evaluate State use of funds. 

Each LEA/district must be representative of the students and the school communities across the state in size, demographics, geographic locations, etc. 

Other requirements for state/district planning are:

Timeline

Sec. II: Federal TA Center: To assist states in planning and implementing state-designed standards for AI in education.

Cost: 6% set-aside of overall appropriated annual funding

The TA center must achieve, at a minimum, the following expected outcomes:

(a) Increased capacity of SEAs to develop useful guidance, via the NIST Framework, the 2024 National Educational Technology Plan, and recommendations from the Office of Educational Technology, on the use of artificial intelligence (AI) in schools to support K-12 educators and K-12 students in the State and the LEAs of the State;

(b) Increased capacity of SEAs and LEAs to use new State- and LEA-led guidance that ensures AI is used in a way that is equitable, fair, and safe, protects against bias and discrimination against all students, and can support educators and help improve student outcomes;

(c) Improved capacity of SEAs to assist LEAs, as needed, in using data to drive decisions related to the use of K-12 funds so that AI is used in a way that is equitable, fair, and safe, and can support educators and help improve student outcomes; and

(d) Collection of data on these and other areas as outlined by the Secretary.

Timeline: TA Center is funded by the Secretary upon congressional action to fund the grant opportunity. 

Conclusion

State and local education agencies need essential tools to support their use of accessible and inclusive AI in educational technology across all K-12 educational settings. Educators need access to industry-approved guidance, targeted grant funding, and technical assistance to support their efforts. It is essential that AI is operationalized in varying degrees and capacities to support thoughtful development of policies and best practices that ensure school communities can benefit from AI, while also being protected from its potential harms, now and in the future.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

An Early Warning System for AI-Powered Threats to National Security and Public Safety

In just a few years, state-of-the-art artificial intelligence (AI) models have gone from not reliably counting to 10 to writing software, generating photorealistic videos on demand, combining language and image processing to guide robots, and even advising heads of state in wartime. If responsibly developed and deployed, AI systems could benefit society enormously. However, emerging AI capabilities could also pose severe threats to public safety and national security. AI companies are already evaluating their most advanced models to identify dual-use capabilities, such as the capacity to conduct offensive cyber operations, enable the development of biological or chemical weapons, and autonomously replicate and spread. These capabilities can arise unpredictably and undetected during development and after deployment. 

To better manage these risks, Congress should set up an early warning system for novel AI-enabled threats to provide defenders maximal time to respond to a given capability before information about it is disclosed or leaked to the public. This system should also be used to share information about defensive AI capabilities. To develop this system, we recommend:

Challenge and Opportunity

In just the past few years, advanced AI has surpassed human capabilities across a range of tasks. Rapid progress in AI systems will likely continue for several years, as leading model developers like OpenAI and Google DeepMind plan to spend tens of billions of dollars to train more powerful models. As models gain more sophisticated capabilities, some of these could be dual-use, meaning they will “pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters”—but in some cases may also be applied to defend against serious risks in those domains. 

New AI capabilities can emerge unexpectedly. AI companies are already evaluating models to check for dual-use capabilities, such as the capacity to enhance cyber operations, enable the development of biological or chemical weapons, and autonomously replicate and spread. These capabilities could be weaponized by malicious actors to threaten national security or could lead to brittle, uncontrollable systems that cause severe accidents. Despite the use of evaluations, it is not clear what should happen when a dual-use capability is discovered. 

An early-warning system would allow the relevant actors to access evaluation results and other details of dual-use capability reports to strengthen responses to novel AI-powered threats. Various actors could take concrete actions to respond to risks posed by dual-use AI capabilities, but they need lead time to coordinate and develop countermeasures. For example, model developers could mitigate immediate risks by restricting access to models. Governments could work with private-sector actors to use new capabilities defensively or employ enhanced, targeted export controls to deny foreign adversaries access to strategically relevant capabilities.

A warning system should ensure secure information flow between three types of actors:

  1. Finders: the parties that can initially identify dual-use capabilities in models. These include AI company staff, government evaluators such as the U.S. AI Safety Institute (USAISI), contracted evaluators and red-teamers, and independent security researchers.
  2. Coordinators: the parties that provide the infrastructure for collecting, triaging, and directing dual-use AI capability reports.
  3. Defenders: the parties that could take concrete actions to mitigate threats from dual-use capabilities or leverage them for defensive purposes, such as advanced AI companies and various government agencies.

While this system should cover a variety of finders, defenders, and capability domains, one example of early warning and response in practice might look like the following: 

The current environment has some parts of a functional early-warning system, such as reporting requirements for AI developers described in Executive Order 14110, and existing interagency mechanisms for information-sharing and coordination like the National Security Council and the Vulnerabilities Equities Process.

However, gaps exist across the current system:

  1. There is a lack of clear intake channels and standards for capability reporting to the government outside of mandatory reporting under EO 14110. Also, parts of the Executive Order that mandate reporting may be overturned in the next administration, or this specific use of the Defense Production Act (DPA) could be successfully struck down in the courts. 
  2. Various legal and operational barriers mean that premature public disclosure, or no disclosure at all, is likely to happen. This might look like an independent researcher publishing details about a dangerous offensive cyber capability online, or an AI company failing to alert appropriate authorities due to concerns about trade secret leakage or regulatory liability. 
  3. The Bureau of Industry and Security (BIS) intakes mandatory dual-use capability reports, but it is not tasked to be a coordinator and is not adequately resourced for that role, and information-sharing from BIS to other parts of government is limited. 
  4. There is also a lack of clear, proactive ownership of response around specific types of AI-powered threats. Unless these issues are resolved, AI-powered threats to national security and public safety are likely to arise unexpectedly without giving defenders enough lead time to prepare countermeasures. 

Plan of Action

Improving the U.S. government’s ability to rapidly respond to threats from novel dual-use AI capabilities requires actions from across government, industry, and civil society. The early warning system detailed below draws inspiration from “coordinated vulnerability disclosure” (CVD) and other information-sharing arrangements used in cybersecurity, as well as the federated Sector Risk Management Agency (SRMA) approach used to organize protections around critical infrastructure. The following recommended actions are designed to address the issues with the current disclosure system raised in the previous section.

First, Congress should assign and fund an agency office within BIS to act as a coordinator: an information clearinghouse for receiving, triaging, and distributing reports on dual-use AI capabilities. In parallel, Congress should require developers of advanced models to report dual-use capability evaluation results and other safety-critical information to BIS (more detail can be found in the FAQ). This creates a clear structure for finders looking to report to the government and provides capacity to triage reports and determine which information should be sent to which working groups.

This coordinating office should establish operational and legal clarity to encourage voluntary reporting and facilitate mandatory reporting. This should include the following:

BIS is suited to house this function because it already receives reports on dual-use capabilities from companies via DPA authority under EO 14110. Additionally, it has in-house expertise on AI and hardware from administering export controls on critical emerging technology, and it has relationships with key industry stakeholders, such as compute providers. (There are other candidates that could house this function as well. See the FAQ.)

To fulfill its role as a coordinator, this office would need an initial annual budget of $8 million to handle triaging and compliance work for an annual volume of between 100 and 1,000 dual-use capability reports.2 We provide a budget estimate below:

Budget items and costs (USD):
  • Staff (15 FTE): $400,000 x 15 = $6 million
  • Technology and infrastructure (e.g., setting up initial reporting and information-sharing systems): $1.5 million
  • Communications and outreach (e.g., organizing convenings of working group lead agencies): $300,000
  • Training and workforce development: $200,000
  • Total: $8 million

The office should leverage the direct hire authority outlined by the Office of Personnel Management (OPM) and associated flexible pay and benefits arrangements to attract staff with appropriate AI expertise. We expect most of the initial reports would come from 5 to 10 companies developing the most advanced models. Later, if there is more evidence that near-term systems have capabilities with national security implications, then this office could be scaled up adaptively to allow for more fine-grained monitoring (see FAQ for more detail).

Second, Congress should task specific agencies to lead working groups of government agencies, private companies, and civil society to take coordinated action to mitigate risks from novel threats. These working groups would be responsible for responding to threats arising from reported dual-use AI capabilities. They would also work to verify and validate potential threats from reported dual-use capabilities and develop incident response plans. Each working group would be risk-specific and correspond to different risk areas associated with dual-use AI capabilities:

This working group structure enables interagency and public-private coordination in the style of SRMAs and Government Coordination Councils (GCCs) used for critical infrastructure protection. This approach distributes responsibilities for AI-powered threats across federal agencies, allowing each lead agency to be appointed based on the expertise they can leverage to deal with specific risk areas. For example, the Department of Energy (specifically the National Nuclear Security Administration) would be an appropriate lead when it comes to the intersection of AI and nuclear weapons development. In cases of very severe and pressing risks, such as threats of hundreds or thousands of fatalities, the responsibility for coordinating an interagency response should be escalated to the President and the National Security Council system.

Conclusion

Dual-use AI capabilities can amplify threats to national security and public safety but can also be harnessed to safeguard American lives and infrastructure. An early-warning system should be established to ensure that the U.S. government, along with its industry and civil society partners, has maximal time to prepare for AI-powered threats before they occur. Congress, working together with the executive branch, can lay the foundation for a secure future by establishing a government coordinating office to manage the sharing of safety-critical information across the ecosystem and tasking various agencies to lead working groups of defenders focused on specific AI-powered threats.

The longer research report this memo is based on can be accessed here.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
How does this proposal fit into the existing landscape of AI governance?
This plan builds on earlier developments in the area of AI safety testing and evaluations. First, the early-warning system would concretely connect dual-use capability evaluations with coordinated risk mitigation efforts. USAISI is set to partner with its United Kingdom equivalent to advance measurement science for AI safety and conduct safety evaluations. The FY2025 President’s Budget requests additional funds for USAISI and DOE to develop testbeds for AI security evaluations. EO 14110 mandates reporting from companies to the government on safety test results and other safety-relevant information concerning dual-use foundation models. This early-warning system uses this fundamental risk assessment work and improved visibility into model safety to concretely reduce risk.
Will this proposal stifle innovation and overly burden companies?

This plan recommends that companies developing and deploying dual-use foundation models be mandated to report safety-critical information to specific government offices. However, we expect these requirements to only apply to a few large tech companies that would be working with models that fulfill specific technical conditions. A vast majority of businesses and models would not be subject to mandatory reporting requirements, though they are free to report relevant information voluntarily.


The few companies that are required to report should have the resources to comply. An important consideration behind our plan is to, where possible and reasonable, reduce the legal and operational friction around reporting critical information for safety. This can be seen in our recommendation that relevant parties from industry and civil society work together to develop reporting standards for dual-use capabilities. Also, we suggest that the coordinating office should establish operational and legal clarity to encourage voluntary reporting and facilitate mandatory reporting, which is done with industry and other finder concerns in mind.


This plan does not place restrictions on how companies conduct their activities. Instead, it aims to ensure that all parties that have equities and expertise in AI development have the information needed to work together to respond to serious safety and security concerns. Instead of expecting companies to shoulder the responsibility of responding to novel dangers, the early-warning system distributes this responsibility to a broader set of capable actors.

What if EO 14110’s reporting requirements are struck down, and there is no equivalent statutory reporting requirement from legislation?
In the case that broader mandatory reporting requirements are not enshrined in law, there are alternative mechanisms to consider. First, companies may still make voluntary disclosures to the government, as some of the most prominent AI companies agreed to do under the White House Voluntary Commitments from September 2023. There is an opportunity to create more structured reporting agreements between finders and the government coordinator by using contractual mechanisms in the form of Information Sharing and Access Agreements, which can govern the use of dual-use capability information by federal agencies, including (for example) maintaining security and confidentiality, exempting use in antitrust actions, and implementing safeguards against unauthorized disclosure to third parties. These have been used most often by the DHS to structure information sharing with non-government parties and between agencies.
What other federal agencies could house the coordinator role? How do they compare to BIS?

Bureau of Industry and Security (BIS), Department of Commerce

  • Already intakes reports on dual-use capabilities via DPA authority under EO 14110

  • USAISI will have significant AI safety-related expertise and also sits under Commerce

  • Internal expertise on AI and hardware from administering export controls

US AI Safety Institute (USAISI), Department of Commerce

  • USAISI will have significant AI safety-related expertise

  • Part of NIST, which is not a regulator, so there may be fewer concerns on the part of companies when reporting

  • Experience coordinating relevant civil society and industry groups as head of the AI Safety Consortium

Cybersecurity and Infrastructure Security Agency (CISA), Department of Homeland Security

  • Experience managing the info-sharing regime for cyber threats that involves most relevant government agencies, including SRMAs for critical infrastructure

  • Experience coordinating with the private sector

  • Located within DHS, which has responsibilities covering counterterrorism, cyber and infrastructure protection, domestic chemical, biological, radiological, and nuclear protection, and disaster preparedness and response; that portfolio is a good fit for handling information related to dual-use capabilities

  • A Federal Advisory Committee Act exemption for DHS Federal Advisory Committees would mean working group meetings can be nonpublic and do not require representation from all industry representatives

Office of Critical and Emerging Technologies, Department of Energy (DOE)

  • Access to DOE expertise and tools on AI, including evaluations and other safety- and security-relevant work (e.g., classified testbeds in DOE National Labs)

  • Links to relevant defenders within DOE, such as the National Nuclear Security Administration

  • Partnerships with industry and academia on AI

  • This office is much smaller than the alternatives, so adding this function would require careful planning and management.

Is it too early to worry about serious risks from AI models?

Based on dual-use capability evaluations conducted on today’s most advanced models, there is no immediate concern that these models can meaningfully enhance the ability of malicious actors to threaten national security or cause severe accidents. However, as outlined in earlier sections of the memo, model capabilities have evolved rapidly in the past, and new capabilities have emerged unintentionally and unpredictably.


This memo recommends initially putting in place a lean and flexible system to support responses to potential AI-powered threats. This would serve a “fire alarm” function if dual-use capabilities emerge and would be better at reacting to larger, more discontinuous jumps in dual-use capabilities. This also lays the foundation for reporting standards, relationships between key actors, and expertise needed in the future. Once there is more concrete evidence that models have major national security implications, Congress and the president can scale up this system as needed and allocate additional resources to the coordinating office and also to lead agencies. If we expect a large volume of safety-critical reports to pass through the coordinating office and a larger set of defensive actions to be taken, then the “fire alarm” system can be shifted into something involving more fine-grained, continuous monitoring. More continuous and proactive monitoring would tighten the Observe, Orient, Decide, and Act (OODA) loop between working group agencies and model developers, by allowing agencies to track gradual improvements, including from post-training enhancements.

Why focus on capabilities? Would incident reporting be better since it focuses on concrete events? What about vulnerabilities and threat information?

While incident reporting is also valuable, an early-warning system focused on capabilities aims to provide a critical function not addressed by incident reporting: preventing or mitigating the most serious AI incidents before they even occur. Essentially, an ounce of prevention is worth a pound of cure.


Sharing information on vulnerabilities of AI systems and infrastructure and threat information (e.g., information on threat actors and their tactics, techniques, and practices) is also important, but distinct. We think there should be processes established for this as well, which could be based on Information Sharing and Analysis Centers, but it is possible that this could happen via existing infrastructure for sharing this type of information. Information sharing around dual-use capabilities, however, is specific to the AI context and requires special attention to build out the appropriate processes.

What role could the executive branch play?

While this memo focuses on the role of Congress, an executive branch that is interested in setting up or supporting an early warning system for AI-powered threats could consider the following actions.


Our second recommendation—tasking specific agencies to lead working groups to take coordinated action to mitigate risks from advanced AI systems—could be implemented by the president via Executive Order or a Presidential Directive.


Also, the National Institute of Standards and Technology could work with other organizations in industry and academia, such as advanced AI developers, the Frontier Model Forum, and security researchers in different risk domains, to standardize dual-use capability reports, making it easier to process reports coming from diverse types of finders. A common language around reporting would make it less likely that reported information is inconsistent across reports or is missing key decision-relevant elements; standardization may also reduce the burden of producing and processing reports. One example of standardization is narrowing down thresholds for sending reports to the government and taking mitigating actions. One product that could be generated from this multi-party process is an AI equivalent to the Stakeholder-Specific Vulnerability Categorization system used by CISA to prioritize decision-making on cyber vulnerabilities. A similar system could be used by the relevant parties to process reports coming from diverse types of finders and by defenders to prioritize responses and resources according to the nature and severity of the threat.
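As a rough illustration of what an AI-specific analogue to the Stakeholder-Specific Vulnerability Categorization might look like, the sketch below maps a few attributes of a capability report to a response tier. The decision points and tier names are hypothetical placeholders chosen for this memo, not CISA's actual decision model, and any real system would be defined through the standardization process described above.

```python
# Simplified, hypothetical sketch of an SSVC-style triage rule for dual-use
# capability reports. The decision points and tier names are placeholders,
# not CISA's actual Stakeholder-Specific Vulnerability Categorization model.
def triage_capability_report(capability_maturity: str,    # "speculative" | "demonstrated" | "deployed"
                             harm_uplift: str,             # "marginal" | "significant" | "severe"
                             public_exposure: str) -> str:  # "private" | "leaked" | "published"
    if harm_uplift == "severe" and capability_maturity != "speculative":
        return "act"            # escalate for immediate interagency response
    if harm_uplift == "severe" or capability_maturity == "deployed":
        return "attend"         # route to the lead working-group agency for countermeasures
    if public_exposure != "private" or harm_uplift == "significant":
        return "track_closely"  # coordinator monitors and requests follow-up evaluations
    return "track"              # log the report and revisit at the next review cycle

print(triage_capability_report("demonstrated", "significant", "private"))  # -> "track_closely"
```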

Should all of this be done by the government? What about a more prominent role for industry and civil society, who are at the forefront of understanding advanced AI and its risks?

The government has a responsibility to protect national security and public safety, hence its central role in this scheme. Also, many specific agencies have relevant expertise and authorities on risk areas like biological weapons development and cybersecurity that are difficult to access outside of government.


However, it is true that the private sector and civil society have a large portion of the expertise on dual-use foundation models and their risks. The U.S. government is working to develop its in-house expertise, but this is likely to take time.


Ideally, relevant government agencies would play central roles as coordinators and defenders. However, our plan recognizes the important role that civil society and industry play in responding to emerging AI-powered threats as well. Industry and civil society can take a number of actions to move this plan forward:



  • An entity like the Frontier Model Forum can convene other organizations in industry and academia, such as advanced AI developers and security researchers in different risk domains, to standardize dual-use capability reports independent of NIST.

  • Dual-use foundation model (DUFM) developers should establish clear policies and intake procedures for independent researchers reporting dual-use capabilities.

  • DUFM developers should work to identify capabilities that could help working groups to develop countermeasures to AI threats, which can be shared via the aforementioned information-sharing infrastructure or other channels (e.g., pre-print publication).

  • In the event that a government coordinating office cannot be created, there could be an independent coordinator that fulfills a role as an information clearinghouse for dual-use AI capabilities reports. This could be housed in organizations with experience operating federally funded research and development centers like MITRE or Carnegie Mellon University’s Software Engineering Institute.

  • If it is responsible for sharing information between AI companies, this independent coordinator may need to be coupled with a safe harbor provision around antitrust litigation specifically pertaining to safety-related information. This safe harbor could be created via legislation, like a similar provision used in CISA 2015 or via a no-action letter from the Federal Trade Commission.

What is included in the reporting requirements for companies developing advanced models with potential dual-use capabilities? What companies are subject to these requirements? What information needs to be shared?

We suggest that reporting requirements should apply to any model trained using computing power greater than 10^26 floating-point operations. These requirements would only apply to a few companies working with models that fulfill specific technical conditions. However, it will be important to establish an appropriate authority within law to dynamically update this threshold as needed. For example, revising the threshold downwards (e.g., to 10^25) may be needed if algorithmic improvements allow developers to train more capable models with less compute or other developers devise new “scaffolding” that enables them to elicit dangerous behavior from already-released models. Alternatively, revising the threshold upwards (e.g., to 10^27) may be desirable due to societal adaptation or if it becomes clear that models at this threshold are not sufficiently dangerous. The following information should be included in dual-use AI capability reports, though the specific format and level of detail will need to be worked out in the standardization process outlined in the memo (a minimal illustrative sketch of such a report and threshold check appears at the end of this answer):



  • Name and address of model developer

  • Model ID information (ideally standardized)

  • Indicator of sensitivity of information

  • A full accounting of the dual-use capabilities evaluations run on the model at the training and pre-deployment stages, their results, and details of the size and scope of safety-testing efforts, including parties involved

  • Details on current and planned mitigation measures, including up-to-date incident response plans

  • Information about compute used to train models that have triggered reporting (e.g., amount of compute and training time required, quantity and variety of chips used and networking of compute infrastructure, and the location and provider of the compute)


Some elements would not need to be shared beyond the coordinating office or working group lead (e.g., personal identifying information about parties involved in safety testing or specific details about incident response plans) but would be useful for the coordinating office in triaging reports.


The following information should not be included in reports in the first place since it is commercially sensitive and could plausibly be targeted for theft by malicious actors seeking to develop competing AI systems:



  • Information on model architecture

  • Datasets used in training

  • Training techniques

  • Fine-tuning techniques
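
Purely as an illustration of how such reports and the compute threshold might be represented, the sketch below defines a hypothetical report record and a rough threshold check using the common approximation that training compute is roughly six times the product of parameter count and training tokens. The field names, threshold constant, and approximation are assumptions for illustration; the real format would come from the standardization process described in the memo.

```python
# Hypothetical sketch of a dual-use capability report record and a rough
# compute-threshold check. Field names, the 6 * parameters * tokens FLOP
# approximation, and the threshold constant are illustrative assumptions.
from dataclasses import dataclass, field

REPORTING_THRESHOLD_FLOP = 1e26  # adjustable by the responsible authority

@dataclass
class DualUseCapabilityReport:
    developer_name: str
    developer_address: str
    model_id: str
    sensitivity: str                                      # e.g., "public", "sensitive"
    evaluations: list = field(default_factory=list)       # name, stage, result, parties involved
    mitigations: list = field(default_factory=list)       # current and planned mitigation measures
    training_flop: float = 0.0
    compute_details: dict = field(default_factory=dict)   # chips, networking, location, provider

def requires_mandatory_report(parameters: float, training_tokens: float,
                              threshold: float = REPORTING_THRESHOLD_FLOP) -> bool:
    """Rough check using the common ~6 * parameters * tokens estimate of training FLOPs."""
    estimated_flop = 6 * parameters * training_tokens
    return estimated_flop >= threshold

# Example: a 2-trillion-parameter model trained on 20 trillion tokens (~2.4e26 FLOP).
print(requires_mandatory_report(2e12, 2e13))  # -> True
```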

Shared Classified Commercial Coworking Spaces

The legislation would establish a pilot program for the Department of Defense (DoD) to create classified commercial shared spaces (think WeWork or hotels, but for cleared small businesses and universities), professionalize industrial security protections, and accelerate the integration of new artificial intelligence (AI) technologies into actual warfighting capabilities. While the impact of this pilot program would be felt across the National Security Innovation Base, this issue is particularly pertinent to the small business and start-up community, for whom access to secure facilities is a major impediment to performing and competing for government contracts.

Challenge and Opportunity 

The process of obtaining and maintaining a facility clearance and the appropriate industrial security protections is a major burden on nontraditional defense contractors, and as a result they are often disadvantaged when it comes to performing on and competing for classified work. Over the past decade, small businesses, nontraditional defense contractors, and academic institutions have all successfully transitioned commercial solutions for unclassified government contracts. However, the barriers to entry (cost, complexity, administrative burden, timeline) for classified contracts have prevented similar successes. There have been significant and deliberate policy revisions and strategic pivots by the U.S. government to ignite and accelerate commercial technologies and solutions for government use cases, but similar reforms have not reduced the significant burden these organizations face when trying to secure follow-on classified work.

For small, nontraditional defense companies and universities, creating their own classified facility is a multiyear endeavor, is often cost-prohibitive, and includes coordination among several government organizations. This makes the prospect of building their own classified infrastructure a high-risk investment with an unknown return, thus deterring many of these organizations from competing in the classified marketplace and preventing the most capable technology solutions from rapid integration into classified programs. Similarly, many government contracting officers, in an effort to satisfy urgent operational requirements, only select from vendors with existing access to classified infrastructure because they know how long it takes new entrants to get their own facilities accredited, thus further limiting the available vendor pool and restricting what commercial technologies are available to the government.

In January 2024, the Texas National Security Review published the results of a survey of over 800 companies from the defense industrial base as well as commercial businesses, ranging from small businesses to large corporations. Forty-four percent ranked “accessing classified environments” as the greatest barrier to working with the government. This was amplified in March 2024 during a House Armed Services Committee hearing on “Outpacing China in Defense Innovation,” where Under Secretary for Acquisition and Sustainment William LaPlante, Under Secretary for Research and Engineering Heidi Shyu, and Defense Innovation Unit Director Doug Beck all acknowledged the seriousness of this issue. 

The current government method of approving and accrediting commercial classified facilities is based on individual customers and contracts. This creates significant costs, time delays, and inefficiencies within the system. Reforming the system to allow for a “shared” commercial model will professionalize industrial security protections and accelerate the integration of new AI technologies into actual national security capabilities. While Congress has expressed support for this concept in both the Fiscal Year 2018 National Defense Authorization Act and the Fiscal Year 2022 Intelligence Authorization Act, there has been little measurable progress with implementation. 

Plan of Action 

Congress should pass legislation to create a pilot program under the Department of Defense (DoD) to expand access to shared commercial classified spaces and infrastructure. The DoD will incur no cost for establishing the pilot program, as there is a viable commercial market for this model. Legislative text has been provided and will be socialized with the committees of jurisdiction and relevant congressional members’ offices for support.

Legislative Specifications

SEC XXX – ESTABLISHMENT OF PILOT PROGRAM FOR ACCESS TO SHARED CLASSIFIED COMMERCIAL INFRASTRUCTURE 

(a) ESTABLISHMENT. – Not later than 180 days after the date of enactment of this Act, the Secretary of Defense shall establish a pilot program to streamline access to shared classified commercial infrastructure in order to:

(b) DESIGNATION. – The Secretary of Defense shall designate a principal civilian official responsible for overseeing the pilot program authorized in subsection (a)(1), who shall report directly to the Deputy Secretary of Defense.

(c) REQUIREMENTS. 

(d) DEFINITION. – In this section:

(e) ANNUAL REPORT. – Not later than 270 days after the date of the enactment of this Act and annually thereafter until 2028, the Secretary of Defense shall provide to the congressional defense committees a report on the establishment of this pilot program pursuant to this section, to include:

(f) TERMINATION. – The authority to carry out this pilot program under subsection (a) shall terminate on the date that is five years after the date of enactment of this Act.

Conclusion

Congress must ensure that the nonfinancial barriers that prevent novel commercially developed AI capabilities and emerging technologies from transitioning into DoD and government use are reduced. Access to classified facilities and infrastructure continues to be a major obstacle for small businesses, research institutions, and nontraditional defense contractors working with the government. This pilot program will ensure reforms are initiated that reduce these barriers, professionalize industrial security protections, and accelerate the integration of new AI technologies into actual national security capabilities.

A National Center for AI in Education

There are immense opportunities associated with artificial intelligence (AI), yet it is important to vet the tools, establish threat monitoring, and implement appropriate regulations to guide the integration of AI into an equitable education system. Generative AI in particular is already being used in education, through human resource talent acquisition, predictive systems, personalized learning systems to promote students’ learning, automated assessment systems to support teachers in evaluating what students know, and facial recognition systems to provide insights about learners’ behaviors, just to name a few. Continuous research into AI’s use by teachers and schools is crucial to ensure its positive integration into education systems worldwide and improved outcomes for all. 

Congress should establish a National Center for AI in Education to build the capacity of education agencies to undertake evidence-based continuous improvement in AI in education. It will increase the body of rigorous research and proven solutions in AI use by teachers and students in education. Teachers will use testing and research to develop guidance for AI in education.

Challenge and Opportunity

It should not fall to one single person, group, industry, or country to decide what role AI’s deep learning should play in education—especially when that utility function will play a major role in creating new learning environments and more equitable opportunities for students. 

Teachers need appropriate professional development on using AI not only so they can implement AI tools in their teaching but also so they can impart those skills and knowledge to their students. Survey research from the EdWeek Research Center affirms that teachers, principals, and district leaders recognize the importance of teaching about AI. Most disturbing is the lack of support and guidance around AI that teachers are receiving: 87% of teachers reported receiving zero hours of professional development related to incorporating AI into their work. 

A National Center for AI in Education would transform the current model of how education technology is developed and monitored from a “supply creates the demand” system to a “demand creates the supply” system. Often, education technology resources are developed in isolation from the actual end users, meaning the teachers and students, and this exacerbates inequity. The Center will help to bridge the gap between tech innovators and the classroom, driving innovation and ensuring AI aligns with educational goals.

The collection and use of data in education settings has expanded dramatically in recent decades, thanks to advancements in student information systems, statistical software, and analytic methods, as well as policy frameworks that incentivize evidence generation and use in decision-making. However, this growing body of research all too frequently ignores the effective use of AI in education. The challenges, assets, and context of AI in education vary greatly within states and across the nation. As such, evidence that is generated in real time within school settings should begin to uncover the needs of education related to AI. 

Educators need research, regulation, and policies that are understood in the context of educational settings to effectively inform practice and policy. Students’ preparedness for and transition into college or the workforce is of particular concern, given spatial inequities in the distribution of workforce and higher-education opportunities and the dual imperatives of strengthening student outcomes while ensuring future community vitality. The teaching and use of AI all play into this endeavor.

An analog for this proposal is the National Center for Rural Education Research Networks (NCRERN), an Institute of Education Sciences research and development center that has demonstrated the potential of research networks for generating rigorous, causal evidence in rural settings through multi-site randomized controlled trials. NCRERN’s work leading over 60 rural districts through continuous improvement cycles to improve student postsecondary readiness and facilitate postsecondary transitions generated key insights about how to effectively conduct studies, generate evidence, influence district practice, and improve student outcomes. NCRERN research is used to inform best practices with teachers, counselors, and administrators in school districts, as well as inform and provide guidance for policymaking on state, local, and federal levels.

Another analog is Indiana’s AI-Powered Platform Pilot created by the Indiana Department of Education. The pilot launched during the 2023–2024 school year with 2,500 teachers from 112 schools in 36 school corporations across Indiana using approved AI platforms in their classrooms. More than 45,000 students are impacted by this pilot. A recent survey of teachers in the pilot indicated that 53% rated the overall impact of the AI platform on their students’ learning and their teaching practice as positive or very positive. 

In the pilot, a competitive grant opportunity funds subscription fees and professional development support for high-dosage student tutoring and for reducing teacher workload using an AI platform. The vision for this opportunity is to focus on a cohort of teachers and students in the integration of an AI platform. It might be used to support a specific building, grade level, subject area, or student population. Schools are encouraged to focus on student needs in response to academic impact data.

Plan of Action

Congress should authorize the establishment of a National Center for AI in Education whose purpose is to research and develop guidance for Congress regarding policy and regulations for the use of AI in educational settings. 

Through a competitive grant process, a university should be chosen to house the Center. This Center should be established within three years of enactment by Congress. The winning institution will be selected and overseen by either the Institute of Education Sciences or another office within the Department of Education. The Department of Education and National Science Foundation will be jointly responsible for supporting professional development along with the Center awardee.

The Center should begin as a pilot with teachers selected from five participating states. These PK-12 teachers will be chosen via a selection process developed by the Center. Selected teachers will have expertise in AI technology and education as evidenced by effective classroom use and academic impact data. Additional criteria could include an innovation mindset, willingness to collaborate, knowledge of AI technologies, innovative teaching methods, commitment to professional development, and a passion for improving student learning outcomes. Stakeholders such as students, parents, and policymakers should be involved in the selection process to ensure diverse perspectives are considered.

The National Center for AI in Education’s duties should include, but not be limited to, the activities described below.

Congress should authorize funding for the National Center for AI in Education. Funding should be provided by the federal government to support its research and operations. Plans should be made for a 3–5-year pilot grant as well as a continuation/expansion grant after the first 3–5-year funding cycle. Additional funding may be obtained through grants, donations, and partnerships with private organizations.

The Center should also report on its progress so that Congress can monitor and evaluate its work. The National Center for AI in Education would submit an annual report to Congress detailing its research findings, its advisory and regulatory guidance, and its impact on education. In addition, the Center should be subject to regular evaluation and oversight to ensure its compliance with legislation and regulations.

To begin this work, the National Center for AI in Education will:

  1. Research and develop courses of action for improving AI algorithms to mitigate bias and privacy issues: Regularly reassess AI algorithms used in samples from the Center’s pilot states and school districts and make any necessary adjustments to address those issues.
    1. Incorporate AI technology developers into the feedback loop by establishing partnerships and collaborations. Invite developers to participate in research projects, workshops, and conferences related to AI in education.
  2. Research and highlight promising practices in teaching responsible AI use for students: Teaching about AI is as important as, if not more important than, teaching with AI. Therefore, extensive curriculum research should be done on teaching students how to use AI ethically and effectively to enhance their learning. Incorporate real-world applications of AI into coursework so students are ready to use AI effectively and ethically in the next chapter of their postsecondary journey.
  3. Develop an AI diagnostic toolkit: This toolkit, which should be made publicly available for state agencies and district leaders, will analyze teacher efficacy, students’ grade-level mastery, and students’ postsecondary readiness and success.
  4. Provide professional development for teachers on effective and ethical AI use: Training should include responsible use of generative AI and AI for learning enhancement.
  5. Monitor systems for bias and discrimination: Test tools to identify unintended bias to ensure that they do not perpetuate gender, racial, or social discrimination. Study and recommend best practices and policies.
  6. Develop best practices for ensuring privacy: Ensure that student, family, and staff privacy are not compromised by the use of facial recognition or recommender systems. Protect students’ privacy, data security, and informed consent. Research and recommend policies and IT solutions to ensure privacy compliance.
  7. Curate proven algorithms that protect student and staff autonomy: Predictive systems can limit a person’s ability to act on their own interests and values. The Center will identify and highlight algorithms that are proven not to jeopardize students’ or teachers’ autonomy.

In addition, the National Center for AI in Education will conduct five types of studies: 

  1. Descriptive quantitative studies exploring patterns and predictors of teachers’ and students’ use of AI. These diagnostic studies will draw on district administrative, publicly available, and student survey data. 
  2. Mixed methods case studies describing the context of teachers/schools participating in the Center and how stakeholders within these communities conceptualize students’ postsecondary readiness and success. One case study per pilot state will be used, drawing on survey, focus group, observational, and publicly available data. 
  3. Development evaluations of intervention materials developed by educators and content experts. AI sites/software will be evaluated through district prototyping and user feedback from students and staff. 
  4. Blocked cluster-randomized field trials of at least two AI interventions. The Center will use school-level randomization, blocked on state and other relevant variables, to generate impact estimates for students’ postsecondary readiness and success. The Center will also use the ingredients method to estimate cost-effectiveness (see the sketch after this list). 
  5. Mixed methods implementation studies of at least two AI interventions implemented in real-world conditions. The Center will use intervention artifacts (including notes from participating teachers) as well as surveys, focus groups, and observational data. 
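
To make the analytic approach concrete, the sketch below shows how a school-level impact estimate and an ingredients-method cost figure might be computed. It is a minimal illustration only: the file name, column names (readiness_score, treated, school_id, state), and cost figures are hypothetical assumptions, not the Center’s actual specification.

```python
# Hedged sketch: estimating the impact of an AI intervention in a blocked,
# school-level cluster-randomized trial. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student; schools are the unit of randomization, blocked on state.
df = pd.read_csv("pilot_outcomes.csv")  # hypothetical data file

# Outcome: a postsecondary-readiness score. The blocking variable (state) enters
# as fixed effects; standard errors are clustered at the school level because
# treatment was assigned to whole schools.
model = smf.ols("readiness_score ~ treated + C(state)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
impact = model.params["treated"]  # estimated effect on the readiness score

# Ingredients-method cost-effectiveness: total the resources ("ingredients") the
# intervention consumed, then express cost per treated student per unit of impact.
ingredient_costs = {"software_licenses": 120_000, "teacher_time": 80_000,
                    "training": 40_000}  # illustrative figures only
cost_per_student = sum(ingredient_costs.values()) / df["treated"].sum()
print(f"Impact estimate: {impact:.3f}")
print(f"Cost per student per unit of impact: {cost_per_student / impact:,.0f}")
```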

Findings will be disseminated through briefs targeted at a policy and practitioner audience, academic publications, conference presentations, and convenings with district partners. 

A publicly available AI diagnostic toolkit will be developed for state agencies and district leaders to use to analyze teacher efficacy, students’ grade-level mastery, and students’ postsecondary readiness and success. This toolkit will also serve as a resource for legislators to keep up to date on AI in education. 

Professional development, ongoing coaching, and support to district staff will also be made available to expand capacity for data and evidence use. This multifaceted approach will allow the National Center for AI in Education to expand capacity in research related to AI use in education while having practical impacts on educator practice, district decision-making, and the national field of rural education research. 

Conclusion

The National Center for AI in Education would be valuable for United States education for several reasons. First, it could serve as a hub for research and development in the field, helping to advance our understanding of how AI can be effectively used in educational settings. Second, it could provide resources and support for educators looking to incorporate AI tools into their teaching practices. Third, it could help to inform future policies, as well as standards and best practices for the use of AI in education, ensuring that students are receiving high-quality, ethically sound educational experiences. A National Center for AI in Education could help to drive innovation and improvement in the field, ultimately benefiting students and educators alike.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
What is the initial duration of the proposed project?
Three to five years for the pilot, with plans developed for another three-to-five-year continuation/expansion grant.
What is the estimated initial budget request?
$10 million. This figure parallels the funding allocated for the National Center for Rural Education Research Networks (NCRERN), a project of similar scope.
Why should a university house the Center?
Universities have the necessary capabilities to conduct research and to help create and carry out professional development programs. Additionally, this research could inform teacher preparation programs and the data disseminated across them.
How would this new Center interact with the EdSafeAI alliance or similar coalitions?
The National Center for AI in Education would share research findings widely with all organizations. There could also be opportunities for collaboration.
Would the Center supplant the need for those other coalitions?
No. The Center at its core would be research-based and oriented at street level with teachers and students where the data is created.

Message Incoming: Establish an AI Incident Reporting System

What if an artificial intelligence (AI) lab found their model had a novel dangerous capability? Or a susceptibility to manipulation? Or a security vulnerability? Would they tell the world, confidentially notify the government, or quietly patch it up before release? What if a whistleblower wanted to come forward – where would they go? 

Congress has the opportunity to proactively establish a voluntary national AI Incident Reporting Hub (AIIRH) to identify and share information about AI system failures, accidents, security breaches, and other potentially hazardous incidents with the federal government. This reporting system would be managed by a designated federal agency—likely the National Institute of Standards and Technology (NIST). It would be modeled after successful incident reporting and information-sharing systems operated by the National Cybersecurity FFRDC (funded by the Cybersecurity and Infrastructure Security Agency (CISA)), the Federal Aviation Administration (FAA), and the Food and Drug Administration (FDA). This system would encourage reporting by allowing for confidentiality and guaranteeing that only government agencies could access sensitive AI system specifications.

AIIRH would provide a standardized and systematic way for companies, researchers, civil society, and the public to provide the federal government with key information on AI incidents, enabling analysis and response. It would also provide the public with some access to these data in a reliable way, due to its statutory mandate – albeit often with less granularity than the government will have access to. Nongovernmental and international organizations, including the Responsible AI Collaborative (RAIC) and the Organisation for Economic Co-operation and Development (OECD), already maintain incident reporting systems, cataloging incidents such as facial recognition systems identifying the wrong person for arrest and trading algorithms causing market dislocations. However, these two systems have a number of limitations in their scope and reliability that make them more suitable for public accountability than government use. 

By establishing this system, Congress can enable better identification of critical AI risk areas before widespread harm occurs. This proposal would help build public trust and, if implemented successfully, would help relevant agencies recognize emerging patterns and take preemptive action through standards, guidance, notifications, or rulemaking.

Challenge and Opportunity

While AI systems have the potential to produce significant benefits across industries like healthcare, education, environmental protection, finance, and defense, they are also potentially capable of serious harm to individuals and groups. It is crucial that the federal government understand the risks posed by AI systems and develop standards, best practices, and legislation around their use. 

AI risks and harms can take many forms, from representational (such as women CEOs being underrepresented in image searches), to financial (such as automated trading systems or AI agents crashing markets), to possibly existential (such as through the misuse of AI to advance chemical, biological, radiological, and nuclear (CBRN) threats). As these systems become more powerful and interact with more aspects of the physical and digital worlds, a material increase in risk is all but inevitable in the absence of a sensible governance framework. However, in order to craft public policy that maximizes the benefits of AI and ameliorates harms, government agencies and lawmakers must understand the risks these systems pose.

There have been notable efforts by agencies to catalog types of risks, such as NIST’s 2023 AI Risk Management Framework, and to combat the worst of them, such as the Department of Homeland Security’s (DHS) efforts to mitigate AI CBRN threats. However, the U.S. government does not yet have an adequate resource to track and understand specific harmful AI incidents that have occurred or are likely to occur in the real world. While entities like the RAIC and the OECD manage AI incident reporting efforts, these systems primarily collect publicly reported incidents from the media, which are likely a small fraction of the total. These databases serve more as a source of public accountability for developers of problematic systems than a comprehensive repository suitable for government use and analysis. The OECD system lacks a proper taxonomy for different incident types and contexts, and while the RAIC database applies two external taxonomies to their data, it only does so at an aggregated level. Additionally, the OECD and RAIC systems depend on their organizations’ continued support, whereas AIIRH would be statutorily guaranteed. 

The U.S. government should do all it can to facilitate reporting of AI incidents and risks that is as comprehensive as possible, enabling policymakers to make informed decisions and respond flexibly as the technology develops. As it has done in the cybersecurity space, it is appropriate for the federal government to act as a focal point for the collection, analysis, and dissemination of data that is nationally distributed, is multi-sectoral, and has national impacts. Many federal agencies are also equipped to appropriately handle sensitive and valuable data, as is the case with AI system specifications. Compiling this kind of comprehensive dataset would constitute a national public good.

Plan of Action

We propose a framework for a voluntary Artificial Intelligence Incident Reporting Hub, inspired by existing public initiatives in cybersecurity, like the list of Common Vulnerabilities and Exposures (CVE)1 funded by CISA, and in aviation, like the FAA’s confidential Aviation Safety Reporting System (ASRS). 

AIIRH should cover a broad swath of what could be considered an AI incident in order to give agencies maximal data for setting standards, establishing best practices, and exploring future safeguards. Since there is no universally agreed-upon definition of an AI safety “incident,” AIIRH would (at least initially) utilize the OECD definitions of “AI incident” and “AI hazard,” as follows:

With this scope, the system would cover a wide range of confirmed harms and situations likely to cause harm, including dangerous capabilities like CBRN threats. Having an expansive repository of incidents also sets up organizations like NIST to create and iterate on future taxonomies of the space, unifying language for developers, researchers, and civil society. This broad approach does introduce overlap with the expanded CVE and National Vulnerability Database (NVD) systems for voluntary cybersecurity incident reporting proposed by Senators Warner and Tillis in their Secure AI Act. However, the CVE provides no analysis of incidents, so it should be viewed instead as a starting point to be fed into the AIIRH2, and the NVD only applies traditional cybersecurity metrics, whereas the AIIRH could accommodate a much broader holistic analysis.

Reporting submitted to AIIRH should highlight key issues, including whether the incident occurred organically or as the result of intentional misuse. Details of harm either caused or deemed plausible should also be provided. Importantly, reporting forms should allow maximum information but require as little as possible in order to encourage industry reporting without fear of leaking sensitive information and lower the implied transaction costs of reporting. While as much data on these incidents as possible should be broadly shared to build public trust, there should be guarantees that any confidential information and sensitive system details shared remain secure. Contributors should also have the option to reveal their identity only to AIIRH staff and otherwise maintain anonymity.
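
As a concrete illustration of this design, the sketch below outlines what a structured submission record might look like, with a small set of required fields and optional fields for sensitive detail. The field names, categories, and required/optional split are hypothetical assumptions for illustration, not an adopted AIIRH schema.

```python
# Hedged sketch of a possible AIIRH submission record. Field names, categories,
# and the required/optional split are hypothetical, not an adopted schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class IncidentType(Enum):
    INCIDENT = "confirmed harm occurred"        # roughly the OECD "AI incident"
    HAZARD = "harm plausible but not realized"  # roughly the OECD "AI hazard"

@dataclass
class IncidentReport:
    # Minimal required fields, kept small to lower the cost of reporting.
    incident_type: IncidentType
    summary: str                       # plain-language description of what happened
    intentional_misuse: bool           # organic failure vs. deliberate misuse
    # Optional fields for richer analysis; none are required to submit.
    harm_description: Optional[str] = None
    system_details: Optional[str] = None      # sensitive specs, held confidentially
    reporter_identity: Optional[str] = None   # visible to AIIRH staff only
    share_redacted_with_contributors: bool = False  # opt-in data exchange
    tags: list[str] = field(default_factory=list)

report = IncidentReport(
    incident_type=IncidentType.HAZARD,
    summary="Model produced step-by-step synthesis guidance during red-teaming.",
    intentional_misuse=False,
)
```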

NIST is the natural candidate to function as the reporting agency, as it has taken a larger role in AI standards setting since the release of the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. NIST also has experience with incident reporting through its NVD, which contains agency experts’ analysis of CVE incidents. Finally, similar to how the National Aeronautics and Space Administration (NASA) operates the FAA’s confidential reporting system, ASRS, as a neutral third party, NIST is a non-enforcing agency with excellent industry relationships due to its collaborations on standards and practices. CISA is another option, as it funds and manages several incident reporting systems, including, if the Warner-Tillis bill passes, one covering AI security, but there is no reason to believe CISA has the expertise to address harms caused by things like algorithmic discrimination or CBRN threats. 

While NIST might be a trusted party to maintain a confidential system, employees reporting credible threats to AIIRH should have additional guarantees against retaliation from their current/former employers in the form of whistleblower protections. These are particularly relevant in light of reports that OpenAI, an AI industry leader, is allegedly neglecting safety and preventing employee disclosure through restrictive nondisparagement agreements. A potential model could be whistleblower protections introduced in California SB1047, where employers are forbidden from preventing, or retaliating based upon, the disclosure of an AI incident to an appropriate government agent. 

In order to further incentivize reporting, contributors may be granted advanced, real-time, or more complete access to the AIIRH reporting data. While the goal is to encourage the active exchange of threat vectors, in acknowledgment of the aforementioned confidentiality issues, reporters could opt out from having their data shared in this way, forgoing their own advanced access. If they allow a redacted version of their incident to be shared anonymously with other contributors, they could still maintain access to the reporting data.

Key stakeholders include: 

Related proposed bills include:

The proposal is likely to require congressional action to appropriate funds for the creation and implementation of the AIIRH. It would require an estimated $10–25 million annually to create and maintain AIIRH, with the pay-for to be determined.3

Conclusion

An AI Incident Reporting System would enable informed policymaking as the risks of AI continue to develop. By allowing organizations to report information on serious risks that their systems may pose in areas like CBRN, illegal discrimination, and cyber threats, this proposal would enable the U.S. government to collect and analyze high-quality data and, if needed, promulgate standards to prevent the proliferation of dangerous capabilities to non-state actors. By incentivizing voluntary reporting, we can preserve innovative and high-value uses of AI for society and the economy, while staying up-to-date with the quickly evolving frontier in cases where regulatory oversight is paramount.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
Why house AIIRH at NIST?

NIST has institutional expertise with incident reporting, having maintained the National Vulnerability Database and Disaster Data Portal. NIST’s role as a standard-setting body leaves it ideally placed to keep pace with developments in new areas of technology. This role as a standard-setting body that frequently collaborates with companies, while not regulating them, allows it to act as a trusted home for cross-industry collaboration on sensitive issues. In the Biden Administration’s Executive Order on AI, NIST was given authority over establishing testbeds and guidance for testing and red-teaming of AI systems, making it a natural home for the closely related work here.

What kinds of follow-up, if any, will be conducted after an initial incident report?

AIIRH staff shall be empowered to conduct follow-ups on credible threat reports, and to share information on those reports with leadership at the Department of Commerce, the Department of Homeland Security, the Department of Defense, and other agencies.

What could come next after these reports?

AIIRH staff could work with others at NIST to build a taxonomy of AI incidents, which would provide a helpful shared language for standards and regulations. Additionally, staff might share incidents as relevant with interested offices like CISA, Department of Justice, and the Federal Trade Commission, although steps should be taken to minimize retribution against organizations who voluntarily disclosed incidents (in contrast to whistleblower cases).

Why would organizations use a voluntary reporting system?

Similar to the logic of companies disclosing cybersecurity vulnerabilities and incidents, voluntary reporting builds public trust, earns companies favor with enforcement agencies, and increases safety broadly across the community. The confidentiality guarantees provided by AIIRH should make the prospect more appealing as well. Separately, individuals at organizations like OpenAI and Google have demonstrated a propensity towards disclosure through whistleblower complaints when they believe their employers are acting unsafely.

Addressing the Disproportionate Impacts of Student Online Activity Monitoring Software on Students with Disabilities

Student activity monitoring software is widely used in K-12 schools and has been employed to address student mental health needs. Education technology companies have developed algorithms using artificial intelligence (AI) that seek to detect risk for harm or self-harm by monitoring students’ online activities. This type of software can track student logins, view the contents of a student’s screen in real time, monitor or flag web search history, or close browser tabs for off-task students. While teachers, parents, and students largely report that the benefits of student activity monitoring outweigh the risks, there is still a need to address the ways that student privacy might be compromised and to avoid perpetuating existing inequities, especially for students with disabilities. 

To address these issues, Congress and federal agencies should:

Challenge and Opportunity

People with disabilities have long benefited from technological advances. For decades, assistive technology, ranging from low tech to high tech, has helped students with disabilities with learning. AI tools hold promise for making lessons more accessible. A recent EdWeek survey of principals and district leaders showed that most schools are considering using AI tools, actively exploring their use, or piloting them. The special education research community at large, including researchers at the Center for Innovation, Design and Digital Learning (CIDDL), sees both the immense potential and the risks of AI in educating students with disabilities. CIDDL states:

“AI in education has the potential to revolutionize teaching and learning through personalized education, administrative efficiency, and innovation, particularly benefiting (special) education programs across both K-12 and Higher Education. Key impacts include ethical issues, privacy, bias, and the readiness of students and faculty for AI integration.”

At the same time, AI-based student online activity monitoring software is being employed more universally to monitor and surveil what students are doing online. In K-12 schools, AI-based student activity monitoring software is widespread – nearly 9 in 10 teachers say that their school monitors students’ online activities. 

Schools have employed these technologies to attempt to address student mental health needs, such as referring flagged students to counseling or other services. These practices have significant implications for students with disabilities, as they are at higher risk for mental health issues. In 2024, NCLD surveyed 1,349 young adults ages 18 to 24 and found that nearly 15% of individuals with a learning disability had a mental health diagnosis and 45% of respondents indicated that having a learning disability negatively impacts their mental health. Knowing these risks for this population, careful attention must be paid to ensure mental health needs are being identified and appropriately addressed through evidence-based supports. 

Yet there is little evidence supporting the efficacy of this software. Researchers at RAND, through review of peer-reviewed and gray literature as well as interviews, raise issues with the software, including threats to student privacy, the difficulty families face in opting out, algorithmic bias, and escalation of situations to law enforcement. The Center for Democracy & Technology (CDT) conducted research highlighting that students with disabilities are disproportionately impacted by these AI technologies. For example, licensed special education teachers are more likely to report knowing students who have gotten in trouble and been contacted by law enforcement due to student activity monitoring. Other CDT polling found that 61% of students with learning disabilities report that they do not share their true thoughts or ideas online because of monitoring. 

We also know that students with disabilities are almost three times more likely to be arrested than their nondisabled peers, with Black and Latino male students with disabilities being the most at risk of arrest. Interactions with law enforcement, especially for students with disabilities, can be detrimental to health and education. Because people with disabilities have protections under civil rights laws, including the right to a free appropriate public education in school, actions must be taken. 

Parents are also increasingly concerned about subjecting their children to greater monitoring both in and outside the classroom, leading to decreased support for the practice: 71% of parents report being concerned with schools tracking their children’s location and 66% are concerned with their children’s data being shared with law enforcement (including 78% of Black parents). Concern about student data privacy and security is higher among parents of children with disabilities (79% vs. 69%). Between the 2021–2022 and 2022–2023 school years, parent and student support of student activity monitoring fell 8% and 11%, respectively. 

Plan of Action

Recommendation 1. Improve data collection.

While data collected by private research entities like RAND and CDT captures some important information on this issue, the federal government should collect its own data to capture the extent to which these technologies might be misused. Polling data, like the CDT survey of 2,000 teachers referenced above, provides a snapshot and has been influential in raising immediate concerns about the procurement of student activity monitoring software. However, the federal government is currently not collecting larger-scale data about this issue, and members of Congress, such as Senators Markey and Warren, have relied on CDT’s data in their investigations because of the absence of federal datasets.

To do this, Congress should charge the National Center for Education Statistics (NCES) within the Institute of Education Sciences (IES) with collecting large-scale data from local education agencies to examine the impact of digital learning tools, including student activity monitoring software. IES should collect data on students disaggregated by the student subgroups described in section 1111(b)(2)(B)(xi) of the Elementary and Secondary Education Act of 1965 (20 U.S.C. 6311(b)(2)(B)(xi)) and disseminate its findings to state and local educational agencies and other appropriate entities. 
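
A minimal sketch of the kind of disaggregated analysis such a collection would enable follows; the data file, column names, and subgroup coding are hypothetical assumptions, not an NCES specification.

```python
# Hedged sketch: disaggregating monitoring-software outcomes by student subgroup.
# The file and column names (subgroup, flagged_by_monitor, referred_to_le) are
# hypothetical placeholders for whatever data elements are ultimately collected.
import pandas as pd

records = pd.read_csv("lea_monitoring_data.csv")  # hypothetical LEA submission

rates = (records
         .groupby("subgroup")[["flagged_by_monitor", "referred_to_le"]]
         .mean()  # assumes 0/1 indicator columns, so the mean is a rate
         .rename(columns={"flagged_by_monitor": "flag_rate",
                          "referred_to_le": "law_enforcement_referral_rate"}))
print(rates.sort_values("law_enforcement_referral_rate", ascending=False))
```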

Recommendation 2. Enhance parental notification and ensure a free appropriate public education.

Families and communities are not being appropriately informed about the use, or potential for misuse, of technologies installed on school-issued devices and accounts. At the start of the school year, schools should notify parents about what technologies are used, how and why they are used, and alert them of any potential risks associated with them. 

Congress should require school districts to notify parents annually, as they do with other Title I programs as described in Sec. 1116 of the Elementary and Secondary Education Act of 1965 (20 U.S.C. 6318), including “notifying parents of the policy in an understandable and uniform format and, to the extent practicable, provided in a language the parents can understand” and that “such policy shall be made available to the local community and updated periodically to meet the changing needs of parents and the school.”

For students with disabilities specifically, the Individuals with Disabilities Education Act (IDEA) provides procedural safeguards to parents to ensure they have certain rights and protections so that their child receives a free appropriate public education (FAPE). To implement IDEA, schools must convene an Individualized Education Program (IEP) team, and the IEP should outline the academic and/or behavioral supports and services the child will receive in school and include a statement of the child’s present levels of academic achievement and functional performance, including how the child’s disability affects the child’s involvement and progress in the general education curriculum. The U.S. Department of Education should provide guidance about how to leverage the current IEP process to notify parents of the technologies in place in the curriculum and use the IEP development process as a mechanism to identify which mental health supports and services a student might need, rather than relying on conclusions from data produced by the software. 

In addition, IDEA regulations address instances of significant disproportionality of children with disabilities who are students of color, including in disciplinary referrals and exclusionary discipline (which may include referral to law enforcement). Because of this long history of disproportionate disciplinary actions, and because special educators are more likely to report knowing students who have gotten in trouble and been contacted by law enforcement due to student activity monitoring, questions arise about whether these incidents result in a loss of instructional time for students with disabilities and, in turn, a potential violation of FAPE. The Department of Education should provide guidance clarifying that such disproportionate discipline might result from the use of student activity monitoring software and explaining how to mitigate referrals to law enforcement for students with disabilities. 

Recommendation 3. Invest in the Office for Civil Rights within the U.S. Department of Education.

The Office for Civil Rights (OCR) currently receives $140 million and is responsible for investigating and resolving civil rights complaints in education, including allegations of discrimination based on disability status. FY2023 saw a continued increase in complaints filed with OCR, at 19,201 complaints received. The total number of complaints has almost tripled since FY2009, and during this same period OCR’s number of full-time equivalent staff decreased by about 10%. Typically, the majority of complaints received have raised allegations regarding disability.

Congress should double its appropriations for OCR, raising it to $280 million. A robust investment would give OCR the resources to address complaints alleging discrimination that involve an educational technology software, program, or service, including AI-driven technologies. With greater resources, OCR can initiate greater enforcement efforts against potential violations of civil rights law and work with the Office of Educational Technology to provide guidance to schools on how to fulfill civil rights obligations. 

Recommendation 4. Support state and local education agencies with technical assistance.

State education agencies (SEAs) and local education agencies (LEAs) are facing enormous challenges in responding to the rapidly changing market of education technologies. States and districts are inundated with products from vendors and often do not have the technical expertise to differentiate between them. When education technology initiatives and products are not conceived, designed, procured, implemented, or evaluated with the needs of all students in mind, technology can exacerbate existing inequalities. 

To support states and school districts in procuring, implementing, and developing state and local policy, the federal government should invest in a national center to provide robust technical assistance focused on safe and equitable adoption of schoolwide AI technologies, including student online activity monitoring software. 

Conclusion

AI technologies will have an enormous impact on public education. Yet if we do not implement these technologies with students with disabilities in mind, we risk furthering their marginalization. Both Congress and the U.S. Department of Education can play an important role by developing policy and guidance and by providing the resources needed to combat the harms posed by these technologies. NCLD looks forward to working with decision makers to take action to protect the civil rights of students with disabilities and ensure responsible use of AI technologies in schools.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
Why is the Institute of Education Sciences (IES) the right entity to collect such data?
The IES has invested in research to advance AI technologies used in education and coordinated with the National Science Foundation to advance AI-driven research and innovations for learners with or at risk for disabilities, demonstrating a clear commitment to investing in experimental studies that incorporate AI into instruction and piloting new technologies. While this research is important and will help shape the future of teaching and learning, especially for disabled students, additional data and research are needed to fully evaluate the extent to which AI tools already used in schools are impacting students.
What would be the focus of the proposed technical assistance (TA) center?

This TA center could provide guidance to states and local education agencies that lack the capacity and subject matter expertise for both procurement and implementation. It could coordinate its services and resources with existing TA centers, such as the T4PA Center or the Regional Educational Laboratories, on how to invest in evidence-based mental health supports in schools and communities, including using technology in ways that mitigate discrimination and bias.


As of February 2024, seven states had published AI guidelines (reviewed and collated by Digital Promise). While these broadly recognize the need for policies and guidelines to ensure that AI is used safely and ethically, none explicitly mention the use of student activity monitoring AI software.

Why should the Office of Civil Rights (OCR) be funded at a level of at least $280 million?

This is a funding level requested in other bills seeking to increase OCR’s capacity such as the Showing Up For Students Act. OCR is projecting 23,879 complaint receipts in FY2025. Excluding projected complaints filed by a single complainant, this number is expected to be 22,179 cases. Without staffing increases in FY2025, the average caseload per investigative staff will become unmanageable at 71 cases per staff (22,179 projected cases divided by 313 investigative staff).

How does this proposal fit into the larger landscape of congressional and administrative attention to this issue?

In late 2023, the Biden-Harris Administration issued an Executive Order on AI. Also that fall, Senate Health, Education, Labor, and Pensions (HELP) Committee Ranking Member Bill Cassidy (R-LA) released a White Paper on AI and requested stakeholder feedback on the impact of AI and the issues within his committee’s jurisdiction.


U.S. House of Representatives members Lori Trahan (D-MA) and Sara Jacobs (D-CA), among others, also recently asked Secretary of Education Miguel Cardona to provide information on the OCR’s understanding of the impacts of educational technology and artificial intelligence in the classroom.


Last, Senate Majority Leader Chuck Schumer (D-NY) and Senator Todd Young (R-IN) issued a bipartisan Roadmap for Artificial Intelligence Policy that calls for $32 billion annual investment in research on AI. While K-12 education has not been a core focal point within ongoing legislative and administrative actions on AI, it is imperative that the federal government take the necessary steps to protect all students and play an active role in upholding federal civil rights and privacy laws that protect students with disabilities. Given these commitments from the federal government, there is a ripe opportunity to take action to address the issues of student privacy and discrimination that these technologies pose.

What existing laws should policymakers consider improving the implementation of, or working to uphold, as existing statutory protections?

Individuals with Disabilities Education Act (IDEA): IDEA is the law that ensures students with disabilities receive a free appropriate public education (FAPE). IDEA regulations require states to collect data and examine whether significant disproportionality based on race and ethnicity is occurring with respect to the incidence, duration, and type of disciplinary action, including suspensions and expulsions. Guidance from the Department of Education in 2022 emphasized that schools are required to provide behavioral supports and services to students who need them in order to ensure FAPE. It also stated that “a school policy or practice that is neutral on its face may still have the unjustified discriminatory effect of denying a student with a disability meaningful access to the school’s aid, benefits, or services, or of excluding them based on disability, even if the discrimination is unintentional.”


Section 504 of the Rehabilitation Act: This civil rights statute protects individuals from discrimination based on their disability. Any school that receives federal funds must abide by Section 504, and some students who are not eligible for services under IDEA may still be protected under this law (these students usually have a “504 plan”). As the Department of Education works to update the regulations for Section 504, the implications of surveillance software on the civil rights of students with disabilities should be considered.


Elementary and Secondary Education Act (ESEA) Title I and Title IV-A: Title I of the Elementary and Secondary Education Act (ESEA) provides funding to public schools and requires states and public school systems to hold public schools accountable for monitoring and improving achievement outcomes for students and closing achievement gaps between subgroups like students with disabilities. One requirement under Title I is to notify parents of certain policies the school has and actions the school will take throughout the year. As a part of this process, schools should notify families of any school monitoring policies that may be used for disciplinary actions. The Title IV-A program within ESEA provides funding to states (95% of which must be allocated to districts) to improve academic achievement in three priority content areas, including activities to support the effective use of technology. This may include professional development and learning for educators around educational technology, building technology capacity and infrastructure, and more.


Family Educational Rights and Privacy Act (FERPA): FERPA protects the privacy of students’ educational records (such as grades and transcripts) by preventing schools or teachers from disclosing students’ records while allowing caregivers access to those records to review or correct them. However, the information from computer activity on school-issued devices or accounts is not usually considered an education record and is thus not subject to FERPA’s protections.


Children’s Online Privacy Protection Act (COPPA): COPPA requires operators of commercial websites, online services, and mobile apps to notify parents and obtain their consent before collecting any personal information on children under the age of 13. The aim is to give parents more control over what information is collected from their children online. The law regulates companies, not schools.

About the National Center for Learning Disabilities

We are working to improve the lives of individuals with learning disabilities and attention issues—by empowering parents and young adults, transforming schools, and advocating for equal rights and opportunities. We actively work to shape local and national policy to reduce barriers and ensure equitable opportunities and accessibility for students with learning disabilities and attention issues. Visit ncld.org to learn more.

Establish Data-Sharing Standards for the Development of AI Models in Healthcare

The National Institute of Standards and Technology (NIST) should lead an interagency coalition to produce standards that enable third-party research and development on healthcare data. These standards, governing data anonymization, sharing, and use, have the potential to dramatically expedite the development and adoption of medical AI technologies across the healthcare sector.

Challenge and Opportunity

The rise of large language models (LLMs) has demonstrated the predictive power and nuanced understanding that comes from large datasets. Recent work in multimodal learning and natural language understanding has made complex problems—for example, predicting patient treatment pathways from unstructured health records—feasible. A study by Harvard estimated that the wider adoption of AI automation would reduce U.S. healthcare spending by $200 billion to $360 billion annually and reduce spending by public payers, such as Medicare, Medicaid, and the VA, by five to seven percent, across both administrative and medical costs.

However, the practice of healthcare, while information-rich, is incredibly data-poor. There is not nearly enough medical data available for large-scale learning, particularly when focusing on the continuum of care. We generate terabytes of medical data daily, but this data is fragmented and hidden, held captive by lack of interoperability.

Currently, privacy concerns and legacy data infrastructure create significant friction for researchers working to develop medical AI. Each research project must build custom infrastructure to access data from each and every healthcare system. Even absent infrastructural issues, hospitals and health systems face liability risks by sharing data; there are no clear guidelines for sufficiently deidentifying data to enable safe use by third parties.

There is an urgent need for federal action to unlock data for AI development in healthcare. AI models trained on larger and more diverse datasets improve substantially in accuracy, safety, and generalizability. These tools can transform medical diagnosis, treatment planning, drug development, and health systems management.

New NIST standards governing the anonymization, secure transfer, and approved use of healthcare data could spur collaboration. AI companies, startups, academics, and others could responsibly access large datasets to train more advanced models.

Other nations are already creating such data-sharing frameworks, and the United States risks falling behind. The United Kingdom has facilitated a significant volume of public-private collaborations through its establishment of Trusted Research Environments. Australia has a similar offering in its SURE (Secure Unified Research Environment). Finland has the Finnish Social and Health Data Permit Authority (Findata), which houses and grants access to a centralized repository of health data. But the United States lacks a single federally sponsored protocol and research sandbox. Instead, we have a hodgepodge of offerings, ranging from the federal National COVID Cohort Collaborative Data Enclave to private initiatives like the ENACT Network.

Without federal guidance, many providers will remain reticent to participate or will provide data in haphazard ways. Researchers and AI companies will lack the data required to push boundaries. By defining clear technical and governance standards for third-party data sharing, NIST, in collaboration with other government agencies, can drive transformative impact in healthcare.

Plan of Action

The effort to establish this set of guidelines will be structurally similar to previous standard-setting projects by NIST, such as the Cryptographic Standards or Biometric Standards Program. Using those programs as examples, we expect the effort to require around 24 months and $5 million in funding. 

Assemble a Task Force

This standards initiative could be established under NIST’s Information Technology Laboratory, which has expertise in creating data standards. However, in order to gather domain knowledge, partnerships with agencies like the Office of the National Coordinator for Health Information Technology (ONCHIT), Department of Health and Human Services (HHS), the National Institutes of Health (NIH), the Centers for Medicare & Medicaid Services (CMS), and the Agency for Healthcare Research and Quality (AHRQ) would be invaluable.

Draft the Standards

Data sharing would require standards at three levels: syntactic (data formats and exchange), semantic (shared meaning), and governance (fair, privacy-preserving, and effective use). 

Syntactic regulations already exist through standards like HL7/FHIR. Semantic formats exist as well, in standards like the Observational Medical Outcomes Partnership’s Common Data Model. We propose to develop the final class of standards, governing fair, privacy-preserving, and effective use.
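
To make the layering concrete, the sketch below shows a single, simplified record; it is an illustrative, FHIR-style structure rather than a conformant resource, and the identifiers and values are assumptions for illustration only.

```python
# Hedged sketch: the three layers of standards applied to a single record.
# This is an illustrative, FHIR-style dictionary, not a conformant FHIR resource.
# The syntactic layer fixes the structure, the LOINC code carries shared meaning
# (semantic layer), and the governance layer proposed here would specify how such
# a record may be de-identified and used by third parties.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "4548-4",
                         "display": "Hemoglobin A1c"}]},
    "subject": {"reference": "Patient/example-id"},  # re-identifying; use governed
    "valueQuantity": {"value": 6.8, "unit": "%"},
}
```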

The governance standards could cover:

  1. Data Anonymization (see the sketch after this list)
  2. Secure Data Transfer Protocols
  3. Approved Usage
  4. Public-Private Coordination
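
To indicate the flavor of what the anonymization component might operationalize, a minimal sketch follows. It assumes hypothetical column names and a simple generalization scheme; real standards would be far more detailed, covering expert determination, re-identification risk, and formal privacy models.

```python
# Hedged sketch of basic de-identification steps an anonymization standard might
# codify: suppress direct identifiers and generalize quasi-identifiers. Column
# names are hypothetical; this is illustrative, not HIPAA Safe Harbor guidance.
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "ssn", "mrn", "street_address", "phone"]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    # Generalize quasi-identifiers: bucket ages and truncate ZIP codes to 3 digits.
    out["age_band"] = pd.cut(out["age"], bins=[0, 18, 45, 65, 90, 120])
    out["zip3"] = out["zip"].astype(str).str[:3]
    return out.drop(columns=["age", "zip"])

def smallest_group(df: pd.DataFrame, quasi=("age_band", "zip3")) -> int:
    # A simple k-anonymity check: the size of the rarest quasi-identifier combination.
    return int(df.groupby(list(quasi), observed=True).size().min())
```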

Revise with Public Comment

After releasing the first draft of the standards, the task force should seek input from stakeholders and the public. In particular, these groups are likely to have constructive input: 

Implement and Incentivize

After publishing the final standards, the task force should promote their adoption and incentivize public-private partnerships. The HHS Office for Civil Rights should issue regulatory guidance, as allowable under HIPAA, so that these guidance documents can be used as a means of meeting regulatory requirements. These standards could initially be adopted by public health data sources, such as CMS, or NIH grants could mandate participation as part of recently launched public disclosure and data-sharing requirements.

Conclusion

Developing standards for collaboration on health AI is essential for the next generation of healthcare technologies.

All the pieces are already in place. The HITECH Act and the Office of the National Coordinator for Health Information Technology give grants to Regional Health Information Exchanges precisely to enable this kind of exchange. This effort directly aligns with the administration’s priority of leveraging AI and data for the national good and with the White House’s recent statement on advancing healthcare AI. Collaborative protocols like these also move us toward the vision of an interoperable health system—and better outcomes for all Americans.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
How can we maintain patient privacy when sharing data with third parties?
Sharing data with third parties is not new. Researchers and companies often engage in data-sharing agreements with medical centers or payors. However, these agreements are usually specialized and created ad hoc. This new regulation aims to standardize and scale such data-sharing agreements while still protecting patient privacy. Existing standards, such as HIPAA, may be combined with emerging technologies, like homomorphic encryption, differential privacy, or secure multi-party computation, to spur innovation without sacrificing patient privacy.
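
As one small illustration of how an emerging technique such as differential privacy works in practice, the sketch below adds calibrated noise to an aggregate statistic before release; the epsilon value and the query are assumptions for illustration only, not a prescribed configuration.

```python
# Hedged sketch: releasing an aggregate count under epsilon-differential privacy
# using the Laplace mechanism. The epsilon value and query are illustrative.
import numpy as np

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    sensitivity = 1.0  # adding or removing one patient changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g., the number of patients in a cohort matching some criterion
print(dp_count(true_count=1_204, epsilon=0.5))
```
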
Why is NIST the right body for this work, rather than a group like HHS, ONCHIT, or CMS?

Collaboration among several agencies is essential to the design and implementation of these standards. We envision NIST working closely with counterparts at HHS and other agencies. However, we think that NIST is the best agency to lead this coalition due to its rich technical expertise in emerging technologies.


NIST has been responsible for several landmark technical standards, such as the NIST Cloud Computing Reference Architecture, and has previously done related work in its report on deidentification of personal information and extensive work on assisting adoption of the HL7 data interoperability standard.


NIST has the necessary expertise for drafting and developing data anonymization and exchange protocols and, in collaboration with the HHS, ONCHIT, NIH, AHRQ, and industry stakeholders, will have the domain knowledge to create useful and practical standards.

How does this differ from HL7?
HL7 and FHIR are data exchange protocols for healthcare information, maintained by the nonprofit HL7 International. Both HL7 and FHIR play critical roles in enabling interoperability across the healthcare ecosystem. However, they primarily govern data formats and exchange protocols between systems, rather than specifying standards around data anonymization and responsible sharing with third parties like AI developers.

Establish a Teacher AI Literacy Development Program

The rapid advancement of artificial intelligence (AI) technology necessitates a transformation in our educational systems to equip the future workforce with necessary AI skills, starting with our K-12 ecosystem. Congress should establish a dedicated program within the National Science Foundation (NSF) to provide ongoing AI literacy training specifically for K-12 teachers and pre-service teachers. The proposed program would ensure that all teachers have the necessary knowledge and skills to integrate AI into their teaching practices effectively.

Challenge and Opportunity

Generative artificial intelligence (GenAI) has emerged as a profoundly disruptive force reshaping the landscape of nearly every industry. This seismic shift demands a corresponding transformation in our educational systems to prepare the next generation effectively. Central to this transformation is building a robust GenAI literacy among students, which begins with equipping our educators. Currently, the integration of GenAI technologies in classrooms is outpacing the preparedness of our teachers, with less than 20% feeling adequately equipped to utilize AI tools such as ChatGPT. Moreover, only 29% have received professional development in relevant technologies, and only 14 states offer any guidance on GenAI implementation in educational settings at the time of this writing.

The urgency for federal intervention cannot be overstated. Without it, there is a significant risk of exacerbating educational and technological disparities among students, which could hinder their readiness for future job markets dominated by AI. It is of particular importance that AI literacy training is deployed equitably to counter the disproportionate impact of AI and automation on women and people of color. McKinsey Global Institute reported in 2023 that women are 1.5 times more likely than men to experience job displacement by 2030 as a result of AI and automation. A previous study by McKinsey found that Black and Hispanic/Latino workers are at higher risk of occupational displacement than any other racial demographic. This proposal seeks to address the critical deficit in AI literacy among teachers, which, if unaddressed, will leave our students ill-prepared for an AI-driven world.

The opportunity before us is to establish a government program that will empower teachers to stay relevant and adaptable in an evolving educational landscape. This will not only enhance their professional development but also ensure they can provide high-quality education to their students. Teachers equipped with AI literacy skills will be better prepared to educate students on the importance and applications of AI. This will help students develop critical skills needed for future careers, fostering a workforce that is ready to meet the demands of an AI-driven economy. 

Plan of Action

To establish the NSF Teacher AI Literacy Development Program, Congress should first pass a defining piece of legislation that will outline the program’s purpose, delineate its extent, and allocate necessary funding. 

An initial funding allocation, as specified by the authorizing legislation, will be directed toward establishing the program’s operations. This funding will cover essential aspects such as staffing, the initial setup of the professional development resource hub, and the development of incentive programs for states. 

Key responsibilities of the program include:

Develop comprehensive AI literacy standards for K-12 teachers through a collaborative process involving educational experts, AI specialists, and teachers. These standards could be developed directly by the federal government as a model for states to consider adopting or compiled from existing resources set by reputable organizations, such as the International Society for Technology in Education (ISTE) or UNESCO.

Compile a centralized digital repository of AI literacy resources, including training materials, instructional guides, best practices, and case studies. These resources will be curated from leading educational institutions, AI research organizations, and technology companies. The program would establish partnerships with universities, education technology companies, and nonprofits to continuously update and expand the resource hub with the latest tools and research findings.

Design a comprehensive grant program to support the development and implementation of AI literacy programs for both in-service and pre-service teachers. The program would outline the criteria for eligibility, application processes, and evaluation metrics to ensure that funds are distributed effectively and equitably. It would also provide funding to educational institutions to build their capacity for delivering high-quality AI literacy programs. This includes supporting the development of infrastructure, acquiring necessary technology, and hiring or training faculty with expertise in AI.

Conduct regular, comprehensive assessments to gauge the current state of AI literacy among educators. These assessments would include surveys, interviews, and observational studies to gather qualitative and quantitative data on teachers’ knowledge, skills, and confidence in using AI in their classrooms across diverse educational settings. This data would then be used to address specific gaps and areas of need.

Conduct nationwide campaigns to raise awareness about the importance of AI literacy in education, prioritizing outreach efforts in underserved and rural areas to ensure that these communities receive the necessary information and resources. This can include localized campaigns, community meetings, and partnerships with local organizations.

Prepare and present annual reports to Congress and the public detailing the program’s achievements, challenges, and future plans. This ensures transparency and accountability in the program’s implementation and progress.

Regularly evaluate the effectiveness of AI literacy programs and assess their impact on teaching practices and student outcomes. Use this data to inform policy decisions and program improvements.

Proposed Timeline

Year 1: Formation and Setup
  Quarter 1: Congress passes legislation to establish the program. Allocate initial funding to support the establishment and initial operations of the program.
  Quarter 2: Formally establish the program’s administrative office and hire key staff. Develop and launch the program’s official website for public communication and resource dissemination.
  Quarter 3: Initiate a national needs assessment to determine the current state of AI literacy among educators. Develop AI literacy standards for K-12 teachers.
  Quarter 4: Establish AI literacy resource centers within community college and vocational school Centers of AI Excellence. Distribute resources and funding to selected pilot school districts and teacher training institutions.

Year 2: Implementation and Expansion
  Quarter 1: Evaluate pilot programs and integrate initial feedback to refine training materials and strategies. Expand resource distribution based on feedback from pilot programs.
  Quarter 2: Launch strategic partnerships with leading technology firms, academic institutions, and educational nonprofits to enhance resource hubs and professional development opportunities. Initiate public awareness campaigns to emphasize the importance of AI literacy in education.
  Quarter 3: Offer incentives for states to develop and implement AI literacy training programs for teachers. Continue to develop and refine AI literacy standards based on ongoing feedback and advancements in AI technology.
  Quarter 4: Review year-end progress and adjust strategies based on comprehensive evaluations. Prepare the first annual report to Congress and the public outlining achievements, challenges, and future plans.

Year 3 and Beyond: Maturation and Nationwide Implementation
  Scale up successful initiatives to a national level based on proven effectiveness and feedback.
  Continuously update the Professional Development Resource Hub with the latest AI educational tools and best practices.
  Regularly update AI literacy standards to reflect technological advancements and educational needs.
  Sustain focus on incentivizing states and expanding reach to underserved regions to ensure equitable AI education across all demographics.

Conclusion

This proposal expands upon Section D of the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, emphasizing the importance of building AI literacy to foster a deeper understanding before providing tools and resources. Additionally, this policy has been developed with reference to the Office of Educational Technology’s report on Artificial Intelligence and the Future of Teaching and Learning, as well as the 2024 National Education Technology Plan. These references underscore the critical need for comprehensive AI education and align with national strategies for integrating advanced technologies in education. 

We stand at a pivotal moment where our actions today will determine our students’ readiness for the world of tomorrow. Therefore, it is imperative for Congress to act swiftly to pass the necessary legislation to establish the NSF Teacher AI Literacy Development Program. Doing so will not only secure America’s technological leadership but also ensure that every student has the opportunity to succeed in the new digital age.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.

Frequently Asked Questions
How can we ensure that the AI literacy training is not biased or does not promote certain agendas, especially given the potential influence of technology companies involved in developing resources?

The program emphasizes developing AI literacy standards through a collaborative process involving educational experts, AI specialists, and teachers themselves. By including diverse perspectives and stakeholders, the goal is to create comprehensive and balanced training materials. Additionally, resources will be curated from a wide range of leading institutions, organizations, and companies to prevent any single entity from exerting undue influence. Regular evaluations and feedback loops will also help identify and address any potential biases.

How will this program address the digital divide and ensure equitable access to AI literacy training for teachers in underfunded schools and rural areas? Many districts may lack the necessary infrastructure and resources.

Ensuring equitable access to AI literacy training is a key priority of this program. The nationwide awareness campaigns will prioritize outreach efforts in underserved and rural areas. Additionally, the program will offer incentives and targeted funding for states to develop and implement AI literacy training programs, with a focus on supporting schools and districts with limited resources.

Given the rapid pace of AI advancements, how frequently will the training materials and resources need to be updated, and what is the long-term cost projection for keeping the program relevant?

The program acknowledges the need for continuous updating of AI literacy standards, training materials, and resources to reflect the latest advancements in AI technology. The proposal outlines plans for regular updates to the Professional Development Resource Hub, as well as periodic revisions to the AI literacy standards themselves. While specific timelines and cost projections are not provided, the program is designed with a long-term view, including strategic partnerships with leading institutions and technology firms to stay current with developments in the field. Annual reports to Congress will help assess the program’s effectiveness and inform decisions about future funding and resource allocation.

What metrics will be used to evaluate the effectiveness of the AI literacy training programs, and how will student outcomes be measured to justify the investment in this initiative?

The program emphasizes the importance of regular, comprehensive assessments to gauge the current state of AI literacy among educators. These assessments will include surveys, interviews, and observational studies to gather both qualitative and quantitative data on teachers’ knowledge, skills, and confidence in using AI in their classrooms across diverse educational settings. Additionally, the program aims to evaluate the effectiveness of AI literacy programs and assess their impact on teaching practices and student outcomes, though specific metrics are not outlined. The data gathered through these evaluations will be used to inform policy decisions, program improvements, and to justify continued investment in the initiative.

A NIST Foundation to Support the Agency’s AI Mandate

The National Institute of Standards and Technology (NIST) faces several obstacles to advancing its mission on artificial intelligence (AI) at a time when the field is rapidly advancing and consequences for falling short are wide-reaching. To enable NIST to quickly and effectively respond, Congress should authorize the establishment of a NIST Foundation to unlock additional resources, expertise, flexible funding mechanisms, and innovation, while ensuring the foundation is stood up with strong ethics and oversight mechanisms.

Challenge

The rapid advancement of AI presents unprecedented opportunities and complex challenges as it is increasingly integrated into the way that we work and live. The National Institute of Standards and Technology (NIST), an agency within the Department of Commerce, plays an important role in advancing AI-related research, measurement, evaluation, and technical standard setting. NIST has recently been given responsibilities under President Biden’s October 30, 2023, Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence. To support the implementation of the EO, NIST launched an AI Safety Institute (AISI), created an AI Safety Institute Consortium (AISIC), and released a strategic vision for AISI focused on safe and responsible AI innovation, among other actions.

While work is underway to implement Biden’s AI EO and deliver on NIST’s broader AI mandate, NIST faces persistent obstacles in its ability to quickly and effectively respond. For example, recent legislation like the Fiscal Responsibility Act of 2023 set discretionary spending limits for FY26 through FY29, which means less funding is available to support NIST’s programs. Even before this, NIST’s annual funding (around $1–1.3 billion) has remained a small fraction of the scale of the industries for which it is expected to set standards. Since FY22, NIST has received lower appropriations than it has requested.

In addition, according to a February 2023 Government Accountability Office report, NIST is struggling to attract the specialized science and technology (S&T) talent it needs, owing to stiff competition for technical talent, pay that is not competitive with the private sector, a gender-imbalanced culture, and difficulty transferring institutional knowledge when individuals leave the agency. Alongside this, NIST faces limitations on how it can work with the private sector and is subject to procurement processes that can be a barrier to innovation, an issue the agency has struggled with in years past, according to a September 2005 Inspector General report.

The consequences of NIST not fulfilling its AI mandate due to these challenges and limitations are wide-reaching: a lack of uniform AI standards across platforms and countries; reduced trust in and security of AI; limitations on AI innovation and commercialization; and the United States losing its place as a leading international voice on AI standards and governance, giving the Chinese government and Chinese companies a competitive edge as they seek to make China a world leader in artificial intelligence.

Opportunity

An agency-related foundation could play a crucial role in addressing these challenges and strengthening NIST’s AI mission. Agency-related nonprofit research foundations and corporations have long been used to support the research and development (R&D) mandates of federal agencies. They enable agencies to respond quickly to challenges and to leverage additional resources, expertise, flexible funding mechanisms, and innovation from the private sector, supporting service delivery and agency programmatic goals more efficiently and effectively.

One example is the CDC Foundation. In 1992, Congress passed legislation authorizing the creation of the CDC Foundation, an independent, 501(c)(3) public charity that supports the mandate of the Centers for Disease Control and Prevention (CDC) by facilitating strategic partnerships between the CDC and the philanthropic community and leveraging private-sector funds from individuals, philanthropies, and corporations. The CDC is legally able to capitalize on these private sector funds through two mechanisms: (1) Section 231 of the Public Health Service Act, which authorizes the Secretary of Health and Human Services “to accept on behalf of the United States gifts made unconditionally by will or otherwise for the benefit of the Service or for the carrying out of any of its functions,” and (2) the legislation that authorized the creation of the CDC Foundation, which establishes its governance structure and provides the CDC director the authority to accept funds and voluntary services from the foundation to aid and facilitate the CDC’s work. 

Since 1995, the CDC Foundation has raised $2.2 billion to support 1,400 public health programs in the United States and worldwide. The importance of this model was evident at the height of the COVID-19 pandemic, when the CDC Foundation supported the Centers by quickly raising and deploying resources in support of communities. In the same way that the CDC Foundation bolstered the CDC’s work during the greatest public health challenge in 100 years, a foundation could be critical in helping an agency like NIST deploy private, philanthropic funds from an independent source to quickly respond to the challenge and opportunity of AI’s advancement.

Another example of an agency-related entity is the newly established Foundation for Energy Security and Innovation (FESI), authorized by Congress via the 2022 CHIPS and Science Act following years of community advocacy to support the mission of the Department of Energy (DOE) in advancing energy technologies and promoting energy security. FESI released a Request for Information in February 2023 to seek input on DOE engagement opportunities with FESI and appointed its inaugural board of directors in May 2024.

NIST itself has demonstrated interest in the potential for expanded partnership mechanisms such as an agency-related foundation. In its 2019 report, the agency notes that “foundations have the potential to advance the accomplishment of agency missions by attracting private sector investment to accelerate technology maturation, transfer, and commercialization of an agency’s R&D outcomes.” NIST is uniquely suited to benefit from an agency-related foundation and its partnership flexibilities, given that it works on behalf of, and in collaboration with, industry on R&D and to develop standards, measurements, regulations, and guidance.

But how could NIST actually leverage a foundation? A June 2024 paper from the Institute for Progress presents ideas for how an agency-related foundation could support NIST’s work on AI and emerging tech. These include setting up a technical fellowship program that can compete with formidable companies in the AI space for top talent; quickly raising money and deploying resources to conduct “rapid capability evaluations for the risks and benefits of new AI systems”; and hosting large-scale prize competitions to develop “complex capabilities benchmarks for artificial intelligence” that would not be subject to the usual monetary limitations and procedural burdens.

A NIST Foundation, of course, would have implications for the agency’s work beyond AI and other emerging technologies. Interviews with experts at the Federation of American Scientists working across various S&T domains have revealed additional use cases for a NIST Foundation that map to the agency’s topical areas, including but not limited to: 

Critical to the success of a foundation model is for it to have the funding needed to support NIST’s mission and programs. While it is difficult to estimate exactly how much funding a NIST Foundation could draw in from external sources, there is clearly significant appetite from philanthropy to invest in AI research and initiatives. Reporting from Inside Philanthropy uncovered that some of the biggest philanthropic institutions and individual donors—such as Eric and Wendy Schmidt and Open Philanthropy—have donated approximately $1.5 billion to date to AI work. And in November 2023, 10 major philanthropies announced they were committing $200 million to fund “public interest efforts to mitigate AI harms and promote responsible use and innovation.”

Plan of Action

In order to enable NIST to more effectively and efficiently deliver on its mission, especially as it relates to rapid advancement in AI, Congress should authorize the establishment of a NIST Foundation. While the structure of agency-related foundations may vary depending on the agency they support, they all have several high-level elements in common, including but not limited to:

The activities of existing agency-related foundations have left them subject to criticism over potential conflicts of interest. A 2019 Congressional Research Service report highlights several case studies of concerning industry influence over foundation activities, including allegations that the National Football League (NFL) attempted to influence the selection of research applicants for a National Institutes of Health (NIH) study on chronic traumatic encephalopathy that the NFL funded through the Foundation for the National Institutes of Health (FNIH), as well as the implications of the Coca-Cola Company donating to the CDC Foundation for obesity and diet research.

To mitigate conflicts of interest and strengthen transparency and oversight, a NIST Foundation should adopt rigorous policies that maintain a clear separation between external donations and decisions about projects. Foundation policies and communications with donors should make explicit that donations will not determine project focus and that donors will have no decision-making authority over project management. Donors would have to disclose any potential interests in foundation projects they would like to fund and would not be allowed to be listed as “anonymous” in the foundation’s regular financial reporting and auditing processes.

Additionally, instituting mechanisms for engaging with a diverse range of stakeholders is key to ensuring the Foundation’s activities align with NIST’s mission and programs. One option is to mandate the establishment of a foundation advisory board composed of topical committees that map to those at NIST (such as AI) and staffed with experts from industry, academia, government, and advocacy groups who can provide guidance on strategic priorities and proposed initiatives. Many initiatives that the foundation might engage in on behalf of NIST, such as AI safety, would also benefit from strong public engagement (through required public forums and diverse stakeholder focus groups preceding program stand-up) to ensure that partnerships and programs address a broad range of potential ethical considerations and serve a public benefit.

Alongside specific structural components for a NIST Foundation, metrics will help measure its effectiveness. While quantitative measures only tell half the story, they are a starting point for evaluating whether a foundation is delivering its intended impact. Examples of potential metrics include:

Conclusion

Given financial and structural constraints, NIST risks being unable to quickly and efficiently fulfill its mandate related to AI, at a time when innovative technologies, systems, and governance structures are sorely needed to keep pace with a rapidly advancing field. Establishing a NIST Foundation to support the agency’s AI work and other priorities would bolster NIST’s capacity to innovate and set technical standards, thus encouraging the safe, reliable, and ethical deployment of AI technologies. It would also increase trust in AI technologies and lead to greater uptake of AI across various sectors where it could drive economic growth, improve public services, and bolster U.S. global competitiveness. And it would help make the case for leveraging public-private partnership models to tackle other critical S&T priorities.

This idea is part of our AI Legislation Policy Sprint. To see all of the policy ideas spanning innovation, education, healthcare, and trust, safety, and privacy, head to our sprint landing page.