Revitalizing Federal Jobs Data: Unleashing the Potential of Emerging Roles

Emerging technologies and creative innovation are pivotal economic pillars for the future of the United States. These sectors not only promise economic growth but also offer avenues for social inclusion and environmental sustainability. However, the federal government lacks reliable and comprehensive data on these sectors, which hampers its ability to design and implement effective policies and programs. A key reason for this data gap is the outdated and inadequate job categories and classifications used by the Bureau of Labor Statistics (BLS).

The BLS is the main source of official statistics on employment, wages, and occupations in the U.S. Part of the agency’s role is to categorize industries, which helps states, researchers, and other outside parties measure and understand the size of particular industries or segments of the economy. The BLS also maintains the Standard Occupational Classification (SOC) system, which categorizes and defines jobs based on their duties, skills, and education requirements. This is how all federal workers and contracted federal workers are classified: for an agency to create and fill a role, it needs a classification, or SOC. State and private employers also use these classifications and data to allocate funding and determine benefits for different kinds of positions.

Where no classification (SOC) exists for a job, it is unclear whether hiring and contracting happen according to programmatic intent and in a timely manner. This is particularly concerning for employers and federal agencies that need to align numerous jobs with the provisions of Justice40, the Inflation Reduction Act, and the newly created American Climate Corps. Many of the roles envisioned by the American Climate Corps do not have classifications, which poses a significant barrier to effective program and policy design related to green and tech jobs.

The SOC system is updated roughly once every 10 years, but neither it nor the industry categories is on a set comprehensive review schedule. Updates are topical, with the last broad revision taking place in 2018. Unemployment reports and wage data are updated annually, and other topics less predictably. Work on SOC codes and categories for what are broadly defined as “green jobs” stopped in 2013 due to sequestration. As a result, BLS data may not capture the current and future trends and dynamics of the green and innovation economies, which are constantly evolving and growing.

Because the BLS does not have a separate category for green jobs, it identifies them through a variety of industry and occupation codes, ranging from restaurant-industry SOCs to construction. Classifying positions this way cannot reflect the cross-cutting, interdisciplinary nature of green jobs. Moreover, the process may not account for their variations and nuances, such as environmental impact, social value, and skill level. For example, someone who wants to work with solar panels will find a construction classification, but nothing for community design, specialized finance, or the other complementary job typologies needed for projects at scale.

Similarly, the BLS does not have a separate category for tech jobs; it identifies them through the “Information and Communication Technologies” occupational groups of the SOC system. Again, this approach may not adequately reflect the diversity and complexity of tech jobs, which can involve new and emerging skills and technologies not yet recognized by the BLS. There are no classifications for roles associated with machine learning or artificial intelligence: where the private sector has a much-discussed large language model trainer role, the federal system has no such classification. Without such classifications, appropriate skills matching, resource allocation, and measurement of these jobs’ numbers and economic impact will be difficult, if not impossible. Classifying tech jobs in this manner may also fail to account for their interplay and integration with other sectors, such as health care, education, and manufacturing.

These data limitations have serious implications for policy design and evaluation. Without accurate and timely data on green and tech jobs, the federal government may not be able to assess the demand and supply of these jobs, identify skill gaps and training needs, allocate resources, and measure the outcomes and impacts of its policies and programs. This will result in missed opportunities, wasted resources, and suboptimal outcomes.

There is a need to update the BLS job categories and classifications to better reflect the realities and potential of the green and innovation economies. This can be achieved through targeted, strategic policy measures.

By updating its job categories and classifications, the BLS can ensure that federal data and statistics accurately mirror the ever-evolving job market, laying the foundation for effective policy design and evaluation in the realms of green and tech jobs. This commitment can contribute to the development of a workforce that not only meets economic needs but also aligns with the nation’s environmental aspirations.

AI in Action: Recommendations for AI Policy in Health, Education, and Labor

The Ranking Member of the Senate Committee on Health, Education, Labor, & Pensions (HELP) recently requested information regarding AI in our healthcare system, in the classroom, and in the workplace. The Federation of American Scientists was happy to provide feedback on the Committee’s questions. Targeted investments and a clear-eyed vision of the future of AI in these domains will allow the U.S. to reap more of the potential benefits of AI while preventing some of the costs.

This response provides recommendations on leveraging AI to improve education, healthcare, and the future of work.

Overall, with thoughtful oversight and human-centric design, AI promises immense benefits across these sectors. But responsible governance is crucial, as is inclusive development and ongoing risk assessment. By bringing together stakeholders, the U.S. can lead in advancing ethical, high-impact applications of AI.


The Federation of American Scientists (FAS) co-leads the Alliance for Learning Innovation (ALI), a coalition of cross-sector organizations seeking to build a stronger, more competitive research and development (R&D) infrastructure in U.S. education. As was noted in the ALI Coalition’s response to White House Office of Science & Technology Policy’s “Request for Information: National Priorities for Artificial Intelligence,” FAS sees great promise and opportunity for artificial intelligence to improve education, equity, economic opportunity, and national security. In order to realize this opportunity and mitigate risks, we must ensure that the U.S. has a robust, inclusive, and updated education R&D ecosystem that crosscuts federal agencies.

What Should The Federal Role Be In Supporting AI In Education?

Research And Development

The U.S. government should prioritize funding and supporting R&D in the field of AI to ensure that the U.S. is on the cutting edge of this technology. One strong existing federal example is the AI Institutes program supported by the National Science Foundation (NSF) and the U.S. Department of Education (ED). Earlier this year, NSF and the Institute of Education Sciences (IES) established the AI Institute for Exceptional Children, which capitalizes on the latest AI research to serve children with speech and language pathology needs. Communities would benefit from additional AI Institutes that meet the moment and deliver solutions for today’s teaching and learning challenges.

Expanding Research Grant Programs

Federal agencies, and specifically IES, should build upon their existing training programs for broadening participation and create dedicated research grant programs for minority-serving institutions with an emphasis on AI research. While the IES Pathways program has had success in diversifying education research training programs, more needs to be done at the predoctoral and postdoctoral levels.

National Center For Advanced Development In Education

Another key opportunity to support transformational AI research and development in the United States is to establish a National Center for Advanced Development in Education (NCADE). Modeled after the Defense Advanced Research Projects Agency (DARPA), NCADE would support large-scale, innovative projects that require a more nimble and responsive program management approach than is currently in place. The Center would focus on breakthrough technologies, new pedagogical approaches, innovative learning models, and more efficient, reliable, and valid forms of assessment. By creating NCADE, Congress can seed the development and use of artificial intelligence to support teaching, personalize learning, support ELL students, and analyze speech and reading.

How Can We Ensure That AI Systems Are Designed, Developed, And Deployed In A Manner That Protects People’s Rights And Safety?

First and foremost, we need to ensure that underserved communities, minors, individuals with disabilities and the civil rights organizations that support them are at the table throughout the design process for AI tools and products. In particular, we need to ensure that research is led and driven locally and by those who are closest to the challenges, namely educators, parents, students, and local and state leaders.

When thoughtfully and inclusively designed, AI has the potential to enhance equity by providing more personalized learning for students and by supporting educators to address the individual and diverse needs in their classrooms. For example, AI could be utilized in teacher preparation programs to ensure that educators have access to more diverse experiences during their pre-service experiences. AI can also provide benefits and services to students and families who currently do not have access to those resources due to a lack of human capital.


What Role Will AI Play In Creating New Jobs?

AI can serve as a powerful tool for workforce systems, employers, and employees alike in order to drive job creation and upskilling. For instance, investment in large language models that scrape and synthesize real-time labor market information (LMI) can better inform employers and industry consortia about pervasive skills gaps. Currently, most advanced real-time LMI products exist behind paywalls, but Congress should consider investing in this information as a public good to forge a more competitive labor market.

The wide-scale commercialization of AI/ML-based products and services will also create new types of jobs and occupations for workers. Contrary to popular belief, many industries that face some level of automation will still require trained employees to pivot to emerging needs in a way that offsets the obsolescence of other roles. Through place-based partnerships between employers and training institutions (e.g., community colleges, work-based learning programs, etc.), localities can reinvest in their workers to provide transition opportunities and close labor market gaps.

What Role Will AI Standards Play In Regulatory And Self-Regulatory Efforts?

AI standards will serve as a crucial foundation as the U.S. government and industries navigate AI’s impacts on the workforce. The NIST AI Risk Management Framework provides a methodology for organizations to assess and mitigate risks across the AI lifecycle. This could enable more responsible automation in HR contexts—for example, helping ensure bias mitigation in algorithmic hiring tools. On the policy side, lawmakers drafting regulations around AI and employment will likely reference and even codify elements of the Framework.

On the industry side, responsible technology leaders are already using the NIST AI RMF for self-regulation. By proactively auditing and mitigating risks in internal AI systems, companies can build public trust and reduce the need for excessive government intervention. Though policymakers still have an oversight role, widespread self-regulation using shared frameworks is at this point the most efficient path for safe and responsible AI across the labor market.


What Updates To The Regulatory Frameworks For Drugs And Biologics Should Congress Consider To Facilitate Innovation In AI Applications?

Congress has an opportunity to update regulations to enable responsible innovation and oversight for AI applications in biopharma. For example, Congress could consider expanding the FDA’s mandate and capacity to require upfront risk assessments before deployment of particularly high-risk or dual-use bio-AI systems. This approach is currently used by DARPA for some autonomous and biological technologies.

Additionally, thoughtful reporting requirements could be instituted for entities developing advanced bio-AI models above a certain capability threshold. This transparency would allow for monitoring of dual-use risks while avoiding overregulation of basic research. 

How Can The FDA Improve The Use Of AI In Medical Devices? 

Ensuring That Analysis Of Subpopulation Performance Is A Key Component Of The Review Process For AI Tools

Analyzing data on the subpopulation performance of medical devices should be one key component of any comprehensive effort to advance equity in medical innovation. We appreciate the recommendations in the GOP HELP white paper asking developers to document the performance of their devices on various subpopulations when considering updates and modifications. It will be essential to assess subpopulation performance to mitigate harms that may otherwise arise—especially if an argument for equity is made for a certain product. 

Clarifying The Role Of Real-World Evidence In Approvals

Identifying performance concerns within subpopulations and across different medical environments will most likely require collecting real-world evidence on how these tools perform in the wild. The role of real-world evidence in regulatory approval, market surveillance, and updates should be defined more clearly in this guidance.

How Can AI Be Best Adopted To Not Inappropriately Deny Patients Care?

AI Centers of Excellence could be established to develop demonstration AI tools for specific care populations and care environments. For example, FAS has published a Day One Memo proposing an AI Center of Excellence for Maternal Health to bring together data sources, then analyze, diagnose, and address maternal health disparities, all while demonstrating trustworthy and responsible AI principles. The benefits of AI Centers of Excellence are two-fold: they provide an opportunity for coordination across the federal government, and they can evaluate existing datasets to establish high-priority, high-impact applications of AI-enabled research for improving clinical care guidelines and tools for healthcare providers. 

The AI Center of Excellence model demonstrates the power of coordinating and thoughtfully applying AI tools across disparate federal data sources to address urgent public health needs. Similar centers could be established to tackle other complex challenges at the intersection of health, environmental, socioeconomic, and demographic factors. For example, an AI Center focused on childhood asthma could integrate housing data, EPA air quality data, Medicaid records, and school absenteeism data to understand and predict asthma triggers.

Harnessing the Promise of AI

Artificial intelligence holds tremendous potential to transform education, healthcare, and work for the better. But realizing these benefits in an equitable, ethical way requires proactive collaboration between policymakers, researchers, civil society groups, and industry.

The recommendations outlined here aim to strike a balance—enabling innovation and growth while centering human needs and mitigating risks. This requires robust funding for R&D, modernized regulations, voluntary standards, and inclusive design principles. Ongoing oversight and impact evaluation will be key, as will coordination across agencies and stakeholders.

Defense Science Board on Avoiding Strategic Surprise

The Department of Defense needs to take several steps in order to avoid “strategic surprise” by an adversary over the coming decade, according to a new study from the Defense Science Board, a Pentagon advisory body.

Among those steps, “Counterintelligence must be enhanced with urgency.” See DSB Summer Study Report on Strategic Surprise, July 2015.

The Board called for “continuous monitoring” of cleared personnel who have access to particularly sensitive information. “The use of big data analytics could allow DoD to track anomalies in the behaviors of cleared personnel in order to thwart the insider threat.”

“Continuous monitoring” involves constant surveillance of an employee’s activities (especially online activities), and it goes beyond the “continuous evaluation” of potentially derogatory information that is an emerging part of the current insider threat program.

“Insider actions often generate suspicious indicators in multiple and organizationally separate domains–physical, personnel, and cyber security. The use of big data and creative analytics can be carefully tuned to the style and workflow of the particular organization and can help to audit for integrity as well as individual user legitimacy,” the DSB report said.

The DSB report broadly addressed opportunities and vulnerabilities in eight domains: countering nuclear proliferation; ballistic and cruise missile defense; space security; undersea warfare; cyber (“The Department should treat cyber as a military capability of the highest priority”); communications and positioning, navigation, and timing (PNT); counterintelligence; and logistics resilience.

To an outside reader, the DSB report seems one-dimensional and oddly disconnected from current realities. It does not consider whether the pursuit of any of its recommended courses of actions could have unintended consequences. It does not inquire whether there are high-level national policies that would make strategic surprise more or less likely. And it does not acknowledge the recurring failure of the budget process to produce a defense budget that is responsive to national requirements in a timely fashion.