FY24 NDAA AI Tracker

As both the House and Senate gear up to vote on the National Defense Authorization Act (NDAA), FAS is launching this live blog post to track all proposals around artificial intelligence (AI) that have been included in the NDAA. In this rapidly evolving field, these provisions indicate how AI now plays a pivotal role in our defense strategies and national security framework. This tracker will be updated as major developments occur.

Senate NDAA. This table summarizes the AI-related provisions in the version of the Senate NDAA that advanced out of committee on July 11. Links to the sections of the bill describing these provisions can be found in the “Section” column. Provisions added in the manager’s package are in red font. Updates from the Senate Appropriations Committee and the House NDAA are in blue.

Senate NDAA Provisions

| Provision | Summary | Section |
| --- | --- | --- |
| Generative AI Detection and Watermark Competition | Directs the Under Secretary of Defense for Research and Engineering to create a competition for technology that detects and watermarks the use of generative artificial intelligence. | 218 |
| DoD Prize Competitions for Business Systems Modernization | Authorizes competitions to improve military business systems, emphasizing the integration of AI where possible. | 221 |
| Broad review and update of DoD AI Strategy | Directs the Secretary of Defense to periodically review and update the department’s 2018 AI strategy, and to develop and issue new guidance on a broad range of AI issues, including adoption of AI within DoD, ethical principles for AI, mitigation of bias in AI, cybersecurity of generative AI, and more. | 222 |
| Strategy and assessment on use of automation and AI for shipyard optimization | Directs the development of a strategy on the use of AI for Navy shipyard logistics. | 332 |
| Strategy for talent development and management of DoD Computer Programming Workforce | Establishes a policy for “appropriate” talent development and management policies, including for AI skills. | 1081 |
| Sense of the Senate Resolution in Support of NATO | Offers support for NATO and NATO’s DIANA program as critical to AI and other strategic priorities. | 1238, 1239 |
| Enhancing defense partnership with India | Directs DoD to enhance the defense partnership with India, including collaboration on AI as one potential priority area. | 1251 |
| Specification of Duties for Electronic Warfare Executive Committee | Amends U.S. Code to specify the duties of the Electronic Warfare Executive Committee, including an assessment of the need for automated, AI/ML-based electronic warfare capabilities. | 1541 |
| Next Generation Cyber Red Teams | Directs DoD and NSA to submit a plan to modernize cyber red-teaming capabilities, ensuring the ability to emulate possible threats, including from AI. | 1604 |
| Management of Data Assets by Chief Digital Officer | Outlines responsibilities for the CDAO to provide data analytics capabilities needed for the “global cyber-social domain.” | 1605 |
| Developing Digital Content Provenance Course | Directs the Director of Defense Media Activity to develop a course on digital content provenance, including digital forgeries developed with AI systems, e.g., AI-generated “deepfakes.” | 1622 |
| Report on Artificial Intelligence Regulation in Financial Services Industry | Directs regulators of the financial services industry to produce reports analyzing how AI is and ought to be used by the industry and by regulators. | 6096 |
| AI Bug Bounty Programs | Directs the CDAO to develop a bug bounty program for AI foundation models that are being integrated into DoD operations. | 6097 |
| Vulnerability analysis study for AI-enabled military applications | Directs the CDAO to complete a study analyzing vulnerabilities to the privacy, security, and accuracy of AI-enabled military applications, as well as R&D needs for such applications, including foundation models. | 6098 |
| Report on Data Sharing and Coordination | Directs the Secretary of Defense to submit a report on ways to improve data sharing across DoD. | 6099 |
| Establishment of Chief AI Officer of the Department of State | Establishes within the Department of State a Chief AI Officer, who may also serve as Chief Data Officer, to oversee adoption of AI in the Department and to advise the Secretary of State on the use of AI in conducting data-informed diplomacy. | 6303 |

House NDAA. This table summarizes the AI-related provisions in the version of the House NDAA that advanced out of committee. Links to the sections of the bill describing these provisions can be found in the “Section” column.

House NDAA Provisions

| Provision | Summary | Section |
| --- | --- | --- |
| Process to ensure the responsible development and use of artificial intelligence | Directs the CDAO to develop a process for assessing whether AI technology used by DoD is functioning responsibly, including through the development of clear standards, and to amend AI technology as needed. | 220 |
| Intellectual property strategy | Directs DoD to develop an intellectual property strategy to enhance capabilities in procurement of emerging technologies and capabilities. | 263 |
| Study on establishment of centralized platform for development and testing of autonomy software | Directs the Secretary of Defense and the CDAO to conduct a study assessing the feasibility and advisability of developing a centralized platform to develop and test autonomy software. | 264 |
| Congressional notification of changes to Department of Defense policy on autonomy in weapon systems | Requires that Congress be notified of changes to DoD Directive 3000.09 (on autonomy in weapon systems) within 30 days of any changes. | 266 |
| Sense of Congress on dual use innovative technology for the robotic combat vehicle of the Army | Offers support for the Army’s acquisition strategy for the Robotic Combat Vehicle program and recommends that the Army consider a similar framework for future similar programs. | 267 |
| Pilot program on optimization of aerial refueling and fuel management in contested logistics environments through use of artificial intelligence | Directs the CDAO, USD(A&S), and the Air Force to develop a pilot program to optimize the logistics of aerial refueling and to consider the use of AI technology to help with this mission. | 266 |
| Modification to acquisition authority of the senior official with principal responsibility for artificial intelligence and machine learning | Increases annual acquisition authority for the CDAO from $75M to $125M and extends this authority from 2025 to 2029. | 827 |
| Framework for classification of autonomous capabilities | Directs the CDAO and others within DoD to establish a department-wide classification framework for autonomous capabilities to enable easier use of autonomous systems in the department. | 930 |

Funding Comparison. The following tables compare the funding requested in the President’s budget with the funds authorized in the current House and Senate versions of the NDAA. All amounts are in thousands of dollars.

Funding Comparison

| Program | Requested | Authorized in House | Authorized in Senate | NEW! Passed in Senate Approps 7/27 | NEW! Passed in full House 9/28 |
| --- | --- | --- | --- | --- | --- |
| Other Procurement, Army–Engineer (non-construction) equipment: Robotics and Applique Systems | 68,893 | 68,893 | 68,893 | 65,118 (-8,775 for “Effort previously funded,” +5,000 for “Soldier borne sensor”) | 73,893 (+5,000 for “Soldier borne sensor”) |
| AI/ML Basic Research, Army | 10,708 | 10,708 | 10,708 | 10,708 | 10,708 |
| AI/ML Technologies, Army | 24,142 | 24,142 | 24,142 | 27,142 (+3,000 for “Automated battle damage assessment and adjust fire”) | 24 |
| AI/ML Advanced Technologies, Army | 13,187 | 15,687 (+2,500 for “Autonomous Long Range Resupply”) | 18,187 (+5,000 for “Tactical AI & ML”) | 24,687 (+11,500 for “Cognitive computing architecture for military systems”) | 13,187 |
| AI Decision Aids for Army Missile Defense Systems Integration | 0 | 6,000 | 0 | 0 | 0 |
| Robotics Development, Army | 3,024 | 3,024 | 3,024 | 3,024 | 3,024 |
| Ground Robotics, Army | 35,319 | 35,319 | 35,319 | 17,337 (-17,982 for “SMET Inc II early to need”) | 45,319 (+10,000 for “common robotic controller”) |
| Applied Research, Navy: Long endurance mobile autonomous passive acoustic sensing research | 0 | 2,500 | 0 | 0 | 0 |
| Advanced Components, Navy: Autonomous surface and underwater dual-modality vehicles | 0 | 5,000 | 0 | 3,000 | 0 |
| Air Force University Affiliated Research Center (UARC)–Tactical Autonomy | 8,018 | 8,018 | 8,018 | 8,018 | 8,018 |
| Air Force Applied Research: Secure Interference Avoiding Connectivity of Autonomous AI Machines | 0 | 3,000 | 5,000 | 0 | 0 |
| Air Force Advanced Technology Development: Semiautonomous adversary air platform | 0 | 0 | 10,000 | 0 | 0 |
| Advanced Technology Development, Air Force: High accuracy robotics | 0 | 2,500 | 0 | 0 | 0 |
| Air Force Autonomous Collaborative Platforms | 118,826 | 176,013 (+75,000 for Project 647123: Air-Air Refueling TMRR, -17,813 for technical realignment) | 101,013 (-17,813 for DAF requested realignment of funds) | 101,013 | 101,013 |
| Space Force: Machine Learning Techniques for Radio Frequency (RF) Signal Monitoring and Interference Detection | 0 | 10,000 | 0 | 0 | 0 |
| Defense-wide: Autonomous resupply for contested logistics | 0 | 2,500 | 0 | 0 | 0 |
| Military Construction–Pennsylvania, Navy, Naval Surface Warfare Center Philadelphia: AI Machinery Control Development Center | 0 | 88,200 | 88,200 | 0 | 0 |
| Intelligent Autonomous Systems for Seabed Warfare | 0 | 0 | 7,000 | 5,000 | 0 |

Funding for Office of Chief Digital and Artificial Intelligence Officer

| Program | Requested | Authorized in House | Authorized in Senate | NEW! Passed in Senate Approps | NEW! Passed in full House |
| --- | --- | --- | --- | --- | --- |
| Advanced Component Development and Prototypes | 34,350 | 34,350 | 34,350 | 34,350 | 34,350 |
| System Development and Demonstration | 615,245 | 570,246 (-40,000 for “insufficient justification,” -5,000 for “program decrease”) | 615,246 | 246,003 (-369,243, mostly for functional transfers to JADC2 and Alpha-1) | 704,527 (+89,281, mostly for “management innovation pilot” and transfers from other programs for “enterprise digital alignment”) |
| Research, Development, Test, and Evaluation | 17,247 | 17,247 | 17,247 | 6,882 (-10,365, “Functional transfer to line 130B for ALPHA-1”) | 13,447 (-3,800 for “excess growth”) |
| Senior Leadership Training Courses | 0 | 2,750 | 0 | 0 | 0 |
| ALPHA-1 | 0 | 0 | 0 | 222,723 | 0 |


On Senate Approps Provisions

The Senate Appropriations Committee generally provided, or exceeded, what was requested in the White House’s budget for artificial intelligence (AI) and machine learning (ML). AI was one of the top-line takeaways from the Committee’s summary of the defense appropriations bill. Particular attention has been paid to initiatives that cut across the Department of Defense, especially the Chief Digital and Artificial Intelligence Office (CDAO) and a new initiative called Alpha-1. The Committee is supportive of Joint All-Domain Command and Control (JADC2) integration and the recommendations of the National Security Commission on Artificial Intelligence (NSCAI).

On House final bill provisions

Like the Senate Appropriations bill, the House of Representatives’ final bill generally provided or exceeded what was requested in the White House budget regarding AI and ML. However, in contrast to the Senate Appropriations bill, AI was not a particularly high-priority takeaway in the House’s summary. The only note about AI in the House Appropriations Committee’s summary of the bill was in the context of digital transformation of business practices. Program increases were spread throughout the branches’ Research, Development, Test, and Evaluation budgets, with a particular concentration of increased funding for the Defense Innovation Unit’s AI-related budget.

Systems Thinking In Entrepreneurship Or: How I Learned To Stop Worrying And Love “Entrepreneurial Ecosystems”

As someone who works remotely and travels quite a long way to be with my colleagues, I really value my “water cooler moments” in the FAS office, when I have them. The idea for this series came from one such moment, when Josh Schoop and I were sharing a sparkling water break. Systems thinking, we realized, is a through line in many parts of our work, and part of the mental model we share that makes for effective change-making in complex, adaptive systems. In the geekiest possible terms:

A diagram of 'water cooler conversations' from a Systems Thinking perspective
Figure 1: Why Water Cooler Conversations Work

Systems analysis had been a feature of Josh’s dissertation, while I had had an opportunity to study a slightly more “quant” version of the same concepts under John Sterman at MIT Sloan, through my System Dynamics coursework. The more we thought about it, the more we saw that systems thinking and system dynamics were present across the team at FAS–from our brilliant colleague Alice Wu, who had recently given a presentation on tipping points, to folks who had studied the topic more formally as engineers, or as students at Michigan and MIT. This led to the first meeting of our FAS “Systems Thinking Caucus” and inspired a series of blog posts intended to make this philosophical through line clearer. This post is the first; it describes how and why systems thinking is so important in the context of entrepreneurship policy, and how systems modeling can help us better understand which policies are effective.


The first time I heard someone described as an “ecosystem builder,” I am pretty sure that my eyes rolled involuntarily. The entrepreneurial community, which I have spent my career supporting, building, and growing, has been my professional home for the last 15 years. I came to this work not out of academia, but out of experience as an entrepreneur and leader of entrepreneur support programs. As a result, I’ve always taken a pragmatic approach to my work, and avoided (even derided) buzzwords that make it harder to communicate about our priorities and goals. In the world of tech startups, in which so much of my work has roots, buzzwords from “MVP” to “traction” are almost a compulsion. Calling a community an “ecosystem” seemed no different to me, and totally unnecessary. 

And yet, over the years, I’ve come to tolerate, understand, and eventually embrace “ecosystems.” Not because it comes naturally, and not because it’s the easiest word to understand, but because it’s the most accurate descriptor of my experience and the dynamics I’ve witnessed first-hand. 

So what, exactly, are innovation ecosystems? 

My understanding of innovation ecosystems is grounded in the experience of navigating one in my hometown of Kansas City: first as a newly minted entrepreneur desperately seeking help understanding how to do taxes, and later as a leader of an entrepreneur support organization (ESO), a philanthropic funder, and most recently an angel investor. It’s also informed by the academic work of Dr. Fiona Murray and Dr. Phil Budden. The first time I saw their stakeholder model of innovation ecosystems, it crystallized what I had learned through 15 years of trial and error into a simple framework. It resonated fully with what I had seen firsthand as an entrepreneur desperate for help and advice–that innovation ecosystems are fundamentally made up of people and institutions that generally fall into the same categories: entrepreneurs, risk capital, universities, government, and corporations.

Over time–both as a student and as an ecosystem builder–I came to see the complexity embedded in this seemingly simple idea and evolved my view. Today, I amend that model of innovation ecosystems to, essentially, split universities into two stakeholder groups: research institutions and workforce development. I take this view because, though not every postsecondary institution is a world-leading research university like MIT, smaller and less research-focused colleges and universities play important roles in an innovation ecosystem. Where is the room for institutions like community colleges, workforce development boards, or even libraries in a discussion that is dominated by the need to commercialize federally funded research? Two goals–the production of human capital and the production of intellectual property–can also sometimes be in tension in larger universities, and thus are usually represented by different people with different ambitions and incentives. The concerns of a tech transfer office leader are very different from those of a professor in an engineering or business school, though they work for the same institution and may share the same overarching aspirations for a community. Splitting the university stakeholder into two different stakeholder groups makes the most sense to me–but the rest of the stakeholder model comes directly from Dr. Murray and Dr. Budden.

IMAGE: An innovation ecosystem stakeholder model: a network of labeled nodes, including entrepreneur, workforce, research, corporation, government, and capital nodes, each connected to the others.
Figure 2: Innovation Ecosystem Stakeholder Model

One important consideration in thinking about innovation ecosystems is that boundaries really do matter. Innovation ecosystems are characterized by the cooperation and coordination of these stakeholder groups–but not everything these stakeholders do is germane to their participation in the ecosystem, even when it’s relevant to the industry that the group is trying to build or support. 

As an example, imagine a community that is working to build a biotech innovation ecosystem. Does the relocation of a new biotech company to the area meaningfully improve the ecosystem? Well, that depends! It might, if that company actively engages in efforts to build the ecosystem: say, by directing an executive to serve on the board of an ecosystem-building nonprofit, helping to inform workforce development programs relevant to its talent needs, instructing its internal VC to attend the local accelerator’s demo day, offering dormant lab space in its core facility to a cash-strapped startup at cost, or engaging in sponsored research with the local university. Relocation of the company may not improve the ecosystem if the company simply happens to be working in the targeted industry and receives a relocation tax credit. In short, by itself, shared work between two stakeholders on an industry theme does not constitute ecosystem building. That shared work must advance a vision shared by all of the stakeholders that are core to the work.

Who are the stakeholders in innovation ecosystems? 

Innovation ecosystems are fundamentally made up of six different kinds of stakeholders, who, ideally, work together to advance a shared vision grounded in a desire to make the entrepreneurial experience easier. One of the mistakes I often see in efforts to build innovation ecosystems is an imbalance or an absence of a critical stakeholder group. Building innovation ecosystems is not just about involving many people (though it helps); it’s about involving people who represent different institutions and can help influence those institutions to deploy resources in support of a common effort. Ensuring stakeholder engagement is not a passive box-checking activity, but an active resource-gathering one.

An innovation ecosystem in which one or more stakeholders is absent will likely struggle to make an impact. Entrepreneurs with no access to capital don’t go very far, nor do economic development efforts without government buy-in, or a workforce training program without employers. 

In the context of today’s bevy of federal innovation grant opportunities with 60-day deadlines, it can be tempting to “go to war with the army you have” instead of prioritizing efforts to build relationships with new corporate partners or VCs. But how would you feel if you were “invited” to do a lot of work and deploy your limited resources to advance a plan that you had no hand in developing? Ecosystem efforts that invest time in building relationships and trust early will benefit from their coordination, regardless of federal funding.

These six stakeholder groups are listed in Figure 2: entrepreneurs, risk capital, research institutions, workforce development institutions, government, and corporations.

In the context of regional, place-based innovation clusters (including tech hubs), this stakeholder model is a tool that can help a burgeoning coalition both assess the quality and capacity of their ecosystem in relation to a specific technology area and provide a guide to prompt broad convening activities. From the standpoint of a government funder of innovation ecosystems, this model can be used as a foundation for conducting due diligence on the breadth and engagement of emerging coalitions. It can also be used to help articulate the shortcomings of a given community’s engagements, to highlight ecosystem strengths and weaknesses, and to design support and communities of practice that convene stakeholder groups across communities.

What about entrepreneur support organizations (ESOs)? What about philanthropy? Where do they fit into the model?

When I introduce this model to other ecosystem builders, one of the most common questions I get is, “where do ESOs fit in?” Most ESOs like to think of themselves as aligned with entrepreneurs, but that merits a few cautionary notes. First, the critical question you should ask to figure out where an ESO, a Chamber, or any other shape-shifting organization fits into this model is, “what is their incentive structure?” That is to say, the most important thing is to understand to whom an organization is accountable. When I worked for the Enterprise Center in Johnson County, despite the fact that I would have sworn up and down that I belonged in the “E” category with the entrepreneurs I served, our sustaining funding was provided by the county government. My core incentive was to protect the interests of a political subdivision of the metro area, and a perceived failure to do that would have likely resulted in our organization’s funding being cut (or at least, in my being fired from it). That means that I truly was a “G,” or a government stakeholder. So, intrepid ESO leader, unless the people who fund, hire, and fire you are mostly entrepreneurs, you’re likely not an “E.”

The second danger of assuming that ESOs are, in fact, entrepreneurs is that it often leads to a lack of actual entrepreneurs in the conversation. ESOs stand in for entrepreneurs who are too busy to make it to the meeting. But the reality is that even the most well-meaning ESOs have a different incentive structure than entrepreneurs–meaning that it is very difficult for them to naturally represent the same views. Take, for instance, a community survey of entrepreneurs that finds that entrepreneurs see “access to capital” as the primary barrier to their growth in a given community. In my experience, ESOs generally take that somewhat literally, and begin efforts to raise investment funds. Entrepreneurs, on the other hand, who simply meant “I need more money,” might see many pathways to getting it, including landing a big customer. (After all, revenue is the cheapest form of cash.) This often leads ESOs to prioritize problems that match their closest capabilities, or the initiatives most likely to be funded by government or philanthropic grants. Having entrepreneurs at the table directly is critically important, because they see the hairiest and most difficult problems first–and those are precisely the problems it takes a big group of stakeholders to solve.

Finally, I have seen folks ask a number of times where philanthropy fits into the model. The reality is that I’m not sure. My initial reaction is that most philanthropic organizations have a very clear strategic reason for funding work happening in ecosystems–their theory of change should make it clear which stakeholder views they represent. For example, a community foundation might act like a “government” stakeholder, while a funder of anti-poverty work who sees workforce development as part of their theory of change is quite clearly part of the “W” group. But not every philanthropy has such a clear view, and in some cases, I think philanthropic funders, especially those in small communities, can think of themselves as a “shadow stakeholder,” standing in for viewpoints that are missing from a conversation. Philanthropy might also play a critical and underappreciated role as a “platform creator.” That is, it might seed the conversation about innovation ecosystems in a community, convene stakeholders for the first time, or fund activities that enable stakeholders to work and learn together, such as planning retreats, learning journeys, or simply buying the coffee or providing the conference room for a recurring meeting. And especially right now, philanthropy has an opportunity to act as an “accelerant,” supporting communities by offering the matching funds that are so critical to their success in leveraging federal funds.

Why is “ecosystem” the right word? 

Innovation ecosystems, like natural systems, are both complex and adaptive. They are complex because they are systems of systems: each stakeholder in an innovation ecosystem is not just one person, but a system of people and institutions with goals, histories, cultures, and personalities. Not surprisingly, these systems of systems are adaptive, because they are highly connected and thus produce unpredictable, ungovernable behavior. It is very, very difficult to predict what will happen in a complex system, and most experts in fields like system dynamics will tell you that a model is never truly finished; it is just “bounded.” In fact, the quality of a systems model is usually judged by how closely its output maps to a “reference mode”–the system’s actual behavior in the past. This means that the best way to tell whether your systems model is any good is to give it past inputs, run it, and see how closely its output compares to what actually happened. If I believe that job creation depends on inflation, the unemployment rate, the availability of venture capital, and the number of computer science majors graduating from a local university, one way to test that is to input those numbers for the past 20 years, run a simulation of how many jobs would be created according to the equations in my model, and see how closely that maps to the actual number of jobs created in my community over the same period. If the lines map closely, you’ve got a good model. If they’re very different, try again with more or different variables. It’s easy to see how this trial-and-error process can end up with an infinitely expanding equation of increasing complexity, which is why the “bounds” of the model are important.
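To make that backtesting loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the drivers, the coefficients, the historical series, and the model form are all invented for illustration (a real system dynamics model would have stocks, flows, and feedback loops rather than one linear equation), but the test is the same: feed the model past inputs and compare its output to the reference mode.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 20-year history of the drivers I believe matter.
# All numbers here are invented for illustration.
years = np.arange(2004, 2024)
inflation = rng.normal(0.025, 0.010, size=20)     # annual inflation rate
unemployment = rng.normal(0.050, 0.015, size=20)  # local unemployment rate
venture_capital = np.linspace(50, 200, 20)        # $M deployed locally per year
cs_grads = np.linspace(300, 900, 20)              # CS graduates per year

# The "reference mode": jobs actually created each year. Invented here;
# in practice this comes from historical employment data.
actual_jobs = (1200 + 4.0 * venture_capital + 1.5 * cs_grads
               - 20_000 * unemployment + rng.normal(0, 150, size=20))

def simulate_jobs(inflation, unemployment, vc, grads,
                  b0=1000.0, b_inf=-2000.0, b_unemp=-18_000.0,
                  b_vc=4.0, b_grad=1.5):
    """One candidate model: jobs created as a function of the four drivers."""
    return (b0 + b_inf * inflation + b_unemp * unemployment
            + b_vc * vc + b_grad * grads)

simulated = simulate_jobs(inflation, unemployment, venture_capital, cs_grads)

# Judge the model by how closely its output tracks the reference mode.
rmse = np.sqrt(np.mean((simulated - actual_jobs) ** 2))
print(f"RMSE against historical job creation: {rmse:,.0f} jobs/year")
# A small error suggests the model reproduces history; a large one says
# "try again, with more or different variables."
```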

Finally, complex, adaptive systems are, as my friend and George Mason University Professor Dr. Phil Auerswald says, “self-organizing and robust to intervention.” That is to say, it is nearly impossible to predict a linear outcome (or whether there will be any outcome at all) based on just a couple of variables. This means that the simple equation (money in = jobs out) is wrong. To better understand the impact of a complex, adaptive system requires mapping the whole system and understanding how many different variables change cyclically and in relation to each other over a long period of time. It also requires understanding the stochastic nature of each variable. That is a very math-y way of saying it requires understanding the precise way in which each variable is unpredictable, or the shape of its bell curve.

All of this is to say that understanding and evaluating innovation ecosystems requires an entirely different approach than the linear “jobs created = companies started × Moretti multiplier” assumptions of the past.

So how do you know if ecosystems are growing or succeeding if the number of jobs created doesn’t matter? 

The point of injecting complexity thinking into our view of ecosystems is not to create a sense of hopelessness. Complex things can be understood–they are not inherently chaotic. But trying to understand these ecosystems through traditional outputs and outcomes is not the right approach since those outputs and outcomes are so unpredictable in the context of a complex system. We need to think differently about what and how we measure to demonstrate success. The simplest and most reliable thing to measure in this situation then becomes the capacities of the stakeholders themselves, and the richness or quality of the connections between them. This is a topic we’ll dive into further in future posts.
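As a preview of what that can look like in practice, here is a minimal sketch in Python. The stakeholder groups come from the model above, but the ties are entirely hypothetical; the point is only that connection richness, unlike job creation in a complex system, is directly observable and comparable over time.

```python
from itertools import combinations

# The six stakeholder groups from the model discussed above.
stakeholders = ["entrepreneurs", "capital", "research", "workforce",
                "government", "corporations"]

# Hypothetical ties: each pair listed has at least one active, working
# relationship (a joint program, a board seat, sponsored research, etc.).
ties = {
    ("entrepreneurs", "capital"),
    ("entrepreneurs", "research"),
    ("government", "workforce"),
    ("corporations", "research"),
}

def density(nodes, edges):
    """Share of possible pairwise connections that actually exist."""
    pairs = list(combinations(sorted(nodes), 2))
    observed = sum(1 for p in pairs
                   if p in edges or tuple(reversed(p)) in edges)
    return observed / len(pairs)

print(f"Connection density: {density(stakeholders, ties):.0%}")
# 4 of 15 possible pairs are connected (~27%). Tracking this share, and the
# quality of each tie, over time is one simple proxy for ecosystem health.
```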

A Profile of Defense Science & Tech Spending

Annual spending on defense science and technology has “grown substantially” over the past four decades, from $2.3 billion in FY1978 to $13.4 billion in FY2018, or by nearly 90% in constant dollars, according to a new report from the Congressional Research Service.

Defense science and technology refers to the early stages of military research and development, including basic research (known by its budget code 6.1), applied research (6.2) and advanced technology development (6.3).

“While there is little direct opposition to Defense S&T spending in its own right,” the CRS report says, “there is intense competition for available dollars in the appropriations process,” such that sustained R&D spending is never guaranteed.

Still, “some have questioned the effectiveness of defense investments in R&D.”

CRS takes note of a 2012 article published by the Center for American Progress, which argued that military spending was an inefficient way to spur innovation and that the growing sophistication of military technology was poorly suited to meet some low-tech threats such as improvised explosive devices (IEDs) in Iraq and Afghanistan (as discussed in an earlier article in the Bulletin of the Atomic Scientists).

The new CRS report presents an overview of the defense science and tech budget, its role in national defense, and questions about its proper size and proportion. See Defense Science and Technology Funding, February 21, 2018.

Other new and updated reports from the Congressional Research Service include the following.

Armed Conflict in Syria: Overview and U.S. Response, updated February 16, 2018

Jordan: Background and U.S. Relations, updated February 16, 2018

Bahrain: Reform, Security, and U.S. Policy, updated February 15, 2018

Potential Options for Electric Power Resiliency in the U.S. Virgin Islands, February 14, 2018

U.S. Manufacturing in International Perspective, updated February 21, 2018

Methane and Other Air Pollution Issues in Natural Gas Systems, updated February 15, 2018

Where Can Corporations Be Sued for Patent Infringement? Part I, CRS Legal Sidebar, February 20, 2018

How Broad A Shield? A Brief Overview of Section 230 of the Communications Decency Act, CRS Legal Sidebar, February 21, 2018

Russians Indicted for Online Election Trolling, CRS Legal Sidebar, February 21, 2018

Hunting and Fishing on Federal Lands and Waters: Overview and Issues for Congress, February 14, 2018