Emerging Technology
day one project

Protecting America’s S&T Ecosystem

03.12.26 | 36 min read | Text by Cole Donovan

Over the past eighty years, the United States has maintained both economic and military primacy thanks to our technological superiority. Other countries have sought to replicate this advantage, some of which (particularly the People’s Republic of China, Russia, Iran, and North Korea) have an interest in replacing the United States as the primary global power, including through coercive and clandestine measures.

Current efforts are centered on National Security Presidential Memorandum 33 (NSPM-33). The memorandum was designed to deal with concerns related to a very specific problem: state-motivated concealment of ties with an adversarial military in activities at universities that were sponsored by the U.S. government. Obviously, concealment and fraud cannot and should not be rewarded in the U.S. system.

Over time, the link between concealment and espionage has proven difficult to establish in court, the research community writ large has (consequently) remained resistant to change, and the PRC has amplified the impact of U.S. research security measures and prosecutorial failures to drive a wedge between the government and members of the academic and Asian American communities.

It is prudent to recalibrate U.S. research security efforts to 1) maintain America’s ability to participate in global discovery by aligning research security efforts with associated risk of specific research activities and 2) create a clearer system for the identification of national security information (NSI) so as to enable protective measures, like the Espionage Act, to do their job.  The status quo, which relies on the notion that information must remain in the open and be widely shared while somehow remaining out of reach of our adversaries, must be rationalized.

It is in the interests of the United States to appropriately protect information that needs to be protected while maintaining our participation in new discoveries to preserve our competitive advantage. Current efforts, which are focused on university faculty and partnerships, should be rebalanced to focus on risky technologies and to ensure that the source of most patented or sensitive technologies – the private sector – is adequately protected. Our current efforts are costly, excluding talented researchers with global connections from participating in the science and technology ecosystem and cutting the United States out of global discovery.

Challenge and Opportunity 

There are a number of key challenges that a more effective research security regime will need to address:

NSPM-33 protects the government’s money from people, but it does not protect research or ideas from foreign appropriation

Ideas are the fundamental currency of technology competition. When a researcher or team of researchers has an idea, they frequently turn to their government for the financial backing necessary to nurture it. Under the NSPM-33 common forms requirement, the government’s evaluation of the proposal for funding includes an analysis of any conflicts of interest or commitment that individuals on the research team might have with an adversarial foreign government.

The research agency is expected to decline the funding opportunity if an applicant for funding has an ongoing or recent relationship with institutions that are backed by adversarial governments or military organizations (among other potential conflicts).  In some circumstances, particularly when an individual or team intentionally misrepresents their relationships with adversarial governments, the federal agency may seek additional administrative remedies, including suspension and debarment.  Similar processes are in place in government agencies that operate cooperative user facilities, primarily our national laboratories. Lying is bad, and it goes without saying that lying to conceal a relationship with an adversarial government is also bad.

Unfortunately, the story doesn’t end there. Because the idea is still wholly in the possession of the individual who applied for a federal grant, the individual or team is now free (and even incentivized) to seek alternative sources of funding. The fact that the government declined to support their work based on research security concerns as opposed to merit could validate the researcher’s belief that they have a meritorious idea worth pursuing elsewhere. A rare unscrupulous researcher could use that knowledge to determine that the idea may be of interest to foreign militaries and actively look to adversarial governments for support. In effect, the government’s review of a person’s institutional affiliations, followed by a decision to exclude them from U.S. government funding and subsequent revocations of visas or permanent residency, increases the probability of hostile foreign acquisition of talent and technology.

This conclusion is not based on speculation.

We’ve Been Here Before

History shows that strategic catastrophes follow when the government shuts people out of the American R&D ecosystem. In the 1950s, the United States detained and deported Qian Xuesen, co-founder of our Jet Propulsion Laboratory, based on fear that he supported the Chinese Communist Party. He was tapped to build the PRC’s ballistic missile and space programs, creating the strategic situation we find ourselves in today. Qian was so angry at the United States government that he later refused to take part in activities celebrating the normalization of relations. The United States made a similar error with Erdal Arıkan, the father of 5G wireless technology’s polar codes, albeit not out of concern for his affiliations. A lack of U.S. government support for long-term theoretical mathematics research led Arıkan to return from his positions in the U.S. to Turkey. Some years later, he was approached by Huawei, which quickly assimilated his work and became the world’s leading supplier of wireless technology. Subsequent government efforts to mitigate Huawei’s competitive advantage undoubtedly cost the U.S. taxpayer more than adequate, tailored funding for Arıkan’s career (and those of many other researchers) would have – funding that would have kept his work in an environment where it was more likely to be acquired by U.S. technology companies, with less risk to our national security and the security of our international partners.

I have been briefed on other examples.

Funding Cuts Compound the Talent Crisis & Encourage Exodus

The Executive Branch’s massive funding cuts for research make the current situation even more problematic, creating compounding incentives for talent to leave the country. Other countries have become better at attracting and retaining talent in recent years, eroding the longstanding U.S. competitive advantage in talent attraction and retention. This has been particularly notable in fields like artificial intelligence, where highly productive individuals have made significant contributions to the development of AI models like DeepSeek. The British-based talent intelligence firm Zeki Data points to the strengthening of markets for talent in Europe, the Persian Gulf, India, and China as additional reasons for a decline in U.S. AI talent attraction. Recent surveys suggest that as much as seventy-five percent of the scientific community is looking for opportunities overseas. Notable recent departures from the United States due to green card denials, federal funding challenges, and federal layoffs include ChatGPT developer Kai Chen (moved to Canada), spaceflight safety expert Jonathan McDowell (moved to the UK), and carbon capture expert Yi Shouliang (moved to China). Neuroscientist Ardem Patapoutian was offered 20 years of funding in China in “any city, any university” after being cut off from NIH funding as part of the Trump Administration’s recent cuts, earning himself a mention during Marcia McNutt’s State of Science address at the National Academies.

Result: Innovation Increasing Elsewhere, Excluding Americans

When we assume that genius stems primarily from the U.S. system, and in particular from government funding, we are surprised when innovation happens somewhere else. The government’s principal method for controlling ideas and innovation applies only after a legal relationship between the researcher, their home institution, and the U.S. government is established. For advanced technology development, the overwhelming majority of which is funded by the private sector, we are principally reliant on export controls (which have a dismal history of successfully limiting knowledge and advanced technology transfer).

Science agencies aren’t adequately equipped to mitigate harm (and some NSI probably isn’t appropriately identified or protected)

For decades, National Security Decision Directive 189 (NSDD-189) was the primary governing document for research security measures. When it was released at the height of U.S.-Soviet tensions during the Reagan Administration, the academic community focused primarily on one line–”It is the policy of this Administration that, to the maximum extent possible, the products of fundamental research remain unrestricted.” NSDD-189 was meant to cover only a subset of research – fundamental research – defined in the document as “basic and applied research in science and engineering, the results of which ordinarily are published and shared broadly within the scientific community, as distinguished from proprietary research and from industrial development, design, production, and product utilization, the results of which ordinarily are restricted for proprietary or national security reasons.” On balance, the overwhelming majority of U.S. research, by both performance and expenditure, is not covered by NSDD-189; it is instead conducted by industry and the defense sector (which are covered later).

Understanding When Controls Must Be Used

Relatively little attention has been paid to NSDD-189’s second policy idea, which emphasizes that “where the national security requires control, the mechanism for control of information generated during federally-funded fundamental research in science, technology and engineering at colleges, universities and laboratories is classification.”  Proper identification of potentially classifiable information and NSI is essential for the Justice Department and Intelligence Community, who wish to protect such research under national security statutes.  The policy also acknowledges there may be alternative applicable statutes, like export control regimes, which have generally remained consistent with NSDD-189 (more on that later).  

The classification system remains the most straightforward way to give our security community the tools they need to protect America’s critical national assets.  This mirrors the findings of the 2019 JASON report on Fundamental Research Security, which argued that “Making the case, for classification reasons, that a new technology might be of national security value is far simpler than assessing its potential economic impact, even if economic security is equated in some way with national security.”

To fully appreciate what this means, one needs to understand when and why information should be classified. Classification is governed by Executive Order 13526 (E.O. 13526), which sets out the processes by which agencies apply national security classification protections. The lowest level of classification, which is termed “confidential,” is defined as information that “the unauthorized disclosure of which reasonably could be expected to cause damage to the national security that the original classification authority is able to identify or describe.” The bar for identifying and labeling “confidential” classified information is relatively low, and yet it is one that many science agencies lack the authority to implement.

Whether information should be treated as classified or unclassified is the responsibility of an Original Classification Authority (OCA).  These authorities are a short list of senior government officials authorized by the President to review information and determine whether or not it is classified.  Agencies that do not have representatives on the list are limited to being able to label information that is derived from other classified sources.  To determine whether new information must be classified, individuals from other agencies must depend on determinations made by individuals granted OCA or delegated classification authorities.

Classification Constraints for NSF, NIH, others

The trouble begins when we realize that extramural funding agencies like the National Science Foundation (NSF) and National Institutes of Health (NIH) have no officials possessing OCA at the Top Secret (or even Secret) level, making officials in those agencies entirely dependent on members of the security community and White House to make such determinations. A broad lack of access to classified facilities limits the access that program managers have to relevant intelligence sources.  Many NSF officials, for instance, need to travel offsite to engage with classified materials.  

As a result, I have been told that classification reviews of new and novel information produced by these agencies almost never happen. If a researcher or program officer believes that the information that is produced in the course of research may damage national security and reports their concern to their funder in accordance with the terms of their grant (as is the case for NSF), the research agency lacks the authority to evaluate the information and defend their determination. Put another way, the research agency also lacks the authority to determine when edge-case information it produces should remain unclassified.

E.O. 13526 anticipates this and allows for the referral of information to original classification authorities at other agencies. Again, the need to refer elsewhere means that such reviews almost never happen, except in those agencies which already possess the ability to identify and classify information (such as the Department of Defense or Department of Energy). Things get more absurd: one of the co-chairs of the Research Security Subcommittee of the National Science and Technology Council was not able to read relevant research security-related intelligence products due to their reduced level of access. This had the effect of limiting the materials that could be discussed by the subcommittee.

Science Agencies Placed in a Bind

When an officer from a science agency or some other official makes the claim that the research they support could harm U.S. national security if it is improperly disclosed, the question “why wasn’t it classified?” must immediately follow. A lack of original or derived classification authority means that the individual in question is not empowered to make the initial assessment of whether or not the information needs to be protected through the classification system. This effectively places them in conflict with the substance of E.O. 13526 and unable to fully implement NSDD-189.

Defenders of the status quo would correctly point out that there are few examples of harm caused by the unauthorized disclosure of early-stage research. Even so, especially for agencies like the NSF or NIH, the inability to make a determination, paired with more recent Congressionally-directed emphasis on later-stage research (under organizations like the NSF Technology and Innovation Partnerships Directorate, ARPA-H, and other longer-standing programs involving gain-of-function research), increases the likelihood that later-stage research that could have national security implications will not be appropriately identified and protected. The expansion of research portfolios into applications must be accompanied by policies and a readiness to accept the resulting changes in responsibility.

The definition of basic and fundamental research has lost meaning over time and must be reestablished

During my time as Assistant Director for Research Security in the Office of Science and Technology Policy, security agencies would routinely point out that a fair number of basic research projects have fairly easily describable national security implications. I would agree with them and then take this one step further–a great deal of research that is categorized as basic research by science agencies fails to satisfy the statutory definition of basic research established for the Department of Defense.  That definition is as follows:

Basic research is a systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and of observable facts without specific applications towards processes or products in mind.  It includes all scientific study and experimentation directed toward increasing fundamental knowledge and understanding in those fields of the physical, engineering, environmental, and life sciences related to long-term national security needs. It is farsighted high payoff research that provides the basis for technological progress.

Similarly, the Federal Acquisition Regulations define the term as follows:

Basic research means that research directed toward increasing knowledge in science. The primary aim of basic research is a fuller knowledge or understanding of the subject under study, rather than any practical application of that knowledge.

The Export Administration Regulations (EAR) create two definitions to help establish what is considered fundamental research:

Fundamental research means research in science, engineering, or mathematics, the results of which ordinarily are published and shared broadly within the research community, and for which the researchers have not accepted restrictions for proprietary or national security reasons.

It is not considered fundamental research when there are restrictions placed on the outcome of the research or restrictions on methods used during the research. Proprietary research, industrial development, design, production, and product utilization the results of which are restricted and government funded research that specifically restricts the outcome for national security reasons are not considered fundamental research.

A fair number of defense research activities (including research supporting computing or hypersonics, as well as almost all “use-inspired research”) do not meet the statutory definition of basic research, but are often categorized as such despite the fact that the name of the research or research field describes the applications or processes in mind. In the decades since NSDD-189, agencies have expanded on this definition to make it inclusive of use-inspired research, muddying the waters.

For a final level-set, classified information and controlled unclassified information (CUI) cannot be defined as fundamental research in the eyes of the government, as the act of controlling the information places the work outside the EAR’s fundamental research definition. This should be fairly straightforward given the fundamental incompatibility between the two definitions.

Applied Research Can Also Suffocate in a Closed System

Individuals who defend applied research activities being primarily handled in the open often point out that the overwhelming majority of these activities need to happen in the open in order to maintain technological competitiveness, pointing to the success and strength of open source technology platforms. I would agree and go one step farther: the vast majority of applied research would be worse off in a restricted environment, and there are many good reasons why NSDD-189 included applied research in its definition of fundamental research. I would also point out that the strength of the EAR’s definition is its acknowledgement that as soon as one feels the need to protect information from misappropriation, that information cannot and should not be managed in the open. It is incoherent to say that information needs to be in the open but also remain inaccessible to the Chinese government–the two notions are mutually exclusive. To quote the JASON report on Fundamental Research Security, “The fundamental research exemption is based on the idea that the general nature of the knowledge produced in fundamental research cannot be controlled.”

Gold Standard Science and Industrial Diffusion Require Openness

While it is true that basic and fundamental research provide the basis for information that could eventually be used to support the development of sensitive and national security-relevant technologies, expanding the “grey space” of risky information to include almost all basic research (as the Department of Energy and other departments are apparently now doing, as reported by some university officials) risks information paralysis. After all, literature review is an important first step in the scientific method, and shielding information from external eyes prevents research from being scrutinized in a way that produces new questions or ideas, not to mention undermining the entire system of scientific rigor. While basic research may be used to develop security-relevant technologies, it is not necessarily possible to derive security-relevant technologies entirely from individual discoveries. Getting information into the hands of U.S. industrial players becomes much more difficult once it is restricted. This is especially true for startups, small businesses, and smaller universities that cannot afford to sponsor vetting all of their employees at the level of “public trust” or higher in order to gain access to protected U.S. government-sponsored discoveries, creating consequences for American innovation, competition, and participation.

All of this is more difficult in a world where artificial intelligence can aggregate and synthesize information more rapidly than a team of graduate research assistants. Such risks must be accounted for. Still, it makes more sense to establish a strong governance framework around AI use cases rather than reordering how we govern the rest of society after every technological breakthrough.  

Need to Limit the Grey Space

It is true that a great deal of research exists in a grey space where it needs to happen in the open in order to advance despite the fact that it may cause harm. Decisions to allow research that could cause harm to remain in the open should reflect the careful consideration of government program managers paired with unambiguous instructions to award recipients. It is within the capacity of the government (and not within the natural capabilities of our universities, which have an incentive to seek the least restrictive framework possible) to determine and assess whether information might present a threat to national security and to require that defensive measures be implemented accordingly.

Muddying the waters around which research needs to be protected and which research does not has real world consequences for U.S. universities and non-governmental research organizations. When faced with ambiguity about whether or not they should be engaged in a particular research activity, university administrators frequently take the most cautious approach possible to protect themselves from potential liability and reputational damage. Talented researchers, on the other hand, retain the ability to do their work elsewhere if an environment becomes too restrictive.

Securitization has significant costs (including to American leadership)

Later in his life, Richard Feynman recounted a story about the impact of compartmentalization on the Manhattan Project.  Uranium-235 separation happened at Oak Ridge National Laboratory while the research underpinning that production took place at Los Alamos. Because the researchers at Oak Ridge didn’t have the fundamental understanding of nuclear physics necessary to troubleshoot their work, Uranium-235 production ran into numerous headwinds until Feynman and others managed to convince the government of the necessity of sharing knowledge with the Oak Ridge team.  Once the knowledge was shared, production went back on track.  Failure to share the information with other scientists risked a major accident at Oak Ridge, not to mention an end to the Manhattan Project.  In today’s environment, where incentives are structured in such a way that critical decisionmakers frequently sit on research security-related decisions rather than subjecting themselves to internal or Congressional scrutiny, I am less confident that someone like Feynman would reach a similar outcome.

Measuring Direct and Indirect Costs

It is relatively easy to measure the financial costs associated with research security measures. It is much more difficult to evaluate their effectiveness. The fact that there have been few publicized significant examples of national security harm resulting from the sharing of government-sponsored basic research makes having a serious discussion about the efficacy of our existing research security measures outside of classified environments almost impossible.

Costs include increased administrative burden, support staff, defensive cyber systems, facility access controls, and more. These costs cannot be easily accounted for in an individual grant’s direct costs, so we can safely assume that decreases in support for indirect costs under the current administration will result in the cost of research security programs mandated by the CHIPS and Science Act placing additional pressure on other scientific activities. Research security compliance costs are overwhelmingly treated by the government as mandatory expenditures, while (confoundingly) the cost of operating a world-class laboratory is not (except in instances where a grant is explicitly for the development and maintenance of advanced scientific infrastructure).

Costs to American Competitiveness

While we can measure the economic impact of fewer students and scholars in the United States, it is more difficult to measure the cost of the United States being absent from international opportunities. Proposed legislation like the SAFE Research Act, while intended to prevent U.S. government funding from supporting research involving adversarial nations, could effectively force U.S. researchers to abandon high-potential discoveries produced by large and diverse international research consortia if researchers from adversarial countries are part of the team. This would give the PRC veto power over the involvement of U.S. government-supported individuals in international research consortia–an alarming consequence, especially when one considers the Australian Strategic Policy Institute’s recent finding that Chinese research institutions dominate 57 out of 64 critical technology fields and the fact that some international projects require the work of thousands of scientists from dozens of countries. In fields like space research and development, the PRC’s domestic talent base is sufficient to maintain competitiveness while they actively test capabilities where the United States is years away from deployment (and might beat the United States back to the moon).

The absence of U.S. researchers in such consortia does not mean that the research does not happen–it merely means that the consortium needs to find other partners who are able to do the work. The fact that the infrastructure supporting such discoveries is in many cases located only in Europe and Asia (thanks in part to decades of U.S. underinvestment in research and development infrastructure) creates additional incentive for American researchers to travel and work abroad, including in adversarial countries, in order to maintain access to unique capabilities, advance their careers, and test new technologies.

This problem is particularly acute in fusion energy research and development, where U.S. companies maintain a technological advantage but lack the domestic extreme-radiation-environment testing infrastructure needed to develop critical components for long-term operation (the inner walls of the reactor, or blanket).

A protect-oriented research security framework makes sense if the United States is safely in the lead and competitors are decades away from catching up. But in a more competitive environment, when other countries become more productive and competitive, decisions that force the United States to abandon collaborative efforts are much more consequential. Given the size of the U.S. population relative to that of our competitors, this limits the ability of the United States to unilaterally define global rules governing the conduct of research.

Taken together with my previous point about the need to re-establish the definition of basic and fundamental research as a bright-line test and to give science agencies greater ability to control information, I’m placing an awful lot of responsibility on the backs of federal research managers. Having spent most of my professional life in the civil service as a clearance-holder, I would argue that managing such weighty decisions is exactly what federal employees are paid to do. In the same talk mentioned earlier, Feynman described the admiration he had for the speed with which members of the security establishment could make decisions upon which the fate of the nation rests. If we abandon the flexibility of officers to give informed answers to our research apparatus promptly, we do so at our own peril.

We know dangerously little about what is happening in Chinese laboratories, and fear is getting in the way of dealing with our knowledge gaps and urgent security challenges

As a former CIA colleague recently briefed Congressional staff, it is very difficult for trained intelligence officers from both the United States and the PRC to successfully infiltrate the academic environment due to the level of academic training and fluency in domain-specific technical terms and practice required. When the United States occupies a position of preeminence, and the primary sources of technical knowledge originate in American laboratories, it makes sense to limit partnerships with individuals and organizations who have competing interests. Unfortunately, the United States can no longer count on being in the lead in many scientific domains. Our ability to avoid technological surprise depends, in part, on knowing what’s going on in Chinese labs.

The research security push over the past decade makes this much more difficult. The growing absence of routine technical interactions with Chinese counterparts means that our scientists and engineers have limited insight into the PRC’s work. Research security requirements emphasize restricting interactions with the PRC’s Seven Sons of National Defense, which also happen to be the PRC’s top performers in science and technology discovery. Without knowledge of what is happening in these institutions, or the ability of our scientific community to ground-truth intelligence assessments (which are frequently compiled by non-experts), we increase the probability that we will not have the immediate knowledge necessary to replicate such discoveries or, worse, will fall victim to technological surprise.

Degraded Capacity Creates Larger Competitiveness Risks

The challenge is compounded when we consider the degradation of our own technical capacity. In fields like nuclear fusion and radio astronomy, the United States simply does not have the laboratory capabilities that are necessary for our scientists to remain at the cutting edge. In 2023, the Fusion Energy Science Advisory Committee found that “rapid progress toward commercial fusion power will likely rely in part on research at existing or near-term international facilities that provide capabilities presently unavailable in the U.S., such as long-pulse magnetic confinement” – an area where the PRC has invested billions of dollars in recent years. Decoupling and fear of information transfer have also limited our ability to communicate with the PRC in areas like spaceflight safety, where we generate dozens of conjunction warnings every day involving potential near-misses with Chinese satellites. The consequences of a collision cascade resulting from non-communication or inefficient communication with PRC satellite operators include the loss of access to particular orbits, significant damage to U.S. and allied commercial and security assets, and potentially unlimited liabilities for the United States under the Outer Space Treaty. Such a disaster would have dire consequences for our economy and space-dependent national security establishment.

This creates a Catch-22 for American scientists and engineers who want to see the United States, and their fields, advance.  On the one hand, testing new technologies on foreign platforms creates substantial risk of foreign appropriation, similar to the risks experienced by American companies operating in China since normalization.  On the other, the lack of access to comparable domestic or allied capabilities freezes the ability of U.S. commercial interests to keep pace with foreign competitors. 

Increased Departures, Stunted Discovery Process 

The U.S. government, likewise, cannot count on individuals who have devoted their lives to a particular field of inquiry to wait for the government to invest in these capabilities over the next five to ten years (an overly optimistic projection for the budget process alone). History shows us that brain drain in such situations is inevitable.  Absent new and significant U.S. government investment, America’s ability to remain competitive will increasingly depend in part on our access to PRC-derived knowledge sources.  A research security regime that unnecessarily restricts U.S. collaborations based on institutional affiliation or connections, rather than on domains where the risk of loss exceeds potential benefits, is more likely than not to limit our insight into new knowledge early in the discovery process and throttle U.S. technological development in a way that will make it more difficult to compete in the future.

Research Integrity Has Become Securitized, Confusing Our Approach to Domestic Challenges

Research integrity and research security are interlinked, and it is absolutely true that when contracts developed by a foreign government encourage researchers to lie on federal grant applications, that is a major cause for concern and demands a national-level response.  But no matter how egregious such behavior is, and no matter how proximate violations of the False Claims Act are to violations of the Espionage Act, they are different parts of the criminal code.  If the government wishes to bring a case before the courts where an individual has both lied to the government on official documents and provided NSI to a foreign government, it is within the power of a prosecutor to do so and to seek appropriate penalties.

On the domestic side, organizations such as Retraction Watch have justifiably brought attention to research misconduct over the past ten years.  There are significant examples of senior researchers having engaged in alleged violations of research integrity, including 13 retractions from a Nobel laureate.  Many of these cases involve the knowing manipulation or falsification of data.  I would argue that such cases do more to damage the integrity of the U.S. research ecosystem than individuals omitting information about their affiliations in grant applications, primarily because they introduce falsehoods into the scientific corpus, exacerbate the “replication crisis” in biology, psychology, and behavioral economics, and poison the scientific establishment’s credibility.

In several recent domestic misconduct cases, the researchers in question were able to return to their laboratories, continue their day jobs, and even found unicorn technology startups.  On the other hand, undisclosed affiliations, patents, or grant support have resulted in the termination of 119 scientists (almost half of those investigated) by the National Institutes of Health as part of its foreign interference efforts.  Some researchers have been denied access to funding despite no credible link to foreign talent programs or other similarly concerning affiliations.  A reasonable observer would be left to wonder why an undisclosed patent or grant support is a greater violation than the intentional falsification of research results.  The answer may seem obvious: in one instance, we have a nexus with a foreign government; in the other, we do not.  

As former Assistant Attorney General Matt Olsen noted when the Department of Justice drew down the China Initiative in 2022, “by grouping cases under the China Initiative rubric, we helped give rise to a harmful perception that the department applies a lower standard to investigate and prosecute criminal conduct related to that country or that we in some way view people with racial, ethnic or familial ties to China differently.”  The same must be true of administrative actions related to academic misconduct, especially as countersuits undermine the government’s earlier claims.  

Proper NSI Identification Removes Ambiguity, Releases Burden

This is why it is important that national security-sensitive research be clearly identified as NSI through the classification system.  Once information has been appropriately classified, who gets to handle the information becomes just as important as how the information is handled.  Individuals who handle classified information are expected to report their contacts, their affiliations with foreign governments, and all other information necessary to maintain the public trust as a matter of national security.  Violations related to improper handling may be appropriately elevated under the Espionage Act and successfully prosecuted in court.  Our current attempts to substitute administrative penalties for judicial action create a culture of fear and suspicion in our university establishment, rhyming with some of the darkest periods in the history of our nation’s research and development enterprise.  That culture of fear, compounded by a lack of clarity about what needs to be protected, dramatically increases the probability that universities will improperly exclude individuals from certain racial or ethnic backgrounds from activities that have little relevance to national security.

Plan of Action 

Recommendation 1.  Congress should, with the support of the Administration, reinvest in American research and innovation, including in foreign talent attraction.

The United States is no longer in a position where American technological superiority is assured; the fact that the PRC is leading in publications in many critical and emerging technology fields, and in the deployment of advanced technologies in certain domains, should be an immediate cause for concern.  Likewise, anticipated declines in foreign talent enrollment and shrinking PhD programs at major U.S. universities in response to federal policy actions should be a significant cause for concern. In 2024, the Defense Department reported that “unfunded research, development, test, and evaluation (RDT&E) infrastructure requirements were shown to have grown significantly since annual reporting began in 2018, putting the military at risk of losing its technological superiority.”

Funding also provides the government with the leverage needed to protect American interests.  The Department of Defense’s use of contracts to protect U.S. security interests is already common practice, most famously used recently to ensure that SpaceX’s Starlink would not cut service in Ukraine. By contrast, the government currently cannot classify, restrict, or otherwise control information that is produced outside the federal research ecosystem. Even if an idea may make its way back to a foreign government or adversarial military, it is far better for the United States to have participated in the development of that idea (and have the chance to exploit relevant intellectual property) than to surrender the innovation, in total, to a foreign power, robbing us of the chance to compete. Sometimes, to control the dissemination of an idea, the government must invest in it. That means the government should actively seek to invest in projects in order to participate in discovery and gain access to its benefits, even when our partners might be less than ideal or when there is risk that the information could also be used by a hostile foreign power.

The government can and should create terms in its funding mechanisms that enable it to mitigate risk where necessary, and that give it graduated negotiating flexibility in circumstances that may require enhanced measures (like encouraging researchers to sever ties with adversarial entities like the PRC).  Such measures should not be mandatory, as is the case with the SAFE Research Act, which effectively gives the PRC the power to veto U.S. participation in large multinational consortia if individual PRC researchers become affiliated with an effort, thereby reducing the ability of the United States to set terms for increasingly multilateral scientific activities.

Recommendation 2. The Office of Management and Budget (OMB) should require rigorous and independent cost-benefit analysis of existing research security efforts before approving new research security-related regulations and requirements as part of the Paperwork Reduction Act review process.  Congress should exercise similar care before imposing similar requirements on academic institutions given the significant costs associated with research security programs.

The use of grants to impose research security requirements that alter an organization’s behavior at the institutional level is, in effect, de facto regulatory action with significant economic impact (as defined in Executive Order 12866).  Because attesting that an institution has a research security program that meets the requirements of a granting agency requires significant economic expenditure across many academic institutions, new research security requirements should be subject to the same cost-benefit analysis as other significant regulatory actions.  Paired with declining support for indirect costs and limited public evidence that disclosures of federally-supported basic research have harmed U.S. national security interests, there is a financial imperative to ensure that research security programs are aligned with actual instances of observed harm to national security and the unintended or illicit transfer of national security information (NSI).

Consistent with recommendations from the recent National Academies report on simplifying research regulations and policies, existing research security efforts should be recalibrated to mitigate risk of loss as opposed to blanket bans on interactions as a result of institutional affiliation.  Efforts that result in reduced American participation in global discovery around critical technologies directly cut against our country’s technological competitiveness and should be retired and rescinded, as the most likely outcome of such measures is not the protection of information, but rather the inability of the United States to develop and deploy new and novel capabilities.

What could the costs for expanded research security programs look like, especially given that some agencies are allegedly treating basic research activities as controlled unclassified information (CUI)?  As noted in the JASON report on Safeguarding the Research Enterprise:

“The supporting apparatus for access controls would impose significant cost on the conduct of research and reduce research funding efficiency. JASON received from NSF cost estimates for what the University of Oklahoma has spent to support such work, for example. A warehouse-type building for CUI experiments was estimated to have cost $2M, and a new office building with access control adequate for classified work cost $7M. Building construction costs are only about 10–20% of their life-cycle ownership costs, translating to roughly $1–2M per year for both buildings. Required security and compliance staff add cost of four full-time equivalent personnel, equating to another $1M per year. Thus, a medium to large ($1–3M/year) research program might incur security costs around $1–3M per year above the baseline research cost, roughly doubling the cost of carrying out that research. This would constitute a serious loss of research efficiency. Slowing research by half could easily allow countries like the People’s Republic of China (PRC) to pull ahead in strategic fundamental research areas.”
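The JASON arithmetic can be checked directly.  A minimal sketch, using only midpoints of the figures quoted in the report (all values are illustrative, not an authoritative cost model):

```python
# Back-of-the-envelope check of the JASON cost figures quoted above.
# All numbers are illustrative midpoints drawn from the quote itself.

annual_ownership = 1.5e6   # midpoint of the ~$1-2M/year life-cycle cost for both buildings
annual_staff = 1.0e6       # four FTE security and compliance personnel

annual_security_cost = annual_ownership + annual_staff   # ~$2.5M/year in overhead

baseline_program = 2.0e6   # midpoint of a "medium to large ($1-3M/year)" research program
cost_multiplier = (baseline_program + annual_security_cost) / baseline_program

print(f"Security overhead: ${annual_security_cost / 1e6:.1f}M/year")
print(f"Total cost multiplier: {cost_multiplier:.2f}x")  # ~2.25x, i.e. roughly doubling
```

Under these midpoint assumptions, the security apparatus alone adds roughly $2.5M per year against a $2M per year program, consistent with the report's conclusion that such controls would roughly double the cost of carrying out the research.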

Strangling our research enterprise is an unacceptable outcome and must be avoided.  Likewise, expanding research security reviews within agencies would likely require significant expansion of agency personnel available to conduct rigorous national security assessments and cut against the availability of funds to do actual science.  Such a tax on research funding is an additional cost to American competitiveness, reduces the number of grants made to institutions, and serves to further exclude institutions in EPSCoR jurisdictions, HBCUs, and other MSIs, emergent research institutions, and nonprofit laboratories that already face significant resource constraints.  

In light of preliminary assessments of cost to the research enterprise, the Government Accountability Office (GAO, which is within the legislative branch) may wish to also assess the effectiveness of research security programs compared with their cost of implementation, and in particular the impact on programs that do not have a clear NSI nexus.

The Administration and Congress must ask if they are prepared to significantly increase funding for research programs in order to deal with such significant cost inflation due to mandatory research security program requirements.  The situation will only become more dire if the government caps indirect costs at 15 percent, as several science agencies recently attempted, which would place research security costs in even greater conflict with the fundamental resources necessary for an institution to conduct research.  Such security measures are meaningless if the research institution lacks the staffing, facilities, or administrative support necessary to conduct cutting-edge research.  Care should be taken to ensure that federal policy actions attempting to rein in administrative costs do not cut against our ability to operate world-class research institutions.

Recommendation 3. Congress should determine how best to handle research security concerns that originate from research supported by non-governmental entities, and empower industry consortia to take measures to secure their sensitive intellectual property.  Incentivizing participation through contracts, cost deferrals, or reduced administrative burden for protective measures can help.

NSDD-189 warns that “as the emerging government-university-industry partnership in research activities continues to grow, a more significant problem may well develop.”  Contemporaries involved in the drafting of NSDD-189 have told me that the growing role of the private sector in funding university research was of concern to Reagan’s OSTP, and that as industry became a more prominent funder, the government would have limited controls to prevent the transfer of industry-derived information to foreign powers.  Given the increased presence of commercial research laboratories, and more recently focused research organizations, the emphasis of research security efforts on federal funding for universities is inconsistent with the balance of U.S. research performance. 

The majority of research considered proprietary or sensitive (as well as the overwhelming majority of all U.S. research) is produced or supported by American companies.  As a matter of policy, most universities will not accept controls on the research which they produce, appropriately intending for it to enter into the public domain where it can be more rapidly developed and used for practical benefit.  Academia’s role as a producer and broad distributor of knowledge is fundamental to its role in society and must be protected.  Rather than suggesting that most university research is sensitive, when its primary purpose is to be shared, Congress should be willing to enact enforcement mechanisms that strengthen the ability of the government to protect research relevant to defense and industry.

On this point there are no easy answers.  The danger, of course, is that such measures will inherently restrict the competitiveness of American businesses and their ability to participate in international markets.  Strengthening CFIUS can hinder the ability of American companies to secure sources of investment and invite foreign retaliation.  Expanding our use of export controls will create incentives for foreign governments to diversify away from American suppliers and limit the ability of our companies to shape global supply chains.  Restricting foreign countries’ access to U.S. technology will instead create new dependencies on foreign technology sources, creating a new system of incentives that limits the ability of the United States to set global standards and forces countries to make concessions to the PRC in order to maintain access to its resources and technology.  Our efforts to limit the PRC’s access to semiconductors have already convinced the European Union of the need to become less dependent on the United States in other technology areas.  The difficulty the United States has had in managing the spread of technology from Huawei, ByteDance, and BYD should be instructive, as should growing challenges to U.S. influence in multilateral fora.

The most straightforward way to control information produced outside the government is to get knowledge producers on contract and create a system of incentives for companies to participate.  Doing so has two purposes.  First, it allows the government to create a financial incentive for companies to participate in enhanced security measures, for example by offering to defer the costs associated with security clearances and the technology measures needed to provide enhanced security and IP protection.  Second, it creates a legal relationship under which companies can be required to implement enhanced security measures without significant sacrifices to their bottom line.  Legal relationships are more effective than education campaigns, because some producers of high-value technology do not seek government protection due to its significant financial and administrative costs.  Still, some firms may refuse to take government grants or contracts to preserve their organizational flexibility.  That is a feature of American capitalism, not a bug.

Managing these tradeoffs is fundamental to addressing our research security challenges.  If we want to get serious about addressing research security, our greatest efforts need to be directed to research that is higher in the value chain with national security consequences.  Most of that research doesn’t happen in universities.  Policymakers must be prepared to accept the inherent tradeoffs that come with decisions to place restrictions on the American research enterprise.

Recommendation 4. The Executive Branch should grant Original Classification Authority to the heads of extramural granting agencies and more strictly apply the federally-recognized definitions of basic and fundamental research.

As federal extramural funding agencies move toward more technology-relevant solutions and later-stage technology development and deployment, the need for agencies to defend decisions to keep research in the open environment will become much more acute.  While the risks of overclassification are real, they are no greater than in fields like diplomatic engagement or global development (both of which, by definition, require continuous engagement with foreign partners, including those who wish to challenge American interests). The Department of Defense’s and the Intelligence Advanced Research Projects Activity’s (IARPA’s) experience and history of prudent and limited classification determinations should provide some relief.  

This will not be cheap, and agencies will need appropriate resourcing in terms of personnel and capital to manage the workload and specialized technology facilities (Sensitive Compartmented Information Facilities, or SCIFs, and secure work areas) to manage classified information.  

More strictly applying the federal definition of basic research – research conducted “without specific applications or processes in mind” – should help agencies separate activities that have clear industrial, commercial, or defense-related applications from research that is truly foundational.  The task of establishing a “bright line” between research with national security potential and research without it is likely to be significant, but it is likely to be far less costly than passing that cost onto thousands of laboratories around the country to make their own risk management calculations based on less complete information.  NSF has created the SECURE Center to mitigate this, but the solicitation for the Center notes that the Center does not conduct investigations, hold or manage classified information, or assume liability for the consequences of its products.

Agencies will also need to deal with the fact that many universities and other non-governmental research organizations frequently do not accept funding that comes with classification, CUI, or other burdensome requirements. In such instances agencies will need to weigh the value of the research in question and balance that with the risk of foreign appropriation.  In instances where the risk of foreign appropriation is significant, the value of the research is high, and the consequences of the United States not participating in discovery are anticipated to be significant, agencies should be willing to provide supplemental funding to enable the research activity to take place and to devote an appropriate level of oversight to ensure that U.S. involvement in the activity is appropriately managed and consistent with overarching federal interests.

The government might also wish to implement alternative funding measures when there is an interest in participating in discovery while excluding the participation of non-aligned governments.  Using the contract or collaborative agreement systems is probably more appropriate than grants in such circumstances, allowing the government to more expressly dictate terms for collaboration.  The government should be willing to use financial leverage to do this, including through funding for large international research consortia, to counter the efforts of adversarial governments while maintaining the ability of the United States to participate in overseas discovery processes.

Yes, this will result in a culture change in some of our premier research-supporting agencies. I would argue that culture change became necessary immediately after the creation of the NSF TIP Directorate, ARPA-H, and similar organizations.  If we expect these agencies to engage more directly in critical and emerging technology development, especially around technologies which are relevant to great power competition, then increased scrutiny of these and similar programs is necessary to protect our critical national assets.

Recommendation 5. The Executive Branch should move all research integrity efforts under the rubric of Gold Standard Science along with other issues related to academic misconduct.

This is not a lengthy recommendation; our efforts to preserve and protect the integrity of the research enterprise should be managed under a single umbrella where penalties for unethical conduct can be calibrated to the severity of the misconduct.  This is essential for maintaining buy-in within the research community around both research security and research integrity measures.  Invoking Justice Holmes, if we take the view of an unscrupulous researcher (who, we shall find, does not care two straws for ethical conduct in the sciences but does want to know what will result in a loss of tenure), then it is reasonable for them to assume that the government’s ethical framework for science is mediated primarily by our relationship with Chinese research institutions rather than by Gold Standard Science. It is in the interest of the science and technology ecosystem to correct that notion at the earliest possible opportunity (and presumably also to clarify when the government will charge them with espionage).

Conclusion

It is highly probable that this administration and Congress will seek to implement additional measures related to research security in the near term.  Current and proposed frameworks incorrectly assume persistent U.S. supremacy in science and technology and that new ideas are a product of government innovation.  As the PRC deploys new capabilities that have not been demonstrated by U.S. government or commercial actors, the posture of the United States toward our research security apparatus must change to match the strategic moment.  Measures that isolate the United States from discovery should be retired in favor of new measures that selectively identify and protect critical knowledge vital to maintaining national security.  For the sake of our security and future technological leadership, we must recognize that innovation comes from the work of teams of individuals who are motivated to change the world, and accept that America is less secure when that change happens somewhere else.