Trust Issues: An Analysis of NSF’s Funding for Trustworthy AI
Below, we analyze AI R&D grants from the National Science Foundation’s Computer and Information Science and Engineering (CISE) directorate, estimating the share that supports “trustworthy AI” research. NSF has not published an overview of how much of its AI funding supports such work. By reviewing a random sample of grants awarded in fiscal years 2018-2022, we estimate that roughly 10-15% of annual AI funding supports trustworthy AI research areas, including interpretability, robustness, privacy preservation, and fairness, despite an increased focus on trustworthy AI in NSF’s strategic plan and in public statements by key NSF and White House officials. Robustness receives the largest share (~6% annually), while interpretability and fairness each receive ~2%. Funding for privacy-preserving machine learning has risen significantly, from under 1% to over 6%. We recommend that NSF increase funding for trustworthy AI, including through dedicated programs and solicitations addressing critical AI trustworthiness issues. We also recommend that NSF consider trustworthiness in assessing all AI grant applications and prioritize projects that enhance the safety of foundation models.
Background on Federal AI R&D
Federal R&D funding has been critical to AI research, especially a decade ago, when machine learning (ML) tools had less obvious potential for wide use and attracted limited private investment. Much of the early AI development occurred in academic labs funded mainly by the federal government; that work laid the foundation for modern ML and attracted large-scale private investment. With private-sector investment now outstripping public investment and producing notable AI advances, federal funding agencies are reevaluating their role in the field. The key question is how public investment can complement private investment to advance AI research that benefits the American public.
The Growing Importance of Trustworthy AI R&D
A growing priority within national AI strategy is the advancement of “trustworthy AI”. According to the National Institute of Standards and Technology (NIST), trustworthy AI systems are safe, reliable, interpretable, and robust, respect privacy, and have harmful biases mitigated. Though terms such as “trustworthy AI”, “safe AI”, “responsible AI”, and “beneficial AI” are not precisely defined, they are an important part of the government’s characterization of high-level AI R&D strategy. In this report, we aim to make these concepts more concrete by focusing on specific research directions that promote these desirable attributes in ML models. We begin by discussing the growing emphasis on these goals in government strategy documents and in certain program solicitations.
This increased focus is reflected in many government strategy documents from recent years. Both the 2016 National AI R&D Strategic Plan and its 2019 update from the National Science and Technology Council identified trustworthiness in AI as a crucial objective. The 2023 revision reiterated this even more emphatically, stressing the confidence and reliability of AI systems as especially significant objectives and noting that the growing number of AI models has made efforts to enhance AI safety more urgent. Public feedback on previous versions of the plan highlights a broad priority across academia, industry, and society at large for AI models that are safe, transparent, and fair and that do not violate privacy norms. NSF’s FY2024 budget request articulated its primary aim as advancing “the frontiers of trustworthy AI”, a departure from earlier years’ emphasis on sowing the seeds for future advances across many realms of human endeavor.
Concrete manifestations of this growing emphasis on trustworthy AI can be seen not only in high-level strategy, but also in specific programs designed to advance trustworthiness in AI models. One of the seven new NSF AI institutes established recently focuses exclusively on “trustworthy AI”. Other programs, such as NSF’s Fairness in Artificial Intelligence and Safe Learning-Enabled Systems programs, focus chiefly on particular dimensions of trustworthy AI research.
Despite their value, these programs focused on AI trustworthiness represent only a small fraction of NSF’s total AI R&D funding: roughly $20 million per year against nearly $800 million per year for AI R&D overall. It remains unclear how much the mounting concern about trustworthy and responsible AI shapes NSF’s broader funding commitments. In this paper, we provide an initial investigation of this question by estimating the proportion of grants over the past five fiscal years (FY2018-2022) from NSF’s CISE directorate (the primary funder of AI R&D within NSF) that support a few key research directions within trustworthy AI: interpretability, robustness, fairness, and privacy preservation.
These estimates should be treated cautiously; they are neither exact nor conclusive answers to this question. Our methodology relies heavily on individual judgment calls in categorizing loosely defined grants within a sample of the overall set. Our goal is to offer an initial look at federal funding trends for trustworthy AI research.
Methodology
We used NSF’s online database of awarded grants from the CISE directorate. First, we identified a broad set of AI R&D-focused grants (“AI grants”) funded by NSF’s CISE directorate across fiscal years 2018-2022. We then drew a random sample of these grants and manually classified them according to predetermined research directions relevant to trustworthy AI. An overview of this process is given below, with details on each step of our methodology provided in the Appendix.
- Search: Using NSF’s online award search feature, we extracted a near-comprehensive collection of abstracts of grants awarded by NSF’s CISE directorate during fiscal years 2018-2022. Because the search function relies on keywords, we prioritized high recall over high precision, yielding an over-inclusive result set of close to 1,000 grants annually. We believe this initial set includes nearly all AI grants from NSF’s CISE directorate, along with many non-AI R&D awards.
- Sample: For each fiscal year, we drew a random sample of 100 abstracts (approximately 10% of the abstracts extracted). This sample size balances manageability for manual categorization against the need for reasonably precise funding estimates.
- Sort: Based on prevailing definitions of trustworthy AI, we defined four clusters of research directions: i) interpretability/explainability, ii) robustness/safety, iii) fairness, and iv) privacy preservation. To provide useful points of comparison with trustworthy AI funding, we added two further categories: v) capabilities and vi) applications of AI. Here, “capabilities” refers to work that pushes forward the frontier of model performance, and “application of AI” refers to work that applies existing AI techniques to make progress in other domains. Grants not focused on AI were removed from our sample and marked as “other” at this stage. We manually classified each grant in our sample into one or more of these research directions based on its primary focus and, where applicable, secondary or tertiary objectives; further details on this sorting process are provided in the Appendix.
Findings
Based on our sorting process, we estimate the proportion of AI grant funds from NSF’s CISE directorate that is primarily directed at our trustworthy AI research directions.
As depicted in Figure 2, the collective share of CISE AI funds allocated to trustworthy AI research directions typically ranges from roughly 10% to 15% per year. We see no clear upward or downward trend in this overall metric, indicating that the share of funding assigned to trustworthy AI projects did not shift dramatically over the five-year period examined.
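For readers who want to see how such a figure can be derived from a hand-labeled sample, here is a minimal sketch of the calculation. Our actual sorting and tallies were done in a spreadsheet; the file name and column names below are hypothetical.

```python
import csv
from collections import defaultdict

TRUSTWORTHY = {"interpretability", "robustness", "fairness", "privacy"}

# "labeled_sample.csv" is a hypothetical export of the hand-sorted sample,
# with one row per grant: fiscal_year, amount, primary, secondary, tertiary.
totals = defaultdict(float)       # all AI grant dollars in the sample, by year
trustworthy = defaultdict(float)  # dollars whose primary direction is trustworthy

with open("labeled_sample.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row["primary"] == "other":   # non-AI grants are excluded entirely
            continue
        year, amount = row["fiscal_year"], float(row["amount"])
        totals[year] += amount
        if row["primary"] in TRUSTWORTHY:
            trustworthy[year] += amount

for year in sorted(totals):
    share = trustworthy[year] / totals[year]
    print(f"FY{year}: {share:.1%} of sampled AI funds had a primary trustworthy focus")
```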
Considering secondary and tertiary research directions
As noted above, several of the grants we reviewed appeared to have secondary or tertiary focuses, or pursued research goals that bridge different research directions. We estimate that over the five-year period, roughly 18% of grant funds went to projects with at least a partial focus on trustworthy AI.
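This more inclusive figure simply relaxes the filter in the sketch above so that a grant’s dollars count toward the trustworthy total if any of its assigned directions is a trustworthy one. A minimal sketch, again with hypothetical column names:

```python
import csv

TRUSTWORTHY = {"interpretability", "robustness", "fairness", "privacy"}

def any_trustworthy_focus_share(path: str) -> float:
    """Share of sampled AI grant dollars with any trustworthy direction
    (primary, secondary, or tertiary), pooled across all sampled years."""
    total = flagged = 0.0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["primary"] == "other":   # skip non-AI grants
                continue
            amount = float(row["amount"])
            total += amount
            directions = {row["primary"], row.get("secondary"), row.get("tertiary")}
            if directions & TRUSTWORTHY:
                flagged += amount
    return flagged / total
```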
Specific Research Directions
Robustness/safety
Presently, ML systems tend to fail unpredictably when confronted with situations considerably different from their training scenarios (non-IID settings). This propensity to fail can cause serious harm, especially in high-stakes settings. To reduce such risks, robustness and safety research aims to make systems more reliable in new domains and less prone to catastrophic failure in situations they were not trained for.1 This category also encompasses projects that identify potential risks and failure modes in order to inform further safety improvements.
Our analysis shows that over the past five years, robustness research has typically been the best-funded trustworthy AI direction, representing about 6% of the total funds CISE allocates to AI research. We identified no clear trend in robustness funding over this period.
Interpretability/explainability
Explaining why a machine learning model produces a given prediction for a given input remains an unsolved problem.2 Research on interpretability or explainability seeks to develop methods for better understanding the decision-making processes of ML models and to design decision systems that are easier to interpret.
Across the years we examined, funding for interpretability and explainability shows no substantial growth, averaging approximately 2% of all AI funds.
Fairness/non-discrimination
ML systems often reflect and exacerbate biases present in their training data. Research on fairness and non-discrimination works toward systems that avoid such biases. Work in this area frequently explores ways to reduce dataset bias, develops metrics for assessing bias in existing models, and devises other bias-reduction strategies for ML models.3
Funding for this area also accounts for roughly 2% of annual AI funds, and our data reveal no discernible trend in fairness/non-discrimination funding over the examined period.
Privacy-preservation
Training AI systems typically requires large volumes of data that can include personal information, making privacy preservation crucial. In response, privacy-preserving machine learning research aims to develop methods that safeguard private information.4
Across the years studied, funding for privacy-preserving machine learning shows significant growth, rising from under 1% in 2018 (the smallest share among the research directions we examined) to over 6% in 2022 (the largest among the trustworthy AI research directions we examined). The increase begins around fiscal year 2020, though its cause is unclear.
Recommendations
NSF should continue to carefully consider the role its funding can play in the overall AI R&D portfolio, taking into account both private and public investment. Trustworthy AI research presents a strong opportunity for public investment: many lines of research within trustworthy AI may be under-incentivized in industry and can be usefully pursued by academics. Concretely, NSF could:
- Build on its existing work by introducing more focused programs and solicitations for specific problems in trustworthy AI, and scaling these programs to be a significant fraction of its overall AI budget.
- Include the consideration of trustworthy and responsible AI as a component of the “broader impacts” criterion for NSF AI grants. NSF could also consider requiring a separate statement in every AI funding application that explicitly asks researchers to identify how their project contributes to trustworthy AI. Reviewers could be instructed to favor work that offers potential benefits along some of these core trustworthy AI research directions.
- Publish a Dear Colleague Letter (DCL) inviting proposals and funding requests for specific trustworthy AI projects, and/or a DCL seeking public input on potential new research directions in trustworthy AI.
- Encourage or require researchers to follow the NIST AI Risk Management Framework (AI RMF) when conducting their research.
- In all of the above, NSF should consider a specific focus on supporting the development of techniques and insights that make large, advanced foundation models, such as GPT-4, more trustworthy and reliable. Such systems are advancing and proliferating, and government funding could play an important role in developing techniques that proactively guard against their risks.
Appendix
Methodology
For this investigation, we aim to estimate the proportion of AI grant funding from NSF’s CISE directorate that supports research relevant to trustworthy AI. To do this, we rely on publicly available data on awarded grants from NSF’s CISE directorate, accessed via NSF’s online award search feature. We first identify, for each fiscal year examined, a set of AI-focused grants (“AI grants”) from NSF’s CISE directorate. From this set, we draw a random sample of grants, which we manually sort into our selected trustworthy AI research directions. We go into more detail on each of these steps below.
How did we choose this question?
We touch on some of the motivation for this question in the introduction above. We investigate NSF’s CISE directorate because it is the primary directorate within NSF for AI research, and because focusing on one directorate (rather than some broader focus, like NSF as a whole) allows for a more focused investigation. Future work could examine other directorates within NSF or other R&D agencies for which grant awards are publicly available.
We focus on estimating trustworthy AI funding as a proportion of total AI funding because our goal is to analyze how trustworthy AI is prioritized relative to other AI work, and because this framing is more action-guiding for funders like NSF that are choosing which research directions within AI to prioritize.
Search (identifying a list of AI grants from NSF’s CISE Directorate)
To identify a set of AI grants from NSF’s CISE directorate, we used the advanced award search feature on NSF’s website. We conducted the following search:
- For the NSF organization window, we selected “CSE – Direct for Computer & Info Science”
- For “Keyword”, we entered the following list of terms:
- AI, “computer vision”, “Artificial Intelligence”, “Machine Learning”, ML, “Natural language processing”, NLP, “Reinforcement learning”, RL
- We included both active and expired awards.
- We set the range for each search to capture the fiscal years of interest (e.g. 10/01/2017 to 09/30/2018 for FY18, 10/01/2018 to 9/30/2019 for FY19, and so on).
This search yielded roughly 1,000 grants for each fiscal year. The set was over-inclusive, containing many grants not focused on AI, because we aimed for high recall rather than high precision when choosing keywords; our goal was a set that would include all of the relevant AI grants made by NSF’s CISE directorate. We sort out false positives, i.e., grants not focused on AI, in the subsequent “sorting” phase.
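We ran this search through NSF’s web interface. For readers who would rather work from a bulk download of CISE award data, a roughly equivalent high-recall keyword filter might look like the following sketch (the file name and column names are hypothetical and would need to be adapted to the actual export format):

```python
import csv
import re

# Keyword list used for the high-recall search (quoted phrases matched as
# whole phrases; short acronyms matched as standalone tokens to limit noise).
PHRASES = ["computer vision", "artificial intelligence", "machine learning",
           "natural language processing", "reinforcement learning"]
ACRONYMS = ["AI", "ML", "NLP", "RL"]

def matches_keywords(text: str) -> bool:
    lower = text.lower()
    if any(phrase in lower for phrase in PHRASES):
        return True
    # Match acronyms only as standalone, case-sensitive tokens (e.g. "AI"
    # but not "air"), since lowercase substring matching would be too noisy.
    return any(re.search(rf"\b{a}\b", text) for a in ACRONYMS)

# "cise_awards_fy2018_2022.csv" is a hypothetical export of CISE awards with
# "title" and "abstract" columns; the real export's column names may differ.
with open("cise_awards_fy2018_2022.csv", newline="", encoding="utf-8") as f:
    candidates = [row for row in csv.DictReader(f)
                  if matches_keywords(row["title"] + " " + row["abstract"])]

print(f"{len(candidates)} candidate AI grants retained for sampling")
```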
Sampling
We assigned a random number to each grant returned by our initial search and sorted the grants from smallest to largest by that number. For each year, we copied the 100 grants with the smallest randomly assigned numbers into a new spreadsheet, which we used for the subsequent “sorting” step.
This gave us a random sample of 500 grants (100 per fiscal year) from the larger set of roughly 5,000 grants identified in the search phase. We chose this sample size because it was manageable for manual sorting, and we did not anticipate large shifts in the relative proportions if we expanded from a ~10% sample to, say, 20% or 30%.
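The random-number-and-sort procedure above amounts to drawing a simple random sample without replacement within each year. A minimal sketch of the same step in code, continuing from the hypothetical candidates list in the search sketch and assuming each record carries a fiscal_year field derived from its award date:

```python
import random

SAMPLE_SIZE = 100  # grants sampled per fiscal year

# Group the candidate grants by fiscal year.
by_year = {}
for grant in candidates:  # `candidates` comes from the search sketch above
    by_year.setdefault(grant["fiscal_year"], []).append(grant)

# Assign each grant a random number, sort ascending, and keep the 100
# smallest -- equivalent to random.sample() within each fiscal year.
random.seed(2022)  # fixed seed only so the sketch is reproducible
sampled = {
    year: sorted(grants, key=lambda g: random.random())[:SAMPLE_SIZE]
    for year, grants in by_year.items()
}
```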
Identifying Trustworthy AI Research Directions
We aimed to identify a set of broad research directions that would be especially useful for promoting trustworthy properties in AI systems and that could serve as categories in the subsequent manual sorting phase. We consulted various definitions of trustworthy AI, relying most heavily on the definition provided by NIST: “characteristics of trustworthy AI include valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.” We also consulted several lists of trustworthy AI research directions, identifying those that appeared to us to be of particular importance for trustworthy AI. Based on this process, we identified the following clusters of trustworthy AI research:
- Interpretability/explainability
- Privacy-preserving machine learning
- Robustness/safety
- Fairness and non-discrimination
It is important to note that none of these research areas is crisply defined, but we found these clusters to be a useful, high-level way to break trustworthy AI research into broad categories.
In the subsequent steps, we aim to compare the amount of grant funds that are specifically aimed at promoting the above trustworthy AI research directions with the amount of funds which are directed towards improving AI systems’ capabilities in general, or simply applying AI to other classes of problems.
Sorting
For our randomly sampled set of 500 grants, we aimed to sort each grant according to its intended research direction.
For each grant, we a) read the title and abstract of the grant and b) assigned the grant a primary research direction and, if applicable, secondary and tertiary research directions. Secondary and tertiary directions were not assigned to every grant, only to those that stood out to us as having several distinct objectives. We provide examples of some of these “overlapping” grants below.
We sorted grants into the following categories:
- Capabilities
- This category was used for projects that are primarily aimed at advancing the capabilities of AI systems, by making them more competent at some task, or for research which could be used to push forward the frontier of capabilities for AI systems.
- This category also includes investments in resources that are generally useful for AI research, e.g. computing clusters at universities.
- Example: A project that aims to develop a new ML model achieving state-of-the-art (SOTA) performance on a computer vision benchmark.
- Application of AI/ML.
- This category was used for projects which apply existing ML/AI techniques to research questions in other domains.
- Example: A grant which uses some machine learning techniques to analyze large sets of data on precipitation, temperature, etc. to test a hypothesis in climatology.
- Interpretability/explainability.
- This category was used for projects that aim to make AI systems more interpretable or explainable by enabling a better understanding of their decision-making processes. We included both projects that offer methods for better interpreting existing models and projects that offer new training methods yielding models that are easier to interpret.
- Example: A project which determines the features of a resume that make it more or less likely to be scored positively by a resume-ranking algorithm.
- Robustness/safety
- This category was used for projects which aim to make AI systems more robust to distribution shifts and adversarial inputs, and more reliable in unfamiliar circumstances. Here, we include both projects which introduce methods for making existing systems more robust, and those which introduce new techniques that are more robust in general.
- Example: A project that explores new methods for constructing training data that lead a computer vision model to learn robustly useful patterns rather than spurious ones.
- Fairness/non-discrimination
- This category was used for projects that aim to make AI systems less likely to entrench or reflect harmful biases. Here, we focused on work directly geared at making models themselves less biased. Many project abstracts described efforts to include researchers from underrepresented populations in the research process; we chose not to count these because of our focus on model behavior.
- Example: A project which aims to design techniques for “training out” certain undesirable racial or gender biases.
- Privacy preservation
- This category was used for projects that aim to make AI systems less privacy-invasive.
- Example: A project which provides a new algorithm that allows a model to learn desired behavior without using private data.
- Other
- This category was used for grants that are not focused on AI. As mentioned above, the random sample included many grants that were not AI grants, and these were removed as “other.”
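For readers who want to replicate or extend this sorting step, the labels above could be represented with a small record per grant. The sketch below is our own illustration, not something NSF provides; the field names are hypothetical, and the category names mirror the list above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Direction(str, Enum):
    CAPABILITIES = "capabilities"
    APPLICATION = "application"
    INTERPRETABILITY = "interpretability"
    ROBUSTNESS = "robustness"
    FAIRNESS = "fairness"
    PRIVACY = "privacy"
    OTHER = "other"  # non-AI grants, excluded from AI funding totals

TRUSTWORTHY = {Direction.INTERPRETABILITY, Direction.ROBUSTNESS,
               Direction.FAIRNESS, Direction.PRIVACY}

@dataclass
class LabeledGrant:
    award_id: str
    fiscal_year: int
    amount: float                        # awarded dollars
    primary: Direction
    secondary: Optional[Direction] = None
    tertiary: Optional[Direction] = None

    def has_trustworthy_focus(self) -> bool:
        """True if any assigned direction is a trustworthy AI direction."""
        return any(d in TRUSTWORTHY
                   for d in (self.primary, self.secondary, self.tertiary) if d)
```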
Some caveats and clarifications on our sorting process
Our sorting focuses on the apparent intentions and goals of the research as stated in the abstract and title, since these are the aspects of each grant that NSF’s award search feature makes readily viewable. Our process may therefore miss research objectives that are described only in the full grant application and not in the abstract or title.
A focus on specific research directions
We chose to focus on specific research agendas within trustworthy and responsible AI, rather than sorting grants into a binary of “trustworthy” or “not trustworthy,” in order to bring greater clarity to our sorting process. We still make judgment calls about which research agendas a given grant promotes, but we hope this approach allows for greater agreement.
As mentioned above, we also assigned secondary and tertiary research directions to some of these grants. You can view the grants in the sample and how we sorted each here. Below, we offer some examples of the kinds of grants which we would sort into these categories.
Examples of Grants with Multiple Research Directions
- Primary: Capabilities, Secondary: Application of ML.
- A project which aims to introduce a novel ML approach that is useful for making progress on a research problem in another domain would be categorized as having a primary purpose of Capabilities and a Secondary purpose of Application of ML.
- Primary: Application of ML, Secondary: Capabilities
- This is similar to the above, except that the “application” is more central to the research objective than the novel capabilities. This weighing of which research objectives are most and least central is subjective, and many of our decisions were ultimately judgment calls.
- Primary: Capabilities, Secondary: Interpretability
- A project which introduces a novel method that achieves better performance on some benchmark while also being more interpretable.
- Primary: Interpretability, Secondary: Robustness
- A project which aims to introduce methods for making AI systems both more interpretable and more robust.
To summarize: in the sorting phase, we read the title and abstract of each grant in our random sample and assigned each grant to a research direction. Many grants received only a primary research direction, though some also received secondary and tertiary directions. This sorting was based on our understanding of the project’s main goals, as conveyed by its title and abstract.