A Fair Artificial Intelligence Research & Regulation (FAIRR) Bureau
Summary
Artificial intelligence (AI) is transforming our everyday reality, and it has the potential to save or to cost lives. Innovation is advancing at a breakneck pace, with technology developers engaging in de facto policy-setting through their decisions about the use of data and the embedded bias in their algorithms. Policymakers must keep up. Otherwise, by ceding decision-making authority to technology companies, we face the rising threat of becoming a technocracy. Given the potential benefits and threats of AI to US national security, economy, health, and beyond, a comprehensive and independent agency is needed to lead research, anticipate challenges posed by AI, and make policy recommendations in response. The Biden-Harris Administration should create the Fair Artificial Intelligence Research & Regulation (FAIRR) Bureau, which will bring together experts in technology, human behavior, and public policy from all sectors – public, private, nonprofit, and academic – to research and develop policies that enable the United States to leverage AI as a positive force for national security, economic growth, and equity. The FAIRR Bureau will adopt the interdisciplinary, evidence-based approach to AI regulation and policy needed to address this unprecedented challenge.
A National Program for Building Artificial Intelligence within Communities
Summary
While the United States is a global leader in Artificial Intelligence (AI) research and development (R&D), there has been growing concern that this may not last in the coming decade. China’s massive, state-based tech-investment schemes have catapulted the country to the status of a true competitor in the development and export of AI technologies. In response, there have been repeated calls for, as well as actions by, the Federal Government to step up its funding of fundamental and defense AI research. Yet maintaining our status as a global leader in AI will require more than a focus on fundamental and defense research. As a matter of domestic policy, we must also attend to the growing chasm that separates advances in state-of-the-art AI techniques from effective and responsible adoption of AI across the American economy and society.
To address this chasm, the Biden-Harris Administration should establish an applied AI research program within the National Institute of Standards and Technology (NIST) to help community-serving organizations tackle the technological and ethical challenges involved in developing AI systems. This new NIST program would fill a key domestic policy gap in our nation’s AI R&D strategy by addressing the growing obstacles and uncertainty confronting AI integration, while broadening the reach of AI as a tool for economic and social betterment nationwide. Program funding would be devoted to research projects co-led by AI researchers and community-based practitioners who would ultimately oversee and operate the AI technology. Research teams would be tasked with co-designing and evaluating an AI system in light of the specific challenges faced by community institutions. Specific areas poised to benefit from this unique multi-stakeholder and cross-sectoral approach to development include healthcare, municipal government, and social services.
Advancing American AI through National Public-Private Partnerships for AI Research
Summary
The Biden-Harris Administration should launch a national initiative to bring together academic and industry researchers and practitioners in a public-private partnership (PPP) to advance, at scale, the research foundations of artificial intelligence (AI) and its application in areas of economic advantage and national need. The National Public-Private Partnership in AI (NPPP-AI) Initiative would initially create 10 coordinated national AI R&D Institutes, each with a 10-year lifetime and each jointly funded by industry partners and the U.S. government through its research agencies at $10M/year (10x10x10).
NPPP-AI would accelerate future breakthroughs in AI foundations; enable a virtuous cycle between foundational and use-inspired research that rapidly transitions innovations into practice, contributing to U.S. economic and national security; and grow education and workforce capacity by linking university faculty and students with industry professionals, settings, and jobs.
Leveraging Machine Learning To Reduce Cost & Burden of Reviewing Research Proposals at S&T Agencies
At about $130 billion, the United States leads the world in federal research and development (R&D) spending. Most of this spending is distributed by science and technology agencies that use internal reviews to identify the best proposals submitted in response to competitive funding opportunities. As stewards of quality scientific research, part of each funding agency’s mission is to ensure fairness, transparency, and integrity in the proposal-review process. The selection process is a crucial aspect of ensuring that federal dollars are invested in quality research.
Manual proposal review is time-consuming and expensive, costing an estimated $2,000–$10,000 per proposal. This equates to an estimated $300 million spent annually on proposal review at the National Science Foundation alone. Yet at current proposal-success rates (between 5% and 20% for most funding opportunities), a substantial fraction of proposals reviewed are simply not competitive. We propose leveraging machine learning to accelerate the agency-review process without a loss in the quality of proposals selected and funded. By helping filter out noncompetitive proposals early in the review process, machine learning could allow substantial financial and personnel resources to be repurposed for more valuable applications. Importantly, machine learning would not be used to evaluate scientific merit—it would only eliminate the poor or incomplete proposals that are immediately and unanimously rejected by manual reviewers.
The next administration should initiate and execute a pilot program that uses machine learning to triage scientific proposals. To demonstrate the reliability of a machine-learning-based approach, the pilot should be carried out in parallel with (and compared to) the traditional method of proposal selection. Following successful pilot implementation, the next administration should convene experts in machine learning and proposal review from funding agencies, universities, foundations, and grant offices for a day-long workshop to discuss how to scale the pilot across agencies. Our vision is that machine learning will ultimately become a standard component of proposal review across science and technology agencies, improving the efficiency of the funding process without compromising the quality of funded research.
Challenge and Opportunity
Allocating research funding is expensive, time-consuming, and inefficient for all stakeholders (funding agencies, proposers, reviewers, and universities). The actual cost of reviewing proposals (including employee salaries and administrative expenses) has never been published by any federal funding agency. Based on our experience with the process, we estimate the cost to be between $2,000 and $10,000 per proposal, with the variation reflecting the wide range of proposals across programs and agencies. For the National Science Foundation (NSF), which reviews around 50,000 research proposals each year, this equates to roughly $300 million spent annually on proposal review (50,000 proposals at a midpoint cost of about $6,000 each).
Multiple issues beyond cost plague the proposal-review process. These include the following:
- Decreasing proposal-success rates. This decline is attributable to a combination of an increase in the number of science, technology, engineering, and math (STEM) graduates in the United States and an increase in the size of average federal STEM funding awards (from $110,000 to about $130,000 in less than 10 years). Current success rates are low enough that the costs of applying for federal funding opportunities (i.e., time spent on unsuccessful proposals) may outweigh the benefits (i.e., funding received for successful proposals).
- Difficulties recruiting qualified reviewers.
- Delayed decisions. For example, NSF takes more than six months to reach a funding decision for about 30% of proposals reviewed.
- Increasing numbers of identical re-submissions. With proposal-success rates as low as 5%, the results of selection processes are often seen as “the luck of the draw” rather than a reflection of fundamental proposal merit. Hence there is a growing tendency for principal investigators (PIs) to simply re-submit the same proposal year after year rather than invest the time to prepare new or updated proposals.
There is a consensus that the current state of proposal review is unsustainable. Most proposed solutions to the problems summarized above are “outside” solutions involving either expanding available research funding or placing restrictions on the number of proposals that may be submitted (by a PI or an institution). Neither option is attractive. Partisanship combined with the financial implications of COVID-19 renders the possibility of an increased budget for S&T funding agencies vanishingly small. Restrictions on submissions are generally resented by scientists as a “penalty on excellence”. Incorporating machine learning could improve the efficiency and effectiveness of proposal review at little cost and without limiting submissions.
Incorporating machine learning would also align with multiple federal and agency objectives. On January 4, 2011, President Obama signed the GPRA Modernization Act of 2010. One purpose of the GPRA Modernization Act was to “lead to more effective management of government agencies at a reduced cost”. One of NSF’s Evaluation and Assessment Capability (EAC) goals established in response to that directive is to “create innovative approaches to assessing and improving program investment performance”. Indeed, two of the four key areas identified in NSF’s most recent Strategic Plan (2018) are “making information technology work for us” and “streamlining, standardizing and simplifying programs and processes.” In addition, NSF recognized the importance of reviewing its processes for efficiency and effectiveness in light of OMB memo M-17-26. NSF’s Strategic Plan includes a strong commitment to “work internally and with the Office of Management and Budget and other science agencies to find opportunities to reduce administrative burden.” These principles are also mentioned in NSF’s 2021 budget request to Congress, as part of Strategic Goals (e.g., “Enhance NSF’s performance of its mission”) and Strategic Objectives (e.g., “Continually improve agency operations”). Finally, a long-term goal outlined in the Strategic Plan is reducing the so-called “dwell time” for research proposals—i.e., the time between when a proposal is submitted and when a funding decision is issued.
Incorporating machine learning into proposal review would facilitate progress towards each of these goals. Using machine learning to limit the number of proposals subjected to manual review is a prime example of “making information technology work for us” and would certainly help streamline, standardize, and simplify proposal review. Limiting the number of proposals subjected to manual review would also reduce administrative burden as well as dwell time. In addition, money saved by using machine learning to weed out non-competitive proposals could be used to fund additional competitive proposals, thereby increasing return on investment (ROI) in research-funding programs. Additional benefits include an improved workload for expert reviewers—who will be able to focus on reviewing the scientific merit of competitive proposals instead of wasting time on non-competitive proposals—as well as the establishment of a strong disincentive for PIs to resubmit identical proposals year after year. The latter outcome in particular is expected to improve proposal quality in the long run.
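To make the triage step described above concrete, the sketch below shows one plausible approach: a simple text classifier trained on past review outcomes that flags only proposals it scores as very likely noncompetitive. This is a minimal, illustrative sketch, not a prescribed agency system; the use of scikit-learn, the TF-IDF features, the sample data, the labels, and the confidence threshold are all assumptions introduced here for illustration, and any production tool would need rigorous validation against historical review records.

```python
# Minimal, illustrative sketch of a proposal-triage classifier (hypothetical data and settings).
# Assumption: an agency has a historical corpus of proposal texts labeled 1 if the proposal
# was unanimously rejected in manual review ("noncompetitive") and 0 otherwise.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; in practice this would come from past review records.
proposal_texts = [
    "We propose a new framework for quantum error correction with a detailed evaluation plan.",
    "Budget justification missing; objectives and methods are not specified.",
]
noncompetitive = [0, 1]  # 1 = unanimously rejected by a past review panel

triage_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
triage_model.fit(proposal_texts, noncompetitive)

# Only proposals scored above a conservative threshold are filtered out early;
# everything else proceeds to full manual review of scientific merit.
THRESHOLD = 0.95
new_texts = ["A preliminary idea with no methods, timeline, or evaluation plan."]
scores = triage_model.predict_proba(new_texts)[:, 1]
filtered_out = [text for text, score in zip(new_texts, scores) if score >= THRESHOLD]
print(f"Proposals filtered before manual review: {len(filtered_out)} of {len(new_texts)}")
```

The deliberately conservative threshold reflects the principle stated earlier: the model’s only job is to remove proposals that manual reviewers would reject immediately and unanimously, never to judge scientific merit.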
Plan of Action
We propose the following steps to implement and test a machine-learning approach to proposal review:
- Initiate and execute a pilot program that uses machine learning to triage scientific proposals. To demonstrate the reliability of a machine-learning-based approach, the pilot should be carried out in parallel with (and compared to) the traditional method of proposal selection. The pilot would be deemed successful if the machine-learning algorithm reliably identified proposals ranked poorly by human reviewers and/or proposals rejected unanimously by review panels (a sketch of how this agreement could be measured follows this list). NSF—particularly the agency’s Science of Science and Innovation Policy (SciSIP) Program—would be a natural home for such a pilot.
- Showcase pilot results. Following a successful pilot, the next administration should convene experts in machine learning and proposal review from funding agencies, universities, foundations, and grant offices for a day-long workshop. The workshop would showcase pilot results and provide an opportunity for attendees to discuss how to scale the pilot across agencies.
- Scale pilot across federal government. We envision machine learning ultimately becoming a standard component of proposal review across science and technology agencies, improving the efficiency of the funding process without compromising the quality of funded research.
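As a rough illustration of how the pilot’s success criterion could be quantified, the sketch below compares the model’s triage flags against the outcomes of the parallel manual review. It is a hypothetical example: the record fields, the threshold, and the metrics are placeholders chosen for illustration, not a prescribed evaluation protocol.

```python
# Illustrative evaluation of a triage pilot run in parallel with manual review.
# Assumption: for each proposal we have a model triage score and the panel's actual
# outcome from the traditional process; field names and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class PilotRecord:
    triage_score: float               # model's estimated probability the proposal is noncompetitive
    panel_rejected_unanimously: bool  # ground truth from the parallel manual review

def evaluate_pilot(records: list[PilotRecord], threshold: float = 0.95) -> dict:
    flagged = [r for r in records if r.triage_score >= threshold]
    rejected = [r for r in records if r.panel_rejected_unanimously]
    # Recall: share of unanimously rejected proposals the model would have filtered out.
    recall = (sum(r.triage_score >= threshold for r in rejected) / len(rejected)
              if rejected else 0.0)
    # False-flag rate: share of flagged proposals that the panel did NOT unanimously reject.
    false_flag_rate = (sum(not r.panel_rejected_unanimously for r in flagged) / len(flagged)
                       if flagged else 0.0)
    return {
        "recall_on_rejected": recall,
        "false_flag_rate": false_flag_rate,
        "share_filtered": len(flagged) / len(records),
    }

# Toy example: two noncompetitive proposals caught, one competitive proposal left for reviewers.
records = [PilotRecord(0.98, True), PilotRecord(0.97, True), PilotRecord(0.10, False)]
print(evaluate_pilot(records))
```

A high recall on unanimously rejected proposals combined with a near-zero false-flag rate would indicate that triage reduces reviewer workload without screening out competitive science.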
Reducing the number of scientific proposals handled by experts without jeopardizing the quality of science funded benefits everyone—high-quality proposals receive support, expert reviewers don’t waste time on non-competitive proposals, and the money saved on manual proposal review can be reallocated to fund additional proposals. Using machine learning to “triage” large submission pools is a promising strategy for achieving these objectives. Preliminary compliance checks are already almost fully automated; machine learning would simply extend that automation one step further. We expect that the initial costs of developing appropriate machine-learning algorithms and testing them in pilots would ultimately be justified by greater long-run ROI in research-funding programs. We envision a pilot that could benefit not only the government but also foundations, which are increasingly shouldering research funding. Ideally, the pilot would be run in two different settings: a federal funding agency and a private foundation.
Artificial Intelligence and National Security, and More from CRS
The 2019 defense authorization act directed the Secretary of Defense to produce a definition of artificial intelligence (AI) by August 13, 2019 to help guide law and policy. But that was not done.
Therefore “no official U.S. government definition of AI yet exists,” the Congressional Research Service observed in a newly updated report on the subject.
But plenty of other unofficial and sometimes inconsistent definitions do exist. And in any case, CRS noted, “AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles. Already, AI has been incorporated into military operations in Iraq and Syria.”
“The Central Intelligence Agency alone has around 140 projects in development that leverage AI in some capacity to accomplish tasks such as image recognition and predictive analytics.” CRS surveys the field in Artificial Intelligence and National Security, updated November 21, 2019.
* * *
The 2018 financial audit of the Department of Defense, which was the first such audit ever, cost a stunning $413 million to perform. Its findings were assessed by CRS in another new report. See Department of Defense First Agency-wide Financial Audit (FY2018): Background and Issues for Congress, November 27, 2019.
* * *
The Arctic region is increasingly important as a focus of security, environmental and economic concern. So it is counterintuitive — and likely counterproductive — that the position of U.S. Special Representative for the Arctic has been left vacant since January 2017. In practice it has been effectively eliminated by the Trump Administration. See Changes in the Arctic: Background and Issues for Congress, updated November 27, 2019.
* * *
Other noteworthy new and updated CRS reports include the following (which are also available through the CRS public website at crsreports.congress.gov).
Resolutions to Censure the President: Procedure and History, updated November 20, 2019
Immigration: Recent Apprehension Trends at the U.S. Southwest Border, November 19, 2019
Air Force B-21 Raider Long Range Strike Bomber, updated November 13, 2019
Precision-Guided Munitions: Background and Issues for Congress, November 6, 2019
Space Weather: An Overview of Policy and Select U.S. Government Roles and Responsibilities, November 20, 2019
Intelligence Community Spending: Trends and Issues, updated November 6, 2019
Limits on Free Expression: An International View
While many countries recognize freedom of speech as a fundamental value, every country also imposes some legal limits on free speech.
A new report from the Law Library of Congress surveys the legal limitations on free expression in thirteen countries: Argentina, Brazil, Canada, China, Israel, Japan, Germany, France, New Zealand, Sweden, the Netherlands, the United Kingdom, and Ukraine.
“In particular, the report focuses on the limits of protection that may apply to the right to interrupt or affect in any other way public speech. The report also addresses the availability of mechanisms to control foreign broadcasters working on behalf of foreign governments,” wrote Ruth Levush in the document summary. See Limits on Freedom of Expression, Law Library of Congress, June 2019.
Some other noteworthy recent reports from the Law Library of Congress include the following.
Initiatives to Counter Fake News in Selected Countries, April 2019
Regulation of Artificial Intelligence in Selected Jurisdictions, January 2019
Pentagon Pursues Artificial Intelligence
Artificial intelligence (AI) technologies such as machine learning are already being used by the Department of Defense in operations in Iraq and Syria, and they have many potential uses in intelligence processing, military logistics, cyber defense, as well as autonomous weapon systems.
The range of such applications for defense and intelligence is surveyed in a new report from the Congressional Research Service.
The CRS report also reviews DoD funding for AI, international competition in the field, including Chinese investment in US AI companies, and the foreseeable impacts of AI technologies on the future of combat. See Artificial Intelligence and National Security, April 26, 2018.
“We’re going to have self-driving vehicles in theater for the Army before we’ll have self-driving cars on the streets,” Michael Griffin, the undersecretary of defense for research and engineering, told Congress last month (as reported by Bloomberg).
Other new and updated reports from the Congressional Research Service include the following.
Foreign Aid: An Introduction to U.S. Programs and Policy, April 25, 2018
OPIC, USAID, and Proposed Development Finance Reorganization, April 27, 2018
OPEC and Non-OPEC Crude Oil Production Agreement: Compliance Status, CRS Insight, April 26, 2018
What Is the Farm Bill?, updated April 26, 2018
A Shift in the International Security Environment: Potential Implications for Defense–Issues for Congress, updated April 26, 2018
Navy Aegis Ballistic Missile Defense (BMD) Program: Background and Issues for Congress, updated April 27, 2018
China Naval Modernization: Implications for U.S. Navy Capabilities — Background and Issues for Congress, updated April 25, 2018
Russian Compliance with the Intermediate Range Nuclear Forces (INF) Treaty: Background and Issues for Congress, updated April 25, 2018
The First Responder Network (FirstNet) and Next-Generation Communications for Public Safety: Issues for Congress, April 27, 2018
African American Members of the United States Congress: 1870-2018, updated April 26, 2018
JASON: Artificial Intelligence for Health Care
The field of artificial intelligence is habitually susceptible to exaggerated claims and expectations. But when it comes to new applications in health care, some of those claims may prove to be valid, says a new report from the JASON scientific advisory panel.
“Overall, JASON finds that AI is beginning to play a growing role in transformative changes now underway in both health and health care, in and out of the clinical setting.”
“One can imagine a day where people could, for instance, 1) use their cell phone to check their own cancer or heart disease biomarker levels weekly to understand their own personal baseline and trends, or 2) ask a partner to take a cell-phone-based HIV test before a sexual encounter.”
Already, automated skin cancer detection programs have demonstrated performance comparable to human dermatologists.
The JASON report was requested and sponsored by the U.S. Department of Health and Human Services. See Artificial Intelligence for Health and Health Care, JSR-17-Task-002, December 2017.
Benefits aside, there are new opportunities for deception and scams, the report said.
“There is potential for the proliferation of misinformation that could cause harm or impede the adoption of AI applications for health. Websites, apps, and companies have already emerged that appear questionable based on information available.”
Fundamentally, the JASONs said, the future of AI in health care depends on access to private health data.
“The availability of and access to high quality data is critical in the development and ultimate implementation of AI applications. The existence of some such data has already proven its value in providing opportunities for the development of AI applications in medical imaging.”
“A major initiative is just beginning in the U.S. to collect a massive amount of individual health data, including social behavioral information. This is a ten year, $1.5B National Institutes of Health (NIH) Precision Medicine Initiative (PMI) project called All of Us Research Program. The goal is to develop a 1,000,000 person-plus cohort of individuals across the country willing to share their biology, lifestyle, and environment data for the purpose of research.”
But all such efforts raise knotty questions of data security and personal privacy.
“PMI has recognized from the start of this initiative that no amount of de-identification (anonymization) of the data will guarantee the privacy protection of the participants.”
Lately, the US Government has barred access by non-US researchers to a National Cancer Institute database concerning Medicare recipients, according to a story in The Lancet Oncology. See “International access to major US cancer database halted” by Bryant Furlow, January 18, 2018 (sub. req’d.).