Dominant research-funding paradigms constrain the outputs of America’s innovation systems. Federal research-funding agencies like the National Institutes of Health (NIH) and the National Science Foundation (NSF) operate largely through milestone-scoped grants that fail to incentivize high-risk research, impose highly burdensome reporting requirements, and are closely managed by the government. Philanthropically funded research organizations are an excellent mechanism for experimenting with different research-management approaches. However, they are perennially underfunded and rarely have a path to long-term sustainability.
A single program with two parts can address this issue:
First, the NSF’s new Technology, Innovation, and Partnership (TIP) Directorate should pilot an “organizations, not projects” program in which philanthropically matched grants fund a portfolio of independent research organizations instead of funding specific research initiatives. Partnering with philanthropies will leverage the diversity of American donors to identify a portfolio of research organizations with diverse constraints (and therefore the potential to create outlier outcomes). To have a significant impact, this pilot funding opportunity should be funded at $100 million per year for 10 years.
Second, drawing on the ideas of Kanjun Qiu and Michael Nielsen, the NSF should set aside an additional $100 million per year to sponsor independent research organizations with impressive track records for extended periods of time. This commitment to “acquire” successful organizations will complement Part One’s research-funding opportunity in two ways. First, it will encourage philanthropic participation by assuring philanthropies that their money is going towards something that won’t die the moment they stop funding it. Additionally, it will enable the federal government to leverage the institutional knowledge created by successful experiments in research funding and management.
If successful, this two-part program could later be replicated by other federal agencies. The Administration and Congress should prioritize funding this program in recognition of three converging facts: one, that federal spending on research and development (R&D) is increasing; two, that the American innovation ecosystem is not working as well as it once did; and three, that new institutional structures for managing research are proliferating (e.g., Focused Research Organizations, private Advanced Research Projects Agencies (ARPAs), and “science angels”). Swift action could channel the increased budgets into new organizations that experiment with new ways of organizing R&D, addressing the current system’s sclerosis.
Challenge and Opportunity
There is a growing consensus that R&D projects closely managed by the government lag behind those managed by the private sector in speed and efficiency.
Federal funding is a major part of the American R&D ecosystem. However, most federal research funding comes with a litany of constraints: earmarks that prevent researchers from spending grant money on the things they think are most important (like equipment or lab automation), onerous reporting requirements, the need to get every proposal through a committee, and dozens of hours of grant writing for shockingly small amounts of money. Moreover, studies have found that, despite a mandate to fund innovative research, federal funding decisions tend to be risk-averse.
As a result, in situations where there’s a head-to-head comparison between government-managed research and technology development and privately managed counterparts, there’s little question which is more efficient.
This efficiency gap exists largely because privately-managed organizations often push control over research funds to the organization or level where the “research design” occurs. This yields powerful results. Former Defense Advanced Research Projects Agency director Arati Prabhakar argues that this mechanism, in the form of empowering program managers, is a big part of why the ARPA model works. In the business world, coupling power (money) and responsibility (research design) is simply common sense. In the research world, the benefits of “embedded autonomy” are straightforward. Autonomy enables an organization or individual to react quickly to unexpected circumstances. Research is highly uncertain by nature. Coupling embedded autonomy with research design means that funding will be spent in the most useful way possible at a given moment based on knowledge gained as experimentation progresses — not in the way that a researcher thought would be most useful at the time they submitted their grant proposal.
Recognizing the power of embedded autonomy to enable powerful, diverse research, there is currently an explosion of experiments in non-academic research organizations. Many are too new to have clear results, but non-academic research organizations — including HHMI Janelia, Dynamicland, Willow Garage, and early SpaceX — have created new fields, won Nobel prizes, and changed the paradigms of entire industries. But even the most successful research organizations struggle to raise money unless there is a clear business case, which leaves public-goods-oriented research in the lurch. Philanthropists are strongly motivated by legacy, so they want to fund things that will last. Because organizations producing public-good R&D rarely have a path to self-sustainability, private funders often hesitate to fund them.
Understanding this problem suggests a potent new way of deploying the federal government’s R&D budget: partnering with philanthropists to build a diverse portfolio of research organizations with autonomy over their own budgets, and then providing long-term support to the most effective of those organizations.
In other words, the federal government should experiment with funding organizations rather than projects.
Such an approach would position the federal government to act like a limited partner (LP) in multiple venture capital funds. In this capacity, the federal government would avoid setting overly specific requirements around how a particular grant is spent. The government would instead set very high-level priorities (e.g., “create new manufacturing paradigms” or simply “do impactful research”), give funded organizations the autonomy to figure out how to best achieve this goal, and then evaluate success after the fact.
The time is right to invest in creative federal research-funding approaches. There is bipartisan support for large increases to federally funded R&D. But pushing huge amounts of money through outdated R&D funding structures is like slamming on the accelerator of a car that needs an engine repair: incredibly inefficient and with the potential to backfire. By contrast, embedding autonomy in a diverse portfolio of organizations could unlock the sort of unexpected, game-changing inventions and discoveries that have driven the American economy: electricity, airplanes, the internet, the transistor, cryptography, and more.
Plan of Action
The current Administration should launch a two-part program at NSF to test a research-funding system that prioritizes organizations over projects.
As Part One of this program, the NSF’s TIP Directorate should pilot a research-funding opportunity in which philanthropically matched grants fund a portfolio of independent research organizations instead of funding specific research initiatives. This pilot funding opportunity should be funded at $100 million per year for 10 years. The Directorate should target funding between 5 and 15 organizations this way, quadratically matching philanthropic funds at values between 100% and 1000% depending on the number of participating philanthropic donors.
As Part Two of this program, NSF should set aside an additional $100 million per year to sponsor independent research organizations with impressive track records for extended periods of time. The Directorate should set a goal of identifying two organizations during the ten-year pilot that would be good candidates for this long-term funding, funding each at $50 million per year.
More detail on each of these program components is provided below.
Part One: Philanthropically matched grants
Partnering with private donors is key to the success of the proposed organization-focused funding opportunity. By funding only organizations that have already raised philanthropic dollars, the federal government will leverage philanthropists’ due diligence on screening applicants to ensure high-potential awardees. Similarly, the funding opportunity should employ quadratic matching funding to use donors’ confidence as an indicator of how much money to give each organization and to reduce bias favoring organizations that are able to raise a large amount of money from a small number of donors.
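The memo does not spell out the exact matching formula, but one plausible reading follows the standard quadratic funding mechanism (Buterin, Hitzig, and Weyl), with the match rate clamped to the 100%–1000% band mentioned in the Plan of Action. The sketch below is illustrative only; the function name, cap handling, and formula choice are our assumptions, not a prescribed design:

```python
import math

def quadratic_match(donations, min_rate=1.0, max_rate=10.0):
    """Illustrative quadratic matching: the match grows with the
    number of distinct donors, not just the total amount raised.

    donations: list of per-donor amounts for one organization.
    Returns the federal match, clamped between min_rate (100%)
    and max_rate (1000%) of the philanthropic total.
    """
    total = sum(donations)
    if total == 0:
        return 0.0
    # Quadratic-funding ideal: (sum of square roots of donations)^2,
    # minus the amount already raised privately.
    ideal = sum(math.sqrt(d) for d in donations) ** 2
    raw_match = ideal - total
    # Clamp the match rate to the program's 100%-1000% band.
    rate = min(max(raw_match / total, min_rate), max_rate)
    return rate * total

# Many small donors earn a larger match than one large donor:
broad = quadratic_match([100_000] * 10)   # 10 donors, $1M total: ~$9M match
single = quadratic_match([1_000_000])     # 1 donor, $1M total: $1M floor
```

Under this sketch, ten donors giving $100,000 each would earn a 900% match, while a single $1 million donor would receive only the 100% floor, directly rewarding breadth of philanthropic support rather than a few large checks.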
Leveraging philanthropic opinion in this way does come with the risk of biasing awards towards organizations working on particularly popular areas or that are particularly good at sales or marketing. The organization-focused funding opportunity could address this risk by establishing a parallel funding pathway whereby a large number of researchers can file a petition for an organization to be selected for funding.
The TIP Directorate obviously must impose additional criteria beyond the endorsements of the philanthropic and research communities. It will be tempting for the Directorate to prioritize funding organizations working on specific, high-interest technology areas or themes. But the goal of this program is to advance the long-term health of the American innovation ecosystem. Often, tomorrow’s high-priority area is one that doesn’t even exist today. To that end, the Directorate should evaluate potential grantee organizations on their “counterfactual impact”: i.e., their capacity to do work that is disincentivized in other institutional structures.
The question of how best to evaluate success of the funding opportunity is a challenging one. It is notoriously hard to evaluate long-term research output. The whole point of this proposal is to move away from short-term metrics and rigid plans, but at the same time the government needs to be responsible to taxpayers. Metrics are the most straightforward way to evaluate outcomes. However, metrics are potentially counterproductive ways to evaluate new and experimental processes because existing metrics presume a specific way of organizing research. We therefore recommend that the TIP Directorate create a Notice of Funding Opportunity to hire an independent, nonpartisan, and nonprofit board whose job is to holistically evaluate funded organizations. The board should include people working in academia, industrial research, government research, and independent research organizations, as well as some “wildcards”. The board should collectively have deep experience performing and guiding high-uncertainty, long-term research and development.
The board would regularly (but not too frequently) solicit opinions on the output and impacts of funded organizations from the program’s philanthropic partners, members of the government, people working with the organizations, unaffiliated researchers, and members of the organizations themselves. At the end of each year, the board should give each organization an evaluation “report card” containing a holistic letter grade and an explanation for that grade. Organizations that receive an F should immediately be expelled from the funding program, as should organizations that receive a D for three years in a row.
Part Two: Invest deeply in demonstrated success
Kanjun Qiu and Michael Nielsen have proposed an important piece of the puzzle: In the same way that governments took over funding libraries once they were started by Gilded Age philanthropists, the government should take over funding immensely successful research organizations today.
At the five-year midpoint and ten-year endpoint of the pilot funding program, the evaluation board should identify any funded organizations that have produced outstanding output. The TIP Directorate should then select up to two of these candidates to receive indefinite government support, at a funding level of $50 million per organization per year. These indefinitely funded organizations would become a line item in the TIP’s budget, to be renewed every year except in extreme circumstances. The possibility of indefinite federal support as an “exit strategy” for philanthropic funders will encourage additional philanthropic partners to participate by providing (i) philanthropically funded organizations with a pathway to becoming self-sustaining and (ii) philanthropies with a clear opportunity to establish a legacy.
What qualifies as “outstanding output”? Like evaluating success, it’s a challenging question. We recommend using the same board-based grading scheme outlined above. Any organization that receives an A grade in two of the past five years or an A+ in any one of the past five years should be eligible for indefinite support. This approach will require grading to be very strict: for instance, an A+ should only be given to an organization that enables Nobel-prize-quality work.
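The eligibility bar above is concrete enough to state as a simple check. This sketch assumes grades arrive as strings covering the past five years, and it treats “A+” as distinct from “A” per the text; both the function name and those assumptions are ours:

```python
def eligible_for_indefinite_support(last_five_grades):
    """Proposed bar for indefinite federal support: an 'A' in at
    least two of the past five years, or an 'A+' in any one of
    them. 'A+' is counted separately from 'A'.
    """
    grades = list(last_five_grades)
    return grades.count("A") >= 2 or "A+" in grades
```

For example, a record of `["A", "B", "A", "C", "B"]` qualifies, while a single A with no A+ does not, which keeps the bar consistent with the very strict grading the memo calls for.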
Building portfolios of independent research organizations is an incredibly effective way of spending government research money. The total federal research budget is almost $160 billion per year. Less than 1% of that could make a massive difference for independent research organizations, most of which have budgets in the $10 million range. Funding especially promising independent research organizations with an additional $10 million or more per year would have a huge effect, empowering organizations that are already doing outstanding work to take their contributions to the next level.
Even the highest-performing private research organizations in the world — like Google DeepMind and HHMI Janelia Farm — have budgets in the range of $200 million per year. Sponsoring a select number of especially high-performing research organizations with an additional $100 million per year would hence have similarly transformative impacts. These large indefinite grants would also provide the major incentives needed to bring the world’s leading philanthropies to the table and to encourage the most cutting-edge independent research organizations to dedicate their talents to the public sector. The sum total of achieving these outcomes would still account for only a tiny fraction of the overall federal R&D budget.
Finally, we emphasize that the goal of this pilot program is not solely to establish an independent research organization portfolio in the TIP Directorate. It is also an opportunity to test a novel research-funding mechanism that could be replicated at numerous other federal agencies.
As part of its American Pandemic Preparedness plan, the Biden Administration should establish an interagency working group (IWG) focused exclusively on the design, funding, and implementation of advance market commitments (AMCs) and prizes for vaccine development. Under an AMC, pharmaceutical companies commit to providing many vaccine doses at a fixed price in return for a per-dose federal subsidy. Prizes can support AMCs by rewarding companies for meeting intermediate technical goals.
The IWG would immediately convene experts to identify suitable targets for ambitious vaccine-development and deployment efforts. The group would then work with stakeholders to implement AMCs and prizes crafted around these targets, offering a concrete and durable demonstration of the Administration’s commitment to proactive pandemic preparedness. As the American Pandemic Preparedness plan argues, an important part of rapid vaccine deployment is maintaining “hot manufacturing capacity”. Clear federal AMCs would create the market incentive needed to sustain such capacity, while simultaneously advancing procurement expertise within the federal government, in line with recent recommendations from a government review on the US supply chain.
Challenge and Opportunity
Vaccines are very cost-effective medical interventions that have played a large role in reducing pathogen-induced deaths over the last 200 years. But vaccines do not yet exist for many diseases, including diseases concentrated in the developing world. Vaccines are undersupplied relative to their social benefit because their target populations are often poor and because strong political pressure for lower prices leads to low expected profits. When new vaccines are approved, scaling up production to fully supply low- and middle-income countries (LMICs) can take up to 15 years. AMCs address these issues by incentivizing vaccine development and hastening production scale-up. Prizes play a supporting role by rewarding intermediate technical milestones along the way.
Vaccine AMCs have a track record of success. In 2007, GAVI, a public-private global health partnership based out of Geneva, launched an AMC for a pneumococcal conjugate vaccine (PCV) that covered pneumococcal strains more common in the developing world. The partnership received its first supply offers in 2009 (a fairly rapid response enabled by the fact that some PCV candidates were already in late-stage clinical trials). Compared to the rotavirus vaccine — which was developed around the same time but did not receive an AMC — PCVs achieved 3–4x greater coverage (defined as the fully vaccinated fraction of the target population). Moreover, new vaccines typically take about 10–15 years to become widely available in LMICs. PCV became available in those countries within a year. This example demonstrates the capacity of AMCs to incentivize rapid scaling. More recently, the United States (through Operation Warp Speed) and several other countries and organizations purchased substantial COVID-19 vaccine doses far in advance of approval, albeit using a more flexible AMC model that prioritized scaling production before data from clinical trials were available.
Plan of Action
To build on the progress and demonstrated success outlined above, the Biden Administration should invest in AMCs and prizes for vaccine development and deployment as part of its American Pandemic Preparedness plan. Below, we detail three specific recommendations for moving forward.
Recommendation 1. Form an Interagency Working Group (IWG) on Rapid Vaccine Innovation
Roles and responsibilities
Vaccine development and manufacturing is a multi-stage process that is too complicated for any single federal agency to manage. The Biden Administration should issue an Executive Order establishing an IWG on Rapid Vaccine Innovation.
Under emergency circumstances, the IWG would be the government hub for time-sensitive vaccine-procurement efforts. Under normal (non-pandemic) circumstances the IWG would focus on extant communicable diseases with a high disease burden and on potential future threats. This latter function would be carried out as follows.
1. Vaccine targeting. A “horizon scanning” IWG subgroup would identify priority targets for rapid vaccine development and broad deployment. The subgroup would consider factors such as pandemic potential, current disease burden, and vaccine tractability. The IWG would also consult with scientists at the Vaccine Research Center (VRC), whose work was essential to the rapid development of COVID-19 vaccines and which already focuses on viruses with pandemic potential, and at the Centers for Disease Control and Prevention (CDC), which already performs pathogen surveillance, in making its determinations. Options for initial vaccine targets could include:
- A universal coronavirus vaccine in response to the emergence of potentially immune-evading variants of COVID-19.
- A universal influenza vaccine, like the one already under early-stage development at the National Institutes of Health (NIH).
- A vaccine against Group A streptococcus (GAS). GAS kills about 500,000 people globally each year, mostly through heart and kidney complications or severe infections. Much of this burden falls on LMICs. GAS also drives high use of antibiotics, which may contribute to antibiotic resistance. A successful AMC for a GAS vaccine would save hundreds of thousands of lives. Fortunately, there are multiple promising GAS vaccine candidates in early trials. A human-challenge model with potential to accelerate development already exists, and relevant experts and the World Health Assembly acknowledge that GAS prevention should be prioritized. Since two of the leading vaccine candidates are being developed by close U.S. allies (Australia and Canada), prioritizing GAS vaccine development would have the added benefit of strengthening the United States and its allies as global tensions rise.
- A better tuberculosis vaccine. The technological distance to a better tuberculosis vaccine is greater than the technological distance to a GAS vaccine. But since tuberculosis likely kills twice as many people each year, development of a tuberculosis vaccine would also have a greater payoff.
- An AMC could be deployed to incentivize rapid scale-up of the recently tested malaria vaccine. This could be a flagship program of the Build Back Better World (B3W) initiative, the United States’ response to China’s Belt and Road Initiative, which includes “health and health security” as one of its four priorities. Scaling up deployment of the malaria vaccine in Africa and Southeast Asia would be an excellent way for the United States to regain influence lost in those regions.
- A vaccine against the Epstein-Barr virus (EBV). Acutely, EBV causes mononucleosis, and it has been linked with multiple cancers and autoimmune diseases; recent studies also indicate a strong connection between EBV and multiple sclerosis. Moderna has recently performed early-stage trials of an mRNA vaccine candidate targeting EBV.
- The Strategic National Stockpile (SNS) purchases and stores substantial quantities of vaccines and therapeutics for availability during an emergency. As more countermeasures are developed and then stocked, the financial burden of maintaining the stockpile increases, since expired medications must be replenished over time. There is already an FDA initiative to extend the shelf life of therapeutics, but a targeted strategy to develop vaccines that are shelf-stable for longer and in more varied conditions could reduce the budgetary burden of stockpile maintenance.
2. Incentive design. Once one or more vaccine targets are identified, an IWG subgroup comprising health economists and budget officers would design the AMC(s) and intermediate prizes intended to spur development and deployment of the target(s). Incentive design would (i) be carried out with substantial input from BARDA, which is familiar with the vaccine-manufacturing landscape, and (ii) consider both the technological distance of the target and market competitiveness. An output from this step would be a Vaccine Incentive Roadmap describing the different prizes and incentives that federal agencies will offer to ensure fast, consistent progress towards development and deployment of the target(s) in question. In other words, the linked prizes included in the roadmap will produce sustained incentives for continued forward progress on vaccine development. More information on this roadmap is provided below.
Structure and participation
The IWG should be structured as an integrated body, with each participating agency providing specific expertise on each aspect of the IWG’s charge. Participants should include senior leaders from the Biomedical Advanced Research and Development Authority (BARDA), the Centers for Disease Control and Prevention (CDC), the Department of Defense (DOD), the Food and Drug Administration (FDA), the U.S. Agency for International Development (USAID), the U.S. International Development Finance Corporation (DFC), and the Vaccine Research Center (VRC). BARDA has a track record of successful vaccine procurement and expertise in negotiating with manufacturers. The VRC’s founding mission is vaccine development, and it has collaborated with manufacturers on large-scale production of multiple vaccines; it would provide expertise on vaccine tractability. Through upfront guidance on minimum efficacy requirements, the FDA will ensure vaccine standards. The FDA will also work with global regulators on the possibility of regulatory reciprocity, akin to its PEPFAR program, which assists low-resource regulators in low- and middle-income countries with decision-making.
The IWG should be chaired by a biosecurity expert housed at the White House Office of Science and Technology Policy (OSTP).
The IWG’s recommendations (regarding both targets and AMC/prize design), once finalized, would be submitted to the Senate Health and House Ways and Means Subcommittee to request funding. Because federal agencies must notify Congress if they plan to disburse large prize sums (with agency-specific thresholds), this submission would also serve as the required formal notification to Congress of prize amounts.
Recommendation 2. Carry out the IWG’s Vaccine Incentive Roadmap
After the IWG has issued its recommendations on vaccine target(s) and incentive (AMC and prizes) design, implementation must follow. Where implementation support comes from will depend on the “technological distance” of the target(s) in question.
Early-stage development focused on in-vitro or animal research should be supported with prizes from BARDA, the Department of Health and Human Services (HHS), and NIH. All federal agencies already have the authority to award prizes under the America COMPETES Act. Initial prizes could be awarded to vaccine candidates that successfully protect an animal model against disease. Later prizes could be awarded to candidates that hit clinical milestones such as completion of a successful Phase 1 trial in humans. We note that while agencies can theoretically pool funds for a multi-stage prize, cumbersome interagency processes mean that it will likely be easier to have separate agencies fund and oversee the separate prizes included in the roadmap.
Later-stage development should be supported with larger prizes or purchases from USAID and DOD. Once a vaccine candidate has reached early-stage human clinical testing, larger prizes and/or different funding mechanisms will likely be required to advance that candidate to later-stage human testing. This is because the cost of moving a vaccine candidate from the preclinical stage to the end of phase 2A (early-stage human clinical testing) ranges from $14 million to $159 million.
It is unlikely that a single federal agency would have the discretionary funds or willingness to sponsor a prize sufficient to incentivize participation in this process. Federal partnerships with private-sector entities and/or philanthropies could supplement federal prize funding. The promise of being a government-approved vendor of a vaccine or a DOD-supported prototype would serve as incentive for external entities to enter into such partnerships. USAID could also leverage its relationships with global health stakeholders and funders to provide incentive funding. Of course, external funding partnerships would be unnecessary if Congress appropriated sufficient designated funding for large vaccine-incentive prizes to relevant agencies.
An alternative to prize funding that would be appropriate for incentivizing later-stage R&D is use of the DOD’s Defense Commercial Solutions Opening (CSO) purchasing authority. DOD could use its CSO authority to pre-purchase vaccine doses in large quantities, effectively creating an AMC. Purchases of up to $100 million can be made through CSO authority. Early prize negotiations would use the leverage provided by becoming a government-approved vendor of vaccines (part of the CSO process) to negotiate for fair prices. A second DOD purchase authority that could be used as an AMC-like incentive is the Other Transaction Authority (OTA), which exempts the DOD from some federal procurement regulations. OTA authority could likely be used to support vaccine research, purchase vaccine prototypes, and pay for some manufacturing of a successful prototype. OTA has also been used to fund research consortia, a possible alternative to a multi-stage prize roadmap. Purchases of up to $20 million can be made through OTA authority. In the context of diseases that affect low- and middle-income countries, a loan from the U.S. International Development Finance Corporation (DFC) may be an option for supplementing an AMC.
Recommendation 3. Permanently expand BARDA’s mandate to include all communicable diseases, expand BARDA’s funding, and make BARDA the IWG’s permanent home
An IWG is a powerful tool for bringing federal agencies together. With existing prize authority and an administration that prioritizes vaccine development and deployment, much could be accomplished through only the steps outlined above. However, achieving truly transformative results requires a permanent and sustainably funded federal agency to be working consistently on advancing vaccines. Otherwise, future administrations may cancel ongoing IWG projects and/or fail to follow through. As the part of the federal government with the most expertise in therapeutics procurement, BARDA is an ideal permanent home for the IWG’s functions.
BARDA’s mandate is currently limited to biological, chemical, or radiological threats to the health of Americans. This mandate should be expanded to include all important communicable diseases. The newly empowered BARDA would manage public-private partnerships for vaccine procurement, while the NIH would remain the fundamental health-research arm of the U.S. government. Expanding BARDA’s mandate would require Congressional action. Congress would need to amend the Pandemic and All-Hazards Preparedness and Advancing Innovation Act appropriately, and would also need to appropriate specific funding for BARDA to carry out the roles and responsibilities of the IWG over the long term.
Prizes and AMCs only pay out when a product that meets pre-specified requirements is approved, so taxpayers won’t pay for any failures.
For technologically “close” vaccine targets with a high chance of imminent Phase 3 trial success, an AMC incentivizes rapid scale-up of manufacturing and ensures that more doses reach more people sooner. The AMC does this by circumventing a type of “hold-up” problem wherein purchasers negotiate vaccine prices down to per-unit costs. The 2007 GAVI Pneumococcus AMC was of this type. A GAS or malaria vaccine would similarly be “close” targets.
For more technologically distant targets, AMCs should incorporate “kill switches” that give future customers of the vaccine an effective veto over the AMC by way of not paying co-payments. This feature is designed to be a final check on the utility of a vaccine and avoids the difficulty of specifying standards for a vaccine many years ahead of time. An AMC structured in this way works well if a company manufactures a vaccine that meets pre-specified technical details but for hard-to-predict reasons is not useful.
For an especially distant target, a series of prize competitions could substitute for a traditional AMC. In this scenario, an initial prize could be awarded for any vaccine candidates that successfully protect an animal model against disease. A later prize could be awarded to candidates that hit clinical milestones such as completion of a Phase 1 trial in humans.
Other details of AMC and/or prize implementation depend on the market structure and cannot be determined ahead of time. For instance, the optimal AMC design is very different in monopoly versus competitive markets.
Operation Warp Speed spent about $12 billion on COVID-19 vaccine development and purchased hundreds of millions of vaccine doses far in advance of approval or clinical trials. While this was very effective, it is unlikely that Congress would be willing to appropriate such a large sum of money — or see that money disbursed so freely — in non-pandemic situations. A multi-stage prize process still incentivizes vaccine development and deployment but does so at a lower cost.
The government could fund research into market segmentation for vaccines, since many who are vaccine-hesitant are avid consumers of alternative health products/supplements. There may be marketing and promotional strategies inspired by “natural” supplements that can increase vaccine uptake.
The federal government does fund influenza vaccine preparation, but that funding is only for a seasonal flu vaccine that works with 40–60% efficacy: a rate well below what other vaccines, such as the measles (97%) and mumps (88%) vaccines, achieve. A pandemic influenza with an unexpected genetic background could still catch us by surprise. Investing in a universal influenza vaccine is essential to preparing for that eventuality.
One issue is staffing. Drafting a high-quality AMC contract may require legal and economic expertise that isn’t available in-house at federal agencies, so the administration may need to engage external AMC experts. Another issue may be ensuring that activities outlined herein do not fall between interagency “cracks”. Assigning dedicated staff to oversee each activity will be important. A third issue is the potential for interagency friction. The more agencies that are involved with prize design, the longer it may take to design and authorize a given prize. One possible solution is to have only one agency administer each prize, with informal input from staff in other agencies when required.
Scientists and scholars in the United States are faced with a relatively narrow set of traditional career pathways. Our lack of creativity in defining the scholarly landscape is limiting our nation’s capacity for innovation by stifling exploration, out-of-the-box thinking, and new perspectives.
This does not have to be the case. The rise of the gig economy has positioned independent scholarship as an effective model for people who want to continue doing research outside of traditional academic structures, in ways that best fit their life priorities. New research institutes are emerging to support independent scholars and expand access to the knowledge economy.
The Biden-Harris Administration should further strengthen independent scholarship by (1) facilitating partnerships between independent scholarship institutions and conventional research entities; (2) creating professional-development opportunities for independent scholars; and (3) allocating more federal funding for independent scholarship.
Challenge and Opportunity
The academic sector is often seen as a rich source of new and groundbreaking ideas in the United States. But it has become increasingly evident that pinning all our nation’s hopes for innovation and scientific advancement on the academic sector is a mistake. Existing models of academic scholarship are limited, leaving little space for exploration, out-of-the-box thinking, and new perspectives. Our nation’s universities, which are shedding full-time faculty positions at an alarming rate, no longer offer career opportunities as reliable and attractive for young thinkers as they once did. Conventional scholarly career pathways, which were initially created with male breadwinners in mind, are strewn with barriers to broad participation. And outside of academia, there is a distinct lack of market incentive structures that support geographically diverse development and implementation of new ideas.
These problems are compounded by the fact that conventional scholarly training pathways are long, expensive, and unforgiving. A doctoral program takes an average of 5.8 years and $115,000 to complete. The federal government spends $75 billion per year on financial assistance for students in higher education. Yet inflexible academic structures prevent our society from maximizing returns on these investments in human capital. Individuals who pursue and complete advanced scholarly training but then opt to take a break from the traditional academic pipeline — whether to raise a family, explore another career path, or deal with a personal crisis — can find it nearly impossible to return. This problem is especially pronounced among first-generation students, women of color, and low-income groups. A 2020 study found that of the 67% of Ph.D. students who wanted to stay in academia after completing their degree, only 30% did. Outside of academia, though, there are few obvious ways for even highly trained individuals to contribute to the knowledge economy. The upshot is that every year, innumerable great ideas and scholarly contributions are lost because ideators and scholars lack suitable venues in which to share them.
Fortunately, an alternative model exists. The rise of the gig economy has positioned independent scholarship as a viable approach to work and research. Independent scholarship recognizes that research doesn’t have to be a full-time occupation, be conducted via academic employment, or require attainment of a certain degree. By being relatively free of productivity incentives (e.g., publish or perish), independent scholarship provides a flexible work model and career fluidity that allows people to pursue research interests alongside other life and career goals.
Online independent-scholarship institutes (ISIs) like the Ronin Institute, IGDORE, and others have recently emerged to support independent scholars. By providing an affiliation, a community, and a boost of confidence, such institutes empower independent scholars to do meaningful research. Indeed, the original perspectives and diverse life experiences that independent scholars bring to the table increase the likelihood that such scholars will engage in high-risk research that can deliver tremendous benefits to society.
But it is currently difficult for ISIs to help independent scholars reach their full potential. ISIs generally cannot provide affiliated individuals with access to resources like research ethics review boards, software licenses, laboratory space, scientific equipment, computing services, and libraries. There is also concern that without intentionally structuring ISIs around equity goals, ISIs will develop in ways that marginalize underrepresented groups. ISIs (and individuals affiliated with them) are often deemed ineligible for research grants, and/or are outcompeted for grants by well-recognized names and affiliations in academia. Finally, though independent scholarship is growing, there is still relatively little concrete data on who is engaging in independent scholarship, and how and why they are doing so.
Strengthening support for ISIs and their affiliates is a promising way to fast-track our nation towards needed innovation and technological advancements. Augmenting the U.S. knowledge-economy infrastructure with agile ISIs will pave the way for new and more flexible scholarly work models; spur greater diversity in scholarship; lift up those who might otherwise be lost Einsteins; and increase access to the knowledge economy as a whole.
Plan of Action
The Biden-Harris Administration should consider taking the following steps to strengthen independent scholarship in the United States:
- Facilitate partnerships between independent scholarship institutions and conventional research entities.
- Create professional-development opportunities for independent scholars.
- Allocate more federal funding for independent scholarship.
More detail on each of these recommendations is provided below.
1. Facilitate partnerships between ISIs and conventional research entities.
The National Science Foundation (NSF) could provide $200,000 to fund a Research Coordination Network or INCLUDES alliance of ISIs. This body would provide a forum for ISIs to articulate their main challenges and identify solutions specific to the conduct of independent research (see FAQ for a list) — solutions may include exploring Cooperative Research & Development Agreements (CRADAs) as mechanisms for accessing physical infrastructure needed for research. The body would help establish ISIs as recognized complements to traditional research facilities such as universities, national laboratories, and private-sector labs.
NSF could also include ISIs in its proposed National Networks of Research Institutes (NNRIs). ISIs meet many of the criteria laid out for NNRI affiliates, including access to cross-sectoral partnerships (many independent scholars work in non-academic domains), untapped potential among diverse scholars who have been marginalized by — or who have made a choice to work outside of — conventional research environments, novel approaches to institutional management (such as community-based approaches), and a model that truly supports the “braided river” or “ecosystem” career-pathway model.
The overall goal of this recommendation is to build ISI capacity to be effective players in the broader knowledge-economy landscape.
2. Create professional-development opportunities for independent scholars.
To support professional development among ISIs, the U.S. Small Business Administration and/or the NSF America’s Seed Fund program could provide funding to help ISI staff develop their business models, including funding for training and coaching on leadership, institutional administration, financial management, communications, marketing, and institutional policymaking. To support professional development among independent scholars directly, the Office of Postsecondary Education at the Department of Education — in partnership with professional-development programs like Activate, the Department of Labor’s WANTO, and the Minority Business Development Agency — can help ISIs create professional-development programs customized to the unique needs of independent scholars. Such programs would provide mentorship and apprenticeship opportunities for independent scholars (particularly for those underrepresented in the knowledge economy), led by scholars experienced with working outside of conventional academia.
The overall goal of this recommendation is to help ISIs and individuals create and pursue viable work models for independent scholarship.
3. Allocate more federal funding for independent scholarship.
Federal funding agencies like NSF struggle to diversify the types of projects they support, despite offering funding for exploratory high-risk work and for early-career faculty. A mere 4% of NSF funding goes to “other” entities outside of private industry, federally supported research centers, and universities. But outside of the United States, independent scholarship is recognized and funded. NSF and other federal funding agencies should consider allocating more funding for independent scholarship. Funding opportunities should support individuals over institutions, have low barriers to entry, and prioritize part-time funding over longer periods rather than full funding for shorter periods.
Funding opportunities could include:
- Funding for seed-grant programs administered by ISIs. Federal agencies already have authority to support seed-grant programs — like the National Aeronautics and Space Administration (NASA)’s impactful program at Earth Science Information Partners — as prize competitions.
- Funding research awards for individual independent scholars. For instance, Congress could consider amending the 2021 Supporting Early-Career Researchers Act to allow NSF to award funding to researchers who are not affiliated with an “institution of higher education”, as well as to award part-time funding.
- An NSF program that exclusively funds innovative, high-risk research led by scholars outside of universities, federally supported research centers, and private-sector labs.
- An NSF-funded research effort to capture basic information about independent scholars in order to provide them with better support. The effort would strive to understand why independent scholars choose not to work with a conventional research institution, what their work models look like, and their greatest challenges and needs.
Our nation urgently needs more innovative, broadly sourced ideas. But limited traditional career options are discouraging participation in the knowledge economy. By strengthening independent scholarship institutes and independent scholarship generally, the Biden-Harris Administration can help quickly diversify and grow the pool of people participating in scholarship. This will in turn fast-track our nation towards much-needed scientific and technological advancements.
The traditional academic pathway consists of 4–5 years of undergraduate training (usually unfunded), 1–3 years for a master’s degree (sometimes funded; not always a precondition for enrollment in a doctoral program), 3–6+ years for a doctoral degree (often at least partly funded through paid assistantships), 2+ years of a postdoctoral position (fully funded at internship salary levels), and 5–7 years to complete the tenure-track process culminating in appointment to an Associate Professor position (fully funded at professional salary levels).
Independent scholarship in any academic field is, as defined by the Effective Altruism Forum, scholarship “conducted by an individual who is not employed by any organization or institution, or who is employed but is conducting this research separately from that”.
Independent scholars can draw on their varied backgrounds and professional experience to bring fresh and diverse worldviews and networks to research projects. Independent scholars often bring a community-oriented and collaborative approach to their work, which is helpful for tackling pressing transdisciplinary social issues. For students and mentees, independent scholars can provide connections to valuable field experiences, practicums, research apprenticeships, and career-development opportunities. In comparison to their academic colleagues, many independent scholars have more time flexibility, and are less prone to being influenced by typical academic incentives (e.g., publish or perish). As such, independent scholars often demonstrate long-term thinking in their research, and may be more motivated to work on research that they feel personally inspired by.
An ISI is a legal entity or organization (e.g., a nonprofit) that offers an affiliation for people conducting independent scholarship. ISIs can take the form of research institutes, scholarly communities, cooperatives, and others. Different ISIs can have different goals, such as emphasizing work within a specific domain or developing different ways of doing scholarship. Many ISIs exist solely online, which allows them to function in very low-cost ways while retaining a broad diversity of members. Independent scholarship institutes differ from professional societies, which do not provide an affiliation for individual researchers.
As the Ronin Institute explains, federal grant agencies and many foundations in the United States restrict their support to individuals affiliated with legally recognized classes of institutions, such as nonprofits. For individual donors, donations made to independent scholars via nonprofits are tax-deductible. Being affiliated with a nonprofit dedicated to supporting independent scholars enables those scholars to access the funding needed for research. In addition, many independent scholars find value in being part of a community of like-minded individuals with whom they can collaborate and share experiences and expertise.
- Canadian Academy of Independent Scholars (Canada)
- Independent Scholars Association of Australia (Australia)
- Slowopen Science Laboratory (France)
- Campus Orléon (Netherlands)
- Institute for Globally Distributed Open Research and Education (Sweden)
- Complex Biological Systems Alliance (United States)
- Institute for Historical Study (United States)
- Integrated Behavioral Health Research Institute (United States)
- Minnesota Independent Scholars’ Forum (United States)
- Ronin Institute for Independent Scholarship (United States)
- San Diego Independent Scholars (United States)
- Postdoctoral Institute for Computational Studies (United States)
- Princeton Research Forum (United States)
Universities are designed to support large complex grants requiring considerable infrastructure and full-time support staff; their incentive structures for faculty and students mirror these needs. In contrast, research conducted through an independent-scholarship model is often part-time, inexpensive, and conducted by already trained researchers with little more than a personal computer. With their mostly online structures, ISIs can be very cost effective. They have agile and flexible frameworks, with limited bureaucracy and fewer competing priorities. ISIs are best positioned to manage grants that are stand-alone, can be administered with lower indirect rates, require little physical research infrastructure, and fund individuals partnering with collaborators at universities. While toxic academic environments often push women and minority groups out of universities and academia, agile ISIs can take swift and decisive action to construct healthier work environments that are more welcoming of non-traditional career trajectories. These qualities make ISIs great places for testing high-risk, novel ideas.
- Agreements to share library resources.
- Multi-institution consortia, including consortia established to serve specific regional missions. Here are examples of US consortia.
- Memoranda of understanding that formalize a variety of institutional-level collaborations, such as collaborations in which university-run Institutional Review Boards (IRB) for research ethics review serve as external IRBs for other types of entities.
- Cooperative Research & Development Agreements (CRADAs) providing avenues for non-federal parties to access the physical research infrastructure that exist at federal laboratories.
Congress allocates billions of dollars annually to Alzheimer’s research in hopes of finding an effective prophylactic, treatment, or cure. But these massive investments have little likelihood of paying off absent a game-changing improvement in our present knowledge of biology. Funds currently earmarked for Alzheimer’s research would be more productive if they were instead invested into deepening understanding of aging biology at the cell, tissue, and organ levels. Fundamental research advances in aging biology would directly support better outcomes for patients with Alzheimer’s as well as a plethora of other chronic diseases associated with aging — diseases that are the leading cause of mortality and disability, responsible for 71% of annual deaths worldwide and 79% of years lived with disability. Congress should allow the National Institute on Aging to spend funds currently restricted for research into Alzheimer’s specifically on research into aging biology more broadly. The result would be a society better prepared for the imminent health challenges of an aging population.
Challenge and Opportunity
The NIH estimates that 6.25 million Americans now have Alzheimer’s disease, and that due to an aging population, that number will more than double to 13.85 million by the year 2060. The Economist similarly estimates that 50 million people worldwide suffer from dementia, a number projected to reach 150 million by the year 2050. These dire statistics, along with astute political maneuvering by Alzheimer’s advocates, have led Congress to earmark billions of dollars of federal health-research funds for Alzheimer’s disease.
President Obama’s FY2014 and FY2015 budget requests explicitly cited the need for additional Alzheimer’s research at the National Institutes of Health (NIH). In FY2014, Congress responded by giving the NIH’s National Institute on Aging (NIA) a small but disproportionate increase in funding relative to other national institutes, “in recognition of the Alzheimer’s disease research initiative throughout NIH.” Congress’s explanatory statement for its FY2015 appropriations laid out good reasons not to earmark a specific portion of NIH funds for Alzheimer’s research, stating:
“In keeping with longstanding practice, the agreement does not recommend a specific amount of NIH funding for this purpose or for any other individual disease. Doing so would establish a dangerous precedent that could politicize the NIH peer review system. Nevertheless, in recognition that Alzheimer’s disease poses a serious threat to the Nation’s long-term health and economic stability, the agreement expects that a significant portion of the recommended increase for NIA should be directed to research on Alzheimer’s. The exact amount should be determined by scientific opportunity of additional research on this disease and the quality of grant applications that are submitted for Alzheimer’s relative to those submitted for other diseases.”
But this position changed suddenly in FY2016, when Congress earmarked $936 million for Alzheimer’s research. The amount earmarked by Congress for Alzheimer’s research has risen almost linearly every year since then, reaching $3.1 billion in FY2021 (Figure 1).
This tsunami of funding has been unprecedented for the NIA. The seemingly limitless availability of money for Alzheimer’s research has created a perverse incentive for the NIH and NIA to solicit additional Alzheimer’s funding, even as agencies struggle to deploy existing funding efficiently. The NIH Director’s latest report to Congress on Alzheimer’s funding suggests that with an additional $226 million per year in funding, the NIH and NIA could effectively treat or prevent Alzheimer’s disease and related dementias by 2025.
This is a laughable untruth. No cure for Alzheimer’s is in the offing. Progress on Alzheimer’s research is stalling and commercial interest is declining. Of the 413 Alzheimer’s clinical trials performed in the United States between 2002 and 2012, 99.6% failed. Recent federal investments seemed to be paying off when in 2021 the Food and Drug Administration (FDA) approved Aduhelm, the first new treatment for Alzheimer’s since 2003. But the approval was based on the surrogate endpoint of amyloid plaques in the brain as observed by PET scans, not on patient outcomes. In its first months on the market, Aduhelm visibly flopped. Scientists subsequently called on the FDA to withdraw marketing approval for the drug. If an effective treatment were likely by 2025, Big Pharma would be doubling down. But Pfizer announced it was abandoning Alzheimer’s research in 2018.
The upshot is clear: lavish funding on treatments and cures for a disease can only do so much absent knowledge of that disease’s underlying biological mechanisms. We as a society must resist the temptation to waste money on expensive shots in the dark, and instead invest strategically into understanding the basic biochemical and genetic mechanisms underlying aging processes at the cell, tissue, and organ levels.
Plan of Action
Aging is the number-one risk factor for Alzheimer’s disease, as it is for many other diseases. All projections of an increasing burden of Alzheimer’s are based on the fact that our society is getting older. And indeed, even if a miraculous cure for Alzheimer’s were to emerge, we would still have to contend with an impending onslaught of other age-related medical and social costs.
Economists and scientists have estimated that extending average life expectancy in the United States by one year is worth $38 trillion. But funding for basic research on aging remains tight. Outside of the NIA, several foundations in the United States are actively funding aging research: the American Federation for Aging Research (AFAR), The Glenn Foundation for Medical Research, and the SENS Foundation each contribute a few million per year for aging research. Privately funded fast grants have backed bold aging projects with an additional $26 million.
This relatively small investment in basic research has generated billions in private funding to commercialize findings. Startups raised $850 million in 2018 to target aging and age-related diseases. Google’s private research arm Calico is armed with billions and a pharmaceutical partner in Abbvie, and the Buck Institute’s Unity Biotechnology launched an initial public offering (IPO) in 2018. In 2021, Altos Labs raised hundreds of millions to commercialize cellular reprogramming technology. Such dynamism and progress in aging research contrasts markedly with the stagnation in Alzheimer’s research and indicates that the former is a more promising target for federal research dollars.
Now is the time for the NIA to drive science-first funding for the field of aging. Congress should maintain existing high funding levels at NIA, but this funding should no longer be earmarked solely for Alzheimer’s research. In every annual appropriation since FY2016, the House and Senate appropriations committees have issued a joint explanatory statement that has the force of law and includes the Alzheimer’s earmark. These committees should revert to their FY2015 position against politically directing NIH funds towards particular ends. The past six years have shown such political direction to be a failed experiment.
Removing the Alzheimer’s earmark would allow the NIA to use its professional judgment to fund the most promising research into aging based on scientific opportunity and the quality of the grant applications it receives. We expect that this in turn would cause agency-funded research to flourish and stimulate further research and commercialization from industry, as privately funded aging research already has. Promising areas that the NIA could invest in include building tools for understanding molecular mechanisms of aging, establishing and validating aging biomarkers, and funding more early-stage clinical trials for promising drugs. By building a better understanding of aging biology, the NIA could do much to render even Alzheimer’s disease treatable.
In 2009, a private task force calling itself the Alzheimer’s Study Group released a report entitled “A National Alzheimer’s Strategic Plan.” The group, co-chaired by former Speaker of the House Newt Gingrich and former Nebraska Senator Bob Kerrey, called on Congress to immediately increase funding for Alzheimer’s and dementia research at the NIH by $1 billion per year.
In response to the report, Senators Susan Collins and Evan Bayh introduced the National Alzheimer’s Project Act (NAPA), which was signed into law in 2011 by Barack Obama. NAPA requires the Department of Health and Human Services to produce an annual assessment of the nation’s progress in preparing for an escalating burden of Alzheimer’s disease. This annual assessment is called the National Plan to Address Alzheimer’s Disease. The first National Plan, released in 2012, established a goal of effectively preventing or treating Alzheimer’s disease by 2025. In addition, the Alzheimer’s Accountability Act, which passed in the 2015 omnibus, gives the NIH director the right and the obligation to report directly to Congress on the amount of additional funds needed to meet the goals of the national plan, including the self-imposed 2025 goal.
Understanding diseases that progress over a long period of time, such as Alzheimer’s, requires complex clinical studies. Lessons learned from past research indicate that findings from animal models don’t necessarily translate to humans when it comes to such diseases. Heterogeneity in disease presentation, imprecise clinical measures, uncertain relevance of target biomarkers, and difficulty in understanding underlying causes exacerbate the problem for Alzheimer’s specifically.
Alzheimer’s is also a whole-system, multifactorial disease. Dementia is associated with a decreased variety of gut microbiota. Getting cataract surgery seemingly reduces Alzheimer’s risk. Inflammatory responses from the immune system can aggravate neurodegenerative diseases. The blood-brain barrier takes up less plasma protein with age. The list goes on. Understanding Alzheimer’s hence requires understanding of many other biological systems.
Alzheimer’s is named after Alois Alzheimer, a German scientist credited with publishing the first case of the disease in 1906. In the post-mortem brain sample of his patient, he identified extracellular deposits, now known as amyloid plaques, clumps of amyloid-beta (Aβ) protein. In 1991, David Allsop and John Hardy proposed the amyloid hypothesis after discovering a pathogenic mutation in the APP (Aβ precursor protein) gene on chromosome 21. Such a mutation led to increased Aβ deposits which present as early-onset Alzheimer’s disease in families.
The hypothesis suggested that Alzheimer’s follows the pathological cascade of Aβ aggregation → tau phosphorylation → neurofibrillary tangles → neuronal death. These results indicated that Aβ could be a drug target for Alzheimer’s disease.
In the 1990s, Elan Pharmaceuticals proposed a vaccine against Alzheimer’s aimed at stopping or slowing the formation of Aβ aggregates. It was a compelling idea. In the following decades, drug development centered around this hypothesis, leading to the current approaches to Alzheimer’s treatment: Aβ inhibition (β- and γ-secretase inhibitors), anti-aggregation (metal chelators), Aβ clearing (protease-activity regulating drugs), and immunotherapy.
In the last decade, the growing arsenal of Aβ therapies fueled the excitement that we were close to an Alzheimer’s treatment. The 2009 report, the 2012 national plan, and Obama’s funding requests seemed to confirm that this was the case.
However, the strength of the amyloid hypothesis has declined since then. Since the shutdown of the first Alzheimer’s vaccine in 2002, numerous other pharmaceutical companies have tried and failed at creating their own vaccine, despite many promising assets shown to clear Aβ plaques in animal models. Monoclonal antibody treatments (of which aducanumab is an example) have reduced free plasma concentrations of Aβ by 90%, binding to all sorts of Aβ from monomeric and soluble Aβ to fibrillar and oligomeric Aβ. These treatments have suffered high-profile late-stage clinical trial failures in the last five years. Similar failures surround other approaches to Alzheimer’s drug development.
There is no doubt that these therapies are successful at reducing Aβ concentration in pre-clinical trials. But given the continual failure of these drugs in late-stage clinical trials, perhaps Aβ does not play as major a role in the disease mechanism as hypothesized.
Exclusionary zoning is damaging equity and inhibiting growth and opportunity in many parts of America. Though the Supreme Court struck down expressly racial zoning in 1917, many local governments persist with zoning that discriminates against low-wage families — including many families of color.1 Research has connected such zoning to racial segregation and greater disparities in measurable outcomes.2
By contrast, real-world examples show that flexible zoning rules — rules that, for instance, allow small groups to opt into higher housing density while bypassing veto players, or that permit some small areas to opt out of proposed zoning reforms — can promote housing fairness, supply, and sustainability. Yet bureaucratic and knowledge barriers inhibit broad implementation of such practices. To facilitate zoning reform, the Department of Housing and Urban Development should (i) draft model smarter zoning codes, (ii) fund efforts to evaluate the impact of smarter zoning practices, (iii) support smarter zoning pilot programs at the state and local levels, and (iv) coordinate with other federal programs and agencies on a whole-of-government approach to promote smarter zoning.
Challenge and Opportunity
Economists across the political spectrum agree that restrictive zoning laws banning inclusive, climate-friendly, multi-family housing have made housing less affordable, increased racial segregation, and damaged the environment. Better zoning would enable fairer housing outcomes and boost growth across America.
The Biden-Harris administration is actively working to eliminate exclusionary zoning in order to advance the administration’s priorities of racial justice, respect for working-class people, and national unity. But in many states with unaffordable housing, local politics have made zoning reform painfully slow and/or precarious. In California, for instance, zoning-reform activists have garnered significant victories. But a recently launched petition to limit state power over zoning might undo some of the progress made so far. There is an urgent need for strategies to overcome political gridlock limiting or inhibiting zoning reform at the state and local levels.
Fortunately, a suite of new smarter zoning techniques can achieve needed reforms while alleviating political concerns. Consider Houston, TX, which faced resistance in reducing suburban minimum lot sizes to allow more housing. To overcome political obstacles, the city gave individual streets and blocks the option to opt out of the proposed reform. That simple technique reduced resistance and allowed the zoning measure to pass. The powerful incentives from increased land value meant that although opt-outs reached nearly 50% in one neighborhood, they were rare in many others.3 The American Planning Association similarly published a proposal to allow opt-ins for upzoning at a street-by-street level — a practice that would allow small groups to bypass those who currently block reform and capture the huge incentives of upzoning.
In fact, opt-ins and opt-outs are proven methods of overcoming political obstacles in other policy fields, including parking reform and "play streets" in urban policy. Opt-ins and opt-outs reduce officials' and politicians' concerns that a vocal and unrepresentative group will blame them for reforms. While reformers may fear that allowing exemptions could weaken zoning reforms, the enormous increase in land value created by upzoning in unaffordable areas provides powerful incentives for small groups of homeowners to choose upzoning of their own lots. And by offering a pathway to circumvent opposition, flexible smarter zoning reforms can expedite construction of abundant new affordable housing that substantially improves equity, opportunity, and quality of life for working-class Americans.
Absent action by HUD to encourage trials of innovative techniques, the pace of reform will continue to be much slower than it needs to be. Campaigners at the state and local government levels will continue to face opposition and setbacks. The pace of growth and innovation will be damaged, as bad zoning continues to block the benefits of mobility and opportunity. And disadvantaged minorities will continue to suffer the most from unjust and exclusionary zoning rules.
Plan of Action
The Department of Housing and Urban Development (HUD) should take the following steps to facilitate zoning reform in the United States:
1. Create a model Smarter Zoning Code
HUD’s Office of Policy Development and Research, working with the Environmental Protection Agency (EPA)’s Office of Community Revitalization, should produce a model Smarter Zoning Code that state and local governments can adopt and adapt. The Smarter Zoning Code would provide a variety of options for state and local governments to minimize backlash against zoning reforms by reducing effects on other streets or blocks. Options could include:4
- Allowing a street or block to opt-in to upzoning by filing a verified petition signed by a qualified majority of the registered voters residing on that street or block.
- If the petition is filed by the residents of a block of houses surrounded by streets, development pursuant to the upzoning should be required to leave untouched the fronts of the houses facing those streets (to minimize impact on residents whose lots are not included in the upzoning).
- Residents can be given the option to attach a design code to their petition.
- Anti-displacement rules. Although most development through smarter zoning will likely happen in neighborhoods dominated by owner-occupied single-family homes, all resident renters should be protected by rules that preserve existing anti-eviction and rent-control provisions. Rules should additionally ensure that no development pursuant to smarter zoning can proceed unless renters are protected, and should include provisions to prevent evasion by landlords.5
- Height restrictions and angled light planes to protect sunlight to other blocks.
- Setback rules that can be waived by adjacent homeowners to allow development of townhouses or multifamily units.
- Compensation payable by a developer to adjoining residents who are adversely affected by development permitted under zoning reform.
- Establishment of controlled parking districts surrounding a street or block that votes to upzone, with free parking stickers issued to residents of adjoining streets to protect their parking access.
- Impact fees, tax increment local transfers6, community-benefit agreements, or other methods to address spillover effects of new developments.
- Where appropriate, provisions to allow each local government to mitigate the scale of change. For example, local governments could limit opt-in upzoning to no more than four floors of housing in areas that are currently zoned exclusively for single-family homes.
A draft of a model Smarter Zoning Code could be developed for $1 million and could be tested by seeking views from a range of stakeholders for $5 million. The model code should be highlighted in HUD’s Regulatory Barriers Clearinghouse.
2. Collect and showcase evidence on effectiveness and impacts of smarter zoning practices
As part of the list of policy-relevant questions in its systematic plan under the Foundations for Evidence-Based Policymaking Act of 20187, HUD should include the question of which types of zoning approaches, including smarter zoning, can best (i) help to address or overcome political and other barriers to meeting fair-housing standards, and (ii) support plentiful supplies of affordable housing to address equity and other issues.
HUD should also provide research grants under the Unlocking Possibilities Program8, once passed, to evaluate the impact of Smarter Zoning techniques, suggest improvements to the model Smarter Zoning Code, and prepare and showcase successful case studies of flexible zoning.
Finally, demonstrated thought leadership by the Biden-Harris Administration could kickstart a new wave of innovation in smarter zoning that helps address historic equity issues. HUD should work with the White House and key stakeholder groups (e.g., the American Planning Association, the National League of Cities, the National Governors’ Association) to host a widely publicized event on Planning for Opportunity and Growth. The event would showcase proven, innovative zoning practices that can help state and local government representatives meet housing and growth objectives.
3. Launch smarter-zoning pilot projects
Subject to funding through the Unlocking Possibilities Program, the HUD Secretary should direct HUD’s Office of Technical Assistance and Management to launch a collection of pilot projects for the implementation of the model Smarter Zoning Code. Specifically, HUD would provide planning grants to help states, local governments, and potentially other groups improve skills and technical capacity needed to implement or promote Smarter Zoning reforms. The technical assistance to help a local government adopt smarter zoning, where possible under existing state law, should cost less than $100,000; technical assistance for a state to enable smarter zoning on a state-wide basis should cost less than $500,000.
4. Promote federal incentives and coordination around smarter zoning
Model codes, evidence-based practices, and planning grants can help advance upzoning in areas that are already interested. The federal government could also provide stronger incentives to encourage more reluctant areas to adopt smarter zoning. It is lawful to condition a portion of federal funds upon criteria that are “directly related to one of the main purposes for which [such funds] are expended”, so long as the financial inducement is not “so coercive as to pass the point at which ‘pressure turns into compulsion’”.9 For instance, one of the purposes of highway funds is to reduce congestion in interstate traffic. Failure to allow walkable urban densification limits the opportunities for travel other than by car, which in turn increases congestion on federal highways. It would therefore be constitutional for the federal government to withhold 5% of federal highway funds from states that do not enact smarter zoning provisions. Similarly, funding for affordable home care proposed under the Build Back Better Act will be less effective in areas where exclusionary zoning makes it less affordable for carers to live. A portion of such funding could be withheld from states that do not pass smarter zoning laws. Similar action could be taken on federal funds for education, where unaffordable housing affects the supply of teachers, and on federal funds to fight climate change, because sprawl driven by single-family zoning increases carbon emissions.
HUD’s Office of Fair Housing and Equal Opportunity should consult with other federal bodies on what federal funding can be made conditional upon participation by state and local governments in smarter zoning programs, as well as on when implementing such conditions would require Congressional approval. HUD should similarly consult with other federal bodies on creative opportunities to incentivize smarter zoning through existing programs. If Congress does not wish to amend the law, it may be possible for other agencies to condition funding upon implementation of smarter zoning provisions at state or local level. Although smarter zoning will also benefit existing residents, billions of dollars of incentives may be needed for the most reluctant states and local governments to overcome existing veto players to get more equitable zoning.
Urgent reform is needed to address historic damage caused to equity by zoning rules, originally explicitly racist in language, that remain economically exclusionary in intent and racially discriminatory in impact. By modeling smarter zoning practices, demonstrating their benefits, providing financial and technical assistance for implementation, and conditioning federal funding upon adoption, HUD can accelerate and expand adoption of beneficial flexible zoning reforms nationwide.
Many proposed zoning reforms that, if implemented, would go the furthest to improve equity and provision of fair housing have encountered considerable political challenges in areas where exclusionary zoning is most prevalent and damaging. Flexible zoning reforms may have apparently less sweeping impacts than traditional zoning reforms, but are also far more feasible in practice. Providing additional ideas to help overcome those political barriers may be a powerful way to unlock improvements in equity.
To be clear, there is no suggestion to give small groups the power to opt into zoning that is more restrictive than current rules. Flexible zoning reform can often be more powerful than traditional zoning reform. Members of the Squamish Nation recently demonstrated the enormous power of economic incentives to upzone when 87% voted to approve the construction of 6,000 new homes on their territory. Similarly, a large fraction of the residents of Houston — recognizing that upzoning could make their properties more valuable — did not choose to opt their blocks out of recent zoning reform. Incentives for apartment owners to vote for redevelopment under the TAMA 38 scheme in Israel accounted for 35% of the new homes built in Tel Aviv in 2020.
If no individual landowners wanted to gain the economic benefits of being permitted to develop their lots, there would be no demand from others for zoning rules to stop development from proceeding. Most existing processes governing upzoning give disproportionate weight to the opinions of vocal but unrepresentative groups who want no change, even in areas where a large majority would otherwise support reform. Direct democracy at very small scales can let small groups of residents bypass those veto players and capture the economic benefits of allowing more housing.
Many state and local leaders are aware of the enormous equity and growth benefits that better, more inclusionary zoning can deliver. However, such leaders are often frustrated by political and public resistance to simple upzoning attempted via traditional zoning processes. Smarter zoning techniques can allow upzoning to proceed in the many blocks and streets where it is popular, without being frustrated by the resistance from the few residents among whom it is not.
Smarter zoning proposals are designed to supplement and assist traditional zoning reforms, not replace them. “Opt-in” zoning mechanisms are designed to allow opt-ins only to more equitable upzoning, not to more exclusionary zoning, so they cannot make matters worse. Similarly, “opt-out” mechanisms only apply where the promoters of an ambitious new pro-equity reform want a way to overcome strong political resistance to that specific reform.
Another objection is that smarter zoning might be seen to perpetuate local zoning control. But existing local zoning processes are structured to block change and empower local veto players. By contrast, smarter zoning techniques are designed so that groups who wish to capture the economic benefits of upzoning can use direct democracy to bypass existing veto players, in a way that has proven successful in other fields. Where smarter zoning is imposed by state law, it can hardly be said to be entrenching local control. And in any case, existing state powers to override local zoning will remain, as will the potential for future federal action on zoning.
Not if designed correctly. As explained above, smarter zoning codes can and should include strong provisions to protect renters.
An initial draft of a model Smarter Zoning Code could likely be produced within three months. Testing with stakeholders should take no more than six months, meaning that a final code could be published by HUD within one year of the effort beginning.
- Officials wedded to traditional zoning processes may not wish to try innovative methods to improve equity, but smarter zoning proposals have been published by the American Planning Association and have little risk of harm.
- Resistance will arise from some residents of areas with exclusionary zoning. However, such resistance will be less than the resistance to universal upzoning mandates. And this resistance will be counterbalanced and often outweighed by the support of the many residents drawn by the economic benefits of upzoning for them and their families.
- Advocates of aggressive zoning reform may complain that smarter zoning is not sufficiently assertive. One response to this objection is that federal powers to impose such upzoning are highly constrained by political gridlock and partisanship. Smarter zoning is a politically feasible way to advance equitable zoning in the near term, while the campaign for broader national zoning reform continues in the long term.
The United States should establish a testbed for government-procured artificial intelligence (AI) models used to provide services to U.S. citizens. At present, the United States lacks a uniform method or infrastructure to ensure that AI systems are secure and robust. Creating a standardized testing and evaluation scheme for every type of model and all its use cases is an extremely challenging goal. Consequently, unanticipated ill effects of AI models deployed in real-world applications have proliferated, from radicalization on social media platforms to discrimination in the criminal justice system. Increased interest in integrating emerging technologies into U.S. government processes raises additional concerns about the robustness and security of AI systems.
Establishing a designated federal AI testbed is an important part of alleviating these concerns. Such a testbed will help AI researchers and developers better understand how to construct testing methods and ultimately build safer, more reliable AI models. Without this capacity, U.S. agencies risk perpetuating existing structural inequities as well as creating new government systems based on insecure AI systems — both outcomes that could harm millions of Americans while undermining the missions that federal agencies are entrusted to pursue.
Graduate students are more likely to persist in their academic decisions if engaged in positive mentoring experiences. Graduate students also cite positive mentoring experiences as the most important factor in completing a Science, Technology, Engineering, Mathematics, and Medicine (STEMM) degree. In the United States, though, these benefits are often undermined by a research ecosystem that ties mentorship and training of graduate students by Principal Investigators (PIs) to funding in the form of research assistantships. Such arrangements often lead to unreasonable work expectations, toxic work environments, and poor mentor-mentee relationships.
To improve research productivity, empower predoctoral researchers to achieve their career goals, and increase the intellectual freedom that young scientists need to pursue productively disruptive scholarship, we recommend that federal science funding agencies:
1. Establish traineeship grant programs at all federal science funding agencies.
2. Require every PI receiving a federal research grant to implement an Individual Development Plan (IDP) for each student funded by that grant.
3. Require every university receiving federal training grants to create a plan for how it will provide mentorship training to faculty, and to actively consider student mentorship as part of faculty promotion, reappointment, and tenure processes.
4. Direct and fund federal science agencies to build professional development networks and create other training opportunities to help more PIs learn best practices for mentorship.
The computational revolution enables and requires an ambitious reimagining of public high-school and community-college designs, curricula, and educator-training programs. In light of a much-changed — and much-changing — society, we as a nation must revisit basic assumptions about what constitutes a "good" education. That means reconsidering whether traditional school schedules still make sense, updating outdated curricula to emphasize in-demand skills (like computer programming), bringing current perspectives to old subjects (like computational biology), and piloting new pedagogies (like project-based approaches) better aligned to modern workplaces. To do this, the Federal Government should establish a system of National Laboratory Schools in parallel to its existing system of Federally Funded Research & Development Centers (FFRDCs).
The National Science Foundation (NSF) should lead this work, partnering with the Department of Education (ED) to create a Division for School Invention (DSI) within its Technology, Innovation, and Partnerships (TIP) Directorate. The DSI would act as a platform analogous to the Small Business Innovation Research (SBIR) program, catalyzing Laboratory Schools by providing funding and technical guidance to federal, state, and local entities pursuing educational or cluster-based workforce-development initiatives.
The new Laboratory Schools would take inspiration from successful, vertically integrated research and design institutes like Xerox PARC and the Mayo Clinic in how they organize research, as well as from educational systems like Governor's Schools and Early College High Schools in how they organize their governance. Each Laboratory School would serve a small, demographically and academically representative cohort while remaining financially sustainable on local per-capita education budgets.
Collectively, National Laboratory Schools would offer much-needed “public sandboxes” to develop and demonstrate novel school designs, curricula, and educator-training programs rethinking both what and how people learn in a computational future.
Challenge and Opportunity
Education is fundamental to individual liberty and national competitiveness. But the United States’ investment in advancing the state of the art is falling behind.
Innovation in educational practice has been incremental. Neither the standards-based nor charter-school movements departed significantly from traditional models. Accountability and outcomes-based incentives like No Child Left Behind suffer from the same issue.
The situation in research is not much better: NSF and ED’s combined spending on education research is barely twice the research and development budget of Nintendo. And most of that research focuses on refining traditional school models (e.g. presuming 50-minute classes and traditional course sequences).
Despite all these efforts, we are still seeing unprecedented declines in students’ math and reading scores.
Meanwhile, the computational revolution is widening the gap between what school teaches and the skills needed in a world where work is increasingly creative, collaborative, and computational. Computation’s role in culture, commerce, and national security is rapidly expanding; computational approaches are transforming disciplines from math and physics to history and art. School can’t keep up.
For years, research has told us individualized, competency- and project-based approaches can reverse academic declines while aligning with the demands of industry and academia for critical thinking, collaboration, and creative problem-solving skills. But schools lack the capacity to follow suit.
Clearly, we need a different approach to research and development in education: We need prototypes, not publications. While studies evaluating and improving existing schools and approaches have their place, there is a real need now for “living laboratories” that develop and demonstrate wholly transformative educational approaches.
Schools cannot do this on their own. Constitutionally and financially, education is federated to states and districts. No single public actor has the incentives, expertise, and resources to tackle ambitious research and design — much less to translate research into practice on a meaningful scale. Private actors like curriculum developers or educational technologists sell to public actors, meaning private-sector innovation is constrained by public school models. Graduate schools of education won't take the brand risk of running their own schools, and researchers won't pursue unfunded or unpublishable questions. We commend the Biden-Harris administration's Multi-Agency Research and Development Priorities for centering inclusive innovation and science, technology, engineering, and math (STEM) education in the nation's policy agenda. But reinventing school requires a new kind of research institution, one which actually operates a school, developing educators and new approaches firsthand.

Luckily, the United States largely invented the modern research institution. It is time we do so again. Much as our nation's leadership in science and technology was propelled by the establishment of land-grant universities in the late 19th century, we can trigger a new era of U.S. leadership in education by establishing a system of National Laboratory Schools. The Laboratory Schools will serve as vertically integrated "sandboxes" built atop fully functioning high schools and community colleges, reinventing how students learn and how we develop educators in a computational future.
Plan of Action
To catalyze a system of National Laboratory Schools, the NSF should establish a Division for School Invention (DSI) within its Technology, Innovation, and Partnerships (TIP) directorate. With an annually escalating investment over five years (starting at $25 million in FY22 and increasing to $400 million by FY26), the DSI could support development of 100 Laboratory Schools nationwide.
The DSI would support federal, state, and local entities — and their partners — in pursuing education or cluster-based workforce-development initiatives that (i) center computational capacities, (ii) emphasize economic inclusion or racial diversity, and (iii) could benefit from a high-school or community-college component.
DSI support would entail:
- Competitive matching grants modeled on SBIR grants. These grants would go towards launching Laboratory Schools and sustaining those that demonstrate success.
- Technical guidance to help Laboratory Schools (i) innovate while maintaining regulatory compliance, and (ii) develop financial models workable on local education budgets.
- Accreditation support, working with partner executives (e.g., Chairs of Boards of Higher Education) where appropriate, to help Laboratory Schools establish relationships with accreditors, explain their educational models, and document teacher and student work for evaluation purposes.
- Responsible-research support, including providing Laboratory Schools assistance with obtaining Federalwide Assurance (FWA) and access to partners' Institutional Review Boards (IRBs).
- Convening and storytelling, raising awareness of and interest in Laboratory Schools’ mission and operations.
Launching at least ten National Laboratory Schools by FY23 would involve three primary steps. First, the White House Office of Science and Technology Policy (OSTP) should convene an expert group comprised of (i) funders with a track record of attempting radical change in education and (ii) computational domain experts to design an evaluation process for the DSI’s competitive grants, secure industry and academic partners to help generate interest in the National Laboratory School System, and recruit the DSI’s first Director.
In parallel, Congress should issue one appropriations report asking NSF to establish a $25 million per year pilot Laboratory School program aligned with the Areas of Investment of the NSF Directorate for Technology, Innovation, and Partnerships (TIP)'s Regional Innovation Accelerators (RIAs). Congress should issue a second appropriations report asking the Office of Elementary and Secondary Education (OESE) to release a Dear Colleague letter encouraging states that have spent less than 75% of their Elementary and Secondary School Emergency Relief (ESSER) or American Rescue Plan funding to propose a Laboratory School.
Finally, the White House should work closely with the DSI's first Director to convene the Department of Defense Education Activity (DoDEA) and National Governors Association (NGA) to recruit partners for the National Laboratory Schools program. These partners would later be responsible for operational details like:
- Vetting or establishing an independent, state-level organization to receive federal funding and act as the primary liaison to the DSI.
- Giving the organization the matching funds needed to access DSI funding.
- Ensuring that the organization maintains a board that includes at least one community-college leader, two youth workers or high-school leaders, one representative from the state department of education, and two computation domain experts (one from industry and one from academia). Board size should not exceed eight members.
- Providing DSI with the necessary access and support to ensure that appropriate and sufficient data are collected for evaluation and learning purposes.
- Partnering with philanthropic actors to fund competitive grant programs that ultimately incentivize district and charter schools to adopt and adapt successful curricula and models developed by Laboratory Schools.
Focus will be key for this initiative. The DSI should exclusively support efforts that center:
- New public schools, not programs within (or reinventions of) existing schools.
- Radically different designs, not incremental evolutions.
- Computationally rich models that integrate computation and other modern skills into all subjects.
- Inclusive innovation focused on transforming outcomes for the poor and historically marginalized.
Imagine the pencil has just been invented, and we treated it the way we’ve treated computers in education. “Pencil class” and “pencil labs” would prepare people for a written future. We would debate the cost and benefit of one pencil per child. We would study how oral test performance changed when introducing one pencil per classroom, or after an after-school creative-writing program.
This all sounds absurd because the pencil and writing are integrated throughout our educational systems rather than being considered individually. The pencil transforms both what and how we learn, but only when embraced as a foundational piece of the educational experience.
Yet this siloed approach is precisely the approach our educational system takes to computers and the computational revolution. In some ways, this is no great surprise. The federated U.S. school system isn’t designed to support invention, and research incentives favor studying and suggesting incremental improvements to existing school systems rather than reimagining education from the ground up. If we as a nation want to lead on education in the same way that we lead on science and technology, we must create laboratories to support school experimentation in the same way that we establish laboratories to support experimentation across STEM fields. Certainly, the federal government shouldn’t run our schools. But just as the National Institutes of Health (NIH) support cutting-edge research that informs evolving healthcare practices, so too should the federal government support cutting-edge research that informs evolving educational practices. By establishing a National Laboratory School system, the federal government will take the risk and make the investments our communities can’t on their own to realize a vision of an equitable, computationally rich future for our schools and students.
Frequently Asked Questions
1. Why is the federal government the right entity to lead on a National Laboratory School system?
Transformative education research is slow (human development takes a long time, as does assessing how a given intervention changes outcomes), laborious (securing permissions to test an intervention in a real-world setting is often difficult), and resource-intensive (many ambitious ideas require running a redesigned school to explore properly). When other fields confront such obstacles, the public and philanthropic sectors step in to subsidize research (e.g., by funding large research facilities). But tangible education-research infrastructure does not exist in the United States.
Without R&D demonstrating new models (and solving the myriad problems of actual implementation), other public- and private-sector actors will continue to invest solely in supporting existing school models. No private sector actor will create a product for schools that don’t exist, no district has the bandwidth and resources to do it themselves, no state is incentivized to tackle the problem, and no philanthropic actor will fund an effort with a long, unclear path to adoption and prominence.
National Laboratory Schools are intended primarily as research, development, and demonstration efforts, meaning that they will be staffed largely by researchers and will pursue research agendas that go beyond the traditional responsibilities and expertise of local school districts. State and local actors are the right entities to design and operate these schools so that they reflect the particular priorities and strengths of local communities, and so that each school is well positioned to influence local practice. But funding and overseeing the National Laboratory School system as a whole is an appropriate role for the federal government.
2. Why is NSF the right agency to lead this work?
Over many years, NSF has developed substantial expertise in funding innovation through the SBIR/STTR programs, which award staged grants to support innovation and technology transfer. NSF also has experience researching education through its Directorate for Education and Human Resources (EHR). Finally, NSF’s new Directorate for Technology, Innovation, and Partnerships (TIP) has a mandate to “[create] education pathways for every American to pursue new, high-wage, good-quality jobs, supporting a diverse workforce of researchers, practitioners, and entrepreneurs.” NSF is the right agency to lead the National Laboratory Schools program because of its unique combination of experience, in-house expertise, mission relevance, and relationships with agencies, industry, and academia.
3. What role will OSTP play in establishing the National Laboratory School program? Why should they help lead the program instead of ED?
ED focuses on the concerns and priorities of existing schools. Ensuring that National Laboratory Schools emphasize invention and reimagining of educational models requires fresh strategic thinking and partnerships grounded in computational domain expertise.
OSTP has access to bodies like the President’s Council of Advisors on Science and Technology (PCAST) and the National Science and Technology Council (NSTC). Working with these bodies, OSTP can easily convene high-profile leaders in computation from industry and academia to publicize and support the National Laboratory Schools program. OSTP can also enlist domain experts to act as advisors evaluating and critiquing the depth of computational work developed in the Laboratory Schools. Finally, in the spirit of the White House Science Fair, OSTP could host an annual festival showcasing the design, practices, and outputs of various Laboratory Schools.
Though OSTP and NSF will have primary leadership responsibilities for the National Laboratory Schools program, we expect that ED will still be involved as a key partner on topics aligned with ED’s core competencies (e.g., regulatory compliance, traditional best practices, responsible research practices, etc.).
4. What makes the Department of Defense Education Activity (DoDEA) an especially good partner for this work?
The DoDEA is an especially good partner because it is the only federal agency that already operates schools. It reaches a student base that is large (more than 70,000 students, of whom more than 12,000 are high-school aged) as well as academically, socioeconomically, and demographically diverse. It is also more nimble than a traditional district, well positioned to appreciate the full ramifications of the computational revolution, and highly motivated to improve school quality and reduce turnover.
5. Why should the Division for School Invention (DSI) be situated within NSF’s TIP Directorate rather than EHR Directorate?
EHR has historically focused on the important work of researching (and to some extent, improving) existing schools. The DSI’s focus on invention, secondary/postsecondary education, and opportunities for alignment between cluster-based workforce-development strategies and Laboratory Schools’ computational emphasis makes the DSI a much better fit for TIP, which is not only focused on innovation and invention overall, but is also explicitly tasked with “[creating] education pathways for every American to pursue new, high-wage, good-quality jobs, supporting a diverse workforce of researchers, practitioners, and entrepreneurs.” Situating the DSI within TIP will not preclude the DSI from drawing on EHR’s considerable expertise when needed, especially for evaluating, contextualizing, and supporting the research agendas of Laboratory Schools.
6. Why shouldn’t existing public schools be eligible to serve as Laboratory Schools?
Most attempts at organizational change fail. Invention requires starting fresh. Allowing existing public schools or districts to launch Laboratory Schools will distract from the ongoing educational missions of those schools and is unlikely to lead to effective invention.
7. Who are some appropriate partners for the National Laboratory School program?
Possible partners include:
- A federal, state, or local agency that already sponsors a workforce-development initiative pursuing a cluster-based strategy: i.e., an initiative that might benefit from a Laboratory School as part of an attempt to address, for instance, talent-pipeline challenges.
- A federal, state, or local agency that already sponsors an education-innovation initiative: e.g., a state public university that may wish to establish a National Laboratory School as a foundation for a forward-looking graduate school of education.
- A state department of education seeking to lead by example to incubate local educational innovation.
- A local school district committed to educational transformation that is interested in supporting a National Laboratory School and intentionally transplanting its practices into district schools over time.
8. What should the profile of a team or organization starting a Laboratory School look like? Where and how will partners find these people?
At a minimum, the team should have experience working with youth, possess domain expertise in computation, be comfortable supporting both technical and expressive applications of computation, and have a clear vision for the practical operation of their proposed educational model across both the humanities and technical fields.
Ideally, the team should also have piloted versions of their proposed educational model in some form, such as through after-school programs or at a summer camp. Piloting novel educational models can be hard, so the DSI and/or its partners may want to consider providing tiered grants to support this kind of prototyping and develop a pipeline of candidates for running a Laboratory School.
To identify candidates to launch and operate a Laboratory School, the DSI and/or its partners can:
- Partner with philanthropists in education to tap into preexisting networks.
- Pursue a basic press and messaging strategy through op-eds in relevant publications.
- Host well-publicized competitions (akin to the XQ Super School competition) to encourage development of novel educational models.
- Partner with graduate schools of education and departments of computer science to identify candidates and advertise the opportunity.
1. What is computational thinking, and how is it different from programming or computer science?
A good way to answer this question is to consider writing as an analogy. Writing is a tool for thought that can be used to think critically, persuade, illustrate, and so on. Becoming a skilled writer starts with learning the alphabet and basic grammar, and can include craft elements like penmanship. But the practice of writing is distinct from the thinking one does with those skills. Similarly, programming is analogous to mechanical writing skills, while computer science is analogous to the broader field of linguistics. These are valuable skills, but are a very particular slice of what the computational revolution entails.
Both programming and computer science are distinct from computational thinking. Computational thinking refers to thinking with computers, rather than thinking about how to communicate problems and questions and models to computers. Examples in other fields include:
- In math: computational approaches to algebra can embrace parallels between variables in mathematics and variables in programming, shifting the focus of algebra classes from symbolic manipulation—which computers are good at—to conceptual understanding. Other subjects normally considered “advanced”—like discrete mathematics and linear algebra—are made accessible to a much broader audience by computational approaches. But current algebra curricula emphasize ideas and approaches appropriate to pencil, paper, and blackboard.
- In life sciences: computational approaches have transformed the practice of biomedical science and research with machine learning accelerating this trend, creating whole fields like bioinformatics and systems biology. But the content and form of biology class in high school remains largely unchanged.
- In economics and social sciences: increasing data availability is transforming these fields alongside increasingly sophisticated computational approaches for evaluating and making sense of these data—including approaches that center computational and mathematical constructs like graphs to represent social networks. And still, social studies, psychology, and U.S. history and government classes all look much as they did twenty years ago.
These transitions each involve programming, but are no more “about” computer science than a philosophy class is “about” writing. Programming is the tool, not the topic.
2. What are some examples of the research questions that National Laboratory Schools would investigate?
There are countless research agendas that could be pursued through this new infrastructure. Select examples include:
- Seymour Papert’s work on LOGO (captured in books like Mindstorms) presented a radically different vision for the potential and role of technology in learning. In Mindstorms, Papert sketches out that vision using geometry as an existence proof. Papert’s work demonstrates that research into making things more learnable differs from researching how to teach more effectively. Abelson and diSessa’s Turtle Geometry takes Papert’s work further, conceiving of ways that computational tools can be used to introduce differential geometry and topology to middle- and high-schoolers. The National Laboratory Schools could investigate how we might design integrated curricula combining geometry, physics, and mathematics by leveraging the fact that the vast majority of mathematical ideas tackled in secondary contexts appear in computational treatments of shape and motion.
- The Picturing to Learn program demonstrated remarkable results in helping staff to identify and students to articulate conceptions and misconceptions. The National Laboratory Schools could investigate how to take advantage of the explosion of interactive and dynamic media now available for visually thinking and animating mental models across disciplines.
- Bond graphs as a representation of physical dynamic systems were developed in the 1960s. These graphs enabled identification of “effort” and “flow” variables as general ways of describing power, which in turn allowed us to formalize analogies across electricity and magnetism, mechanics, fluid dynamics, and so on. Decades later, category theory has brought additional mathematical tools to bear on further formalizing these analogies. Given the role of analogy in learning, how could we reconceive people’s introduction to the natural sciences in cross-disciplinary language emphasizing these formal parallels?
- Understanding what it means for one thing to cause (or not cause) another, and how to establish empirically whether it does, is an urgent and omnipresent need. Whether the question is COVID vaccine reliability, claims of election fraud, or the replication crisis in medicine and social science, our world is full of increasingly opaque systems and phenomena that our media environment is decreasingly equipped to tackle for and with us. An important tool in this work is the ability to reason about and evaluate empirical research effectively, which in turn depends on fundamental ideas about causality and about how to evaluate the strength and likelihood of various claims. Graphical methods in statistics offer a new tool complementing traditional, easily misused ideas like p-values, which dominate current introductions to statistics without leaving youth better positioned to meaningfully evaluate and understand statistical inference.
The specifics of these are less important than the fact that there are many, many such agendas that go largely unexplored because we lack the tangible infrastructure to set ambitious, computationally sophisticated educational research agendas.
3. How will the National Laboratory Schools differ from magnet schools for those interested in computer science?
The premise of the National Laboratory Schools is that computation, like writing, can transform many subjects. These schools won’t place disproportionate emphasis on the field of computer science, but rather will emphasize integration of computational thinking into all disciplines—and educational practice as a whole. Moreover, magnet schools often use selective enrollment in their admissions. National Laboratory Schools are public schools interested in the core issues of the median public school, and therefore it is important they tackle the full range of challenges and opportunities that public schools face. This involves enrolling a socioeconomically, demographically, and academically diverse group of youth.
4. How will the National Laboratory Schools differ from the Institute of Education Sciences’ Regional Educational Laboratories?
The Institute of Education Sciences’ (IES’s) Regional Educational Laboratories (RELs) do not operate schools. Instead, they convene and partner with local policymakers to lead applied research and development, often focused on actionable best practices for today’s schools (as exemplified by the What Works Clearinghouse). This is a valuable service for educators and policymakers. However, this service is by definition limited to existing school models and assumptions about education. It does not attempt to pioneer new school models or curricula.
5. How will the National Laboratory Schools program differ from tech-focused workforce-development initiatives, coding bootcamps, and similar programs?
These types of programs focus on the training and placement of software engineers, data scientists, user-experience designers, and similar tech professionals. But just as computational thinking is broader than just programming, the National Laboratory Schools program is broader than vocational training (important as that may be). The National Laboratory Schools program is about rethinking school in light of the computational revolution’s effect on all subjects, as well as its effects on how school could or should operate. An increased sensitivity to vocational opportunities in software is only a small piece of that.
6. Can computation really change classes other than math and science?
Yes. The easiest way to prove this is to consider how professional practice of non-STEM fields has been transformed by computation. In economics, the role of data has become increasingly prominent in both research and decision making. Data-driven approaches have similarly transformed social science, while also expanding the field’s remit to include specifically online, computational phenomena (like social networks). Politics is increasingly dominated by technological questions, such as hacking and election interference. 3D modeling, animation, computational art, and electronic music are just a few examples of the computational revolution in the arts. In English and language arts, multimedia forms of narrative and commentary (e.g., podcasts, audiobooks, YouTube channels, social media, etc.) are augmenting traditional books, essays, and poems.
7. Why and how should National Laboratory Schools commit to financial and legal parity with public schools?
The challenges facing public schools are not purely pedagogical. Public schools face challenges in serving diverse populations in resource-constrained and highly regulated environments. Solutions and innovation in education need to be prototyped in realistic model systems. Hence the National Laboratory Schools must commit to financial and legal parity with public schools. At a minimum, this should include a commitment to (i) a per-capita student cost that is no more than twice the average of the relevant catchment area for a given National Laboratory School (the 2x buffer is provided to accommodate the inevitably higher cost of prototyping educational practices at a small scale), and (ii) enrollment that is demographically and academically representative (including special-education and English Language Learner participation) of a similarly aged population within thirty minutes’ commute, and that is enrolled through a weighted lottery or similarly non-selective admissions process.
8. Why are Xerox PARC and the Mayo Clinic good models for this initiative?
Both Xerox PARC and the Mayo Clinic are prototypical examples of hyper-creative, highly functioning research and development laboratories. Key to their success in inventing the future was living it themselves.
PARC researchers insisted on not only building but using their creations as their main computing systems. In doing so, they were able to invent everything from ethernet and the laser printer to the whole paradigm of personal computing (including peripherals like the modern mouse and features like windowed applications that we take for granted today).
The Mayo Clinic runs an actual hospital. This allows the clinic to innovate freely in everything from management to medicine. As a result, the clinic created the first multi-specialty group practice and integrated medical record system, invented the oxygen mask and G-suit, discovered cortisone, and performed the first hip replacement.
One characteristic these two institutions share is that they are focused on applied design research rather than basic science. PARC combined basic innovations in microelectronics and user interface to realize a vision of personal computing. Mayo rethinks how to organize and capitalize on medical expertise to invent new workflows, devices, and more.
These kinds of living laboratories are informed by what happens outside their walls but are focused on inventing new things within. National Laboratory Schools should similarly strive to demonstrate the future in real-world operation.
1. Don’t laboratory schools already exist? Like at the University of Chicago?
Yes. But there are very few of them, and almost all of those that do exist suffer from one or more issues relative to the vision proposed herein for National Laboratory Schools. First, most existing laboratory schools are not public. In fact, most university-affiliated laboratory schools have, over time, evolved to mainly serve faculty’s children. This means that their enrollment is not socioeconomically, demographically, or academically representative. It also means that families’ risk aversion may constrain those schools’ capacity to truly innovate. Most laboratory schools not affiliated with a university use their “laboratory” status as a brand differentiator in the progressive independent-school sector.
Second, the research functions of many laboratory schools have been hollowed out given the absence of robust funding. These schools may engage in shallow renditions of participatory action research by faculty in lieu of meaningful, ambitious research efforts.
Third, most educational-design questions investigated by laboratory schools are investigated at the classroom or curriculum (rather than school design) level. This creates tension between those seeking to test innovative practices (e.g., a lesson plan that involves an extended project) and the constraints of traditional classrooms.
Finally, insofar as bona fide research does happen, it is constrained by what is funded, publishable, and tenurable within traditional graduate schools of education. Hence most research reflects the concerns of existing schools instead of seeking to reimagine school design and educational practice.
2. Why will National Laboratory Schools succeed where past efforts at educational reform (e.g., charter schools) have failed?
Most past educational-reform initiatives have focused on either supporting and improving existing schools (e.g., through improved curricula for standard classes), or on subsidizing and supporting new schools (e.g., charter schools) that represent only minor departures from traditional models.
The National Laboratory Schools program will provide a new research, design, and development infrastructure for inventing new school models, curricula, and educator training. These schools will have resources, in-house expertise, and research priorities that traditional public schools—whether district or charter or pilot—do not (and should not) have. If the National Laboratory Schools are successful, their output will help inform educational practice across the U.S. school ecosystem.
3. Don’t charter schools and pilot schools already support experimentation? Wasn’t that the original idea for charter and pilot schools—that they’d be a laboratory to funnel innovation back into public schools?
Yes, but this transfer hasn’t happened for at least two reasons. First, the vast majority of charter and pilot schools are not pursuing fundamentally new models because doing so is too costly and risky. Charter schools can often perform more effectively than traditional public schools, but this is just as often because of problematic selection bias in enrollment as it is because the autonomy they’re given allows for more effective leadership and organizational management. Second, the politics around charter and pilot schools have become increasingly toxic in many places, which prevents new ideas from being considered by public schools or advocated for effectively by public leaders.
4. Why do we need invention at the school rather than at the classroom level? Wouldn’t it be better to figure out how to improve schools that exist rather than end up with some unworkable model that most districts can’t adopt?
The solutions we need might not exist at the classroom level. We invest a great deal of time, money, and effort into improving existing schools. But we underinvest in inventing fundamentally different schools. There are many design choices which we need to explore which cannot be adequately developed through marginal improvements to existing models. One example is project-based learning, wherein students undertake significant, often multidisciplinary projects to develop their skills. Project-based learning at any serious level requires significant blocks of time that don’t fit in traditional school schedules and calendars. A second example is the role of computational thinking, as centered in this proposal. Meaningfully incorporating computational approaches into a school design requires new pedagogies, developing novel tools and curricula, and re-training staff. Vanishingly few organizations do this kind of work as a result.
If and when National Laboratory Schools develop substantially innovative models that demonstrate significant value, there will surely need to be a translation process to enable districts to adopt these innovations, much as translational medicine brings biomedical innovations from the lab to the hospital. That process will likely need to involve helping districts start and grow new schools gradually, rather than district-wide overhauls.
5. What kinds of “traditional assumptions” need to be revisited at the school level?
The basic model of school assumes subject-based classes with traditionally licensed teachers lecturing in each class for 40–90 minutes a day. Students do homework, take quizzes and tests, and occasionally do labs or projects. The courses taught are largely fixed, with some flexibility around the edges (e.g., through electives and during students’ junior and senior high-school years).
Traditional school represents a compromise among curriculum developers, standardized-testing outfits, teacher-licensure programs, regulations, local stakeholder politics, and teachers’ unions. Attempts to change traditional schools almost always fail because of pressures from one or more of these groups. The only way to achieve meaningful educational reform is to demonstrate success in a school environment rethought from the ground up. Consider a typical course sequence of Algebra I, Geometry, Algebra II, and Calculus. There are both pedagogical and vocational reasons to rethink this sequence and instead center types of mathematics that are more useful in computational contexts (like discrete mathematics and linear algebra). But a typical school will not be able to simultaneously develop the new tools, materials, and teachers needed to do so.
6. Has anything like the National Laboratory School program been tried before?
No. There have been various attempts to promote research in education without starting new schools. There have been interesting attempts by states to start new schools (like Governor’s Schools), there have been some ambitious charter schools, and there have been attempts to create STEM-focused and computationally focused magnet schools. But there has never been a concerted attempt in the United States to establish a new kind of research infrastructure built atop the foundation of functioning schools as educational “sandboxes”.
1. How will we pay for all this? What existing funding streams will support this work? Where will the rest of the money for this program come from?
For budgeting purposes, assume that each Laboratory School enrolls a small group of forty high school or community college students full-time at an average rate of $40,000 per student per year. Half of that budget will support the functioning of the schools themselves. The remaining half will support a small research and development team responsible for curating and developing the computational tools, materials, and curricula needed to support the School’s educators. This would put the direct-service budget of the school solidly at the 80th percentile of current per capita spending on K–12 education in the United States. With these assumptions, running 100 National Laboratory Schools would cost ~$160 million per year. Investing $25 million per year would be sufficient to establish an initial 15 sites. This initial federal funding should be awarded through a 1:1 matching competitive-grant program funded by (i) the 10% of American Competitiveness and Workforce Improvement Act (ACWIA) fees associated with H-1B visas (which the NSF is statutorily required to devote to public-private partnerships advancing STEM education), and (ii) the NSF TIP Directorate’s budget, alongside budgets from partner agency programs (for instance, the Department of Education’s Education Innovation and Research and Investing in Innovation programs). For many states, these funds should also be layered atop their existing Elementary and Secondary School Emergency Relief (ESSER) and American Rescue Plan (ARP) awards.
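The budget figures above follow from simple arithmetic; a quick sketch using the proposal's own stated assumptions (not official cost estimates) checks the numbers:

```python
# Back-of-envelope check of the National Laboratory Schools budget.
# All inputs are the proposal's stated assumptions, not official figures.

students_per_school = 40
cost_per_student = 40_000  # $/student/year; half direct service, half R&D

annual_cost_per_school = students_per_school * cost_per_student
print(annual_cost_per_school)   # $1.6 million per school per year

full_program_cost = 100 * annual_cost_per_school
print(full_program_cost)        # ~$160 million/year for 100 schools

pilot_budget = 25_000_000       # proposed initial federal investment per year
pilot_sites = pilot_budget // annual_cost_per_school
print(pilot_sites)              # enough for an initial ~15 sites
```

With a 1:1 philanthropic or partner match, the $25 million federal share would support roughly twice as much pilot activity per federal dollar.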
2. Why is vertical integration important? Do we really need to run schools to figure things out?
Vertical integration (of research, design, and operation of a school) is essential because schools and teacher education programs cannot be redesigned incrementally. Even when compelling curricular alternatives have been developed under the auspices of an organization like the NSF, practical challenges in bringing those innovations to practice have proven insurmountable. In healthcare, the entire field of translational medicine exists to help translate research into practice. Education has no equivalent.
The vertically integrated National Laboratory School system will address this gap by allowing experimenters to control all relevant aspects of the learning environment: curricula, staffing, schedules, evaluation mechanisms, and so on. This means the Laboratory Schools can demonstrate a fundamentally different approach, learning from great research labs like Xerox PARC and the Mayo Clinic, much of whose success depended on tightly knit, cross-disciplinary teams working closely together in an integrated environment.
3. What would the responsibilities of a participating agency look like in a typical National Laboratory School partnership?
A participating agency will have some sort of educational or workforce-development initiative that would benefit from the addition of a National Laboratory School as a component. This agency would minimally be responsible for:
- Vetting or establishing an independent, state-level organization to receive federal funding and act as the primary liaison to the DSI.
- Giving the organization the matching funds needed to access DSI funding.
- Ensuring that the organization maintains a board that includes at least one community-college leader, two youth workers or high-school leaders, one representative from the state department of education, and two computation domain experts (one from industry and one from academia). Board size should typically not exceed eight members.
- Providing DSI with the necessary access and support to ensure that appropriate and sufficient data are collected for evaluation and learning purposes.
- Partnering with philanthropic actors to fund competitive grant programs that ultimately incentivize district and charter schools to adopt and adapt successful curricula and models developed by Laboratory Schools.
4. How should success for individual Laboratory Schools be defined?
Working with the Institute of Education Sciences’ (IES’s) National Center for Education Research (NCER), the DSI should develop frameworks for collecting the qualitative and quantitative data necessary to document, understand, and evaluate the design of any given Laboratory School. Evaluation would include evaluation of compliance with financial and legal parity requirements as well as evaluation of student growth and work products.
Evaluation processes should include:
- Comprehensive process and product documentation of teachers’ and students’ work.
- Mappings of that work back onto traditional academic standards (e.g., Common Core Mathematics and English Language Arts standards).
- Financial audits of a School’s operation to evaluate resource-allocation decisions.
- Enrollment and retention audits that document the socioeconomic, demographic, and academic composition of School cohorts.
- An annual report produced by the School describing its model and the primary questions and challenges confronted that year.
- A common (i.e., across the National Laboratory School system) series of diagnostic/formative assessments to evaluate student numeracy and literacy. These assessments should be designed and administered by the DSI and its partners. Assessments should not be overly burdensome or time-consuming for School staff or students, so that the assessment process does not detract from the innovation and experimentation missions of the National Laboratory School system.
Success should be judged by a panel of experts that includes domain experts, youth workers and/or school leaders, and DSI leadership. Dimensions of performance these panels address should minimally include depth and quality of students’ work, degree of traditional academic coverage, ambition and coherence of the research agenda (and progress against that agenda), retention of an equitably composed student cohort, and growth (not absolute performance) on the diagnostic/formative assessments.
In designing evaluation mechanisms, it will be essential to learn from failed accountability systems in public schools. Specifically, it will be essential to avoid pushing National Laboratory Schools to optimize for the particular metrics and measurements used in the evaluation process. This means that the evaluation process should be based largely on holistic evaluations made by expert panels rather than fixed rubrics or similarly inflexible mechanisms. Evaluation timescales should also be selected appropriately: e.g., performance on diagnostic/formative assessments should be measured by examining trends over several years rather than year-to-year changes.
5. What makes the Small Business Innovation Research (SBIR) program a good model for the National Laboratory School program?
The SBIR program is a competitive, multiphase grant program to which small businesses submit proposals. SBIR awards smaller grants (~$150,000) to businesses at early stages of development, and makes larger grants (~$1 million) available to awardees who achieve certain progress milestones. SBIR and similar federal tiered-grant programs (e.g., the Small Business Technology Transfer, or STTR, program) have proven remarkably productive and cost-effective, with many studies finding them as efficient as or more efficient than private-sector funding on a per-dollar basis, as measured by common innovation metrics like numbers of patents, papers, and so on.
The SBIR program is a good model for the National Laboratory School program because it is an example of the federal government promoting innovation by patching a hole in the funding landscape. Traditional financing options for businesses are often limited to debt or equity, and providers of debt to small businesses (like retail banks) are rarely able or incentivized to subsidize research and development. Venture capitalists typically only subsidize research and development for businesses and technologies with reasonable expectations of delivering 10x or greater returns. SBIR provides funding for the innumerable businesses that need research and development support in order to become viable, but aren’t likely to deliver venture-scale returns.
In education, the funding landscape for research and development is even worse. There are virtually no sources of capital that support people to start schools, in part because the political climate around new schools can be so fraught. The funding that does exist for this purpose tends to demand school launch within 12–18 months: a timescale on which it is not feasible to design, evaluate, and refine an entirely new school model. Education is a slow, expensive public good: one that the federal government shouldn’t provision, but should certainly subsidize. That includes subsidizing the research and development needed to make education better.
States and local school districts lack the resources and incentives to fund such deep educational research. That is why the federal government should step in. By running a tiered educational research-grant program, the federal government will establish a clear pathway for prototyping and launching ambitious and innovative schools.
6. What protections will be in place for students enrolled in Laboratory Schools?
The state organizations established or selected to oversee Laboratory Schools will be responsible for approving proposed educational practices. That said, unlike in STEM fields, there is no “lab bench” for educational research: the only way we can advance the field as a whole is by carefully prototyping informed innovations with real students in real classrooms.
7. Considering the challenges and relatively low uptake of educational practices documented in the What Works Clearinghouse, how do we know that practices proven in National Laboratory Schools will become widely adopted?
National Laboratory Schools will yield at least three kinds of outputs, each of which is associated with different opportunities and challenges with respect to widespread adoption.
The first output is people. Faculty trained at National Laboratory Schools (and at possible educator-development programs run within the Schools) will be well positioned to take the practices and perspectives of National Laboratory Schools elsewhere (e.g., as school founders or department heads). The DSI should consider establishing programs to incentivize and support alumni personnel of National Laboratory Schools in disseminating their knowledge broadly, especially by founding schools.
The second output is tools and materials. New educational models that are responsive to the computational revolution will inevitably require new tools and materials to implement in practice—including subject-specific curricula, cross-disciplinary software tools for analysis and visualization, and organizational and administrative tools. Many of these will likely be adaptations and extensions of existing tools and materials to the needs of education.
The final output is new educational practices and models. This will be the hardest, but probably most important, output to disseminate broadly. The history of education reform is littered with failed attempts to scale or replicate new educational models. An educational model is best understood as the operating habits of a highly functioning school. Institutionalizing those habits is largely about developing the skills and culture of a school’s staff (especially its leadership). This is best tackled not as a problem of organizational transformation (e.g., attempting to retrofit existing schools), but rather one of organizational creation—that is, it is better to use models as inspirations to emulate as new schools (and new programs within schools) are planned. Over time, such new and inspired schools and programs will supplant older models.
8. How could the National Laboratory School program fail?
Examples of potential pitfalls that the DSI must strive to avoid include:
- Spreading resources too thinly. It is popular to claim to support innovation. Hence there are strong incentives for stakeholders to launch innovation initiatives but then hollow them out (in budgetary or other terms) by spreading resources too thinly across many different efforts.
- Premature scaling. It will be tempting for stakeholders to prioritize rapidly scaling efforts, proposing, for instance, to incubate a Laboratory School for only one to three years before “scaling up” the school to hundreds or thousands of students. But rapidly scaling up high-touch educational models (i.e., those involving significant teacher-student interaction and relationships) has never worked. Instead of emphasizing scale-up, the National Laboratory School program should emphasize building human capital (e.g., using schools to train teachers and leaders), developing tools and materials (e.g., curricula), and creating long-term partnerships to incubate and translate Laboratory Schools’ insights into practice.
- Incrementalism. It is risky to try to invent something new. It is much safer and more comfortable to merely improve or support things that exist. The National Laboratory School program must resist the temptation to develop “Band-Aid” solutions to fundamental problems with existing school models instead of inventing new models that might obviate those issues entirely.
- Impatience. Similarly, research and invention are fundamentally unpredictable, creative processes that often take many years of investment to bear fruit. If we knew what we needed to do in education, we wouldn’t need to invest in research in the first place. The funding landscape for research and reform in education is heavily weighted toward efforts carried out on a one- to five-year timeframe. The National Laboratory School program must remain committed to ambitious, longer-term initiatives.
- Neglecting the importance of team. When undertaking risky work, it can be tempting to support only all-star teams of established experts thought to have the greatest chance of success. But while expertise matters when it comes to developing entirely new school models, it is equally if not more important to actively bring in fresh voices and fresh perspectives. Furthermore, National Laboratory Schools are not theoretical, but practical. Their success will depend on committed, creative teams working closely together to create an excellent, operating learning environment. Building teams characterized by drive, creativity, and persistence will play an outsized role in any given Laboratory School’s likelihood of success.
- Neglecting the importance of domain expertise. In education, there is a tendency to treat teaching and learning as skills and activities that happen in a general, domain-agnostic way. In reality, one cannot think about thinking and learning without thinking about thinking and learning something. Innovations in tools, materials, human capital, and pedagogy almost certainly need to be domain-specific. Given the focus of the National Laboratory School program on the computational revolution and on integrating new computational approaches into traditional disciplines, it will be important to ensure that every level of the National Laboratory School program includes computational domain experts as well as experts in education and educational theory.