Not Accessible: Federal Policies Unnecessarily Complicate Funding to Support Differently Abled Researchers. We Can Change That.
Persons with disabilities (PWDs) are considered the largest minority in the nation and in the world. There are existing policies and procedures from agencies, directorates, or funding programs that provide support for Accessibility and Accommodations (A&A) in federally funded research efforts. Unfortunately, these policies and procedures all have different requirements, processes, deadlines, and restrictions. This lack of standardization can make it difficult to acquire the necessary support for PWDs by placing the onus on them or their Principal Investigators (PIs) to navigate complex and unique application processes for the same types of support.
This memo proposes the development of a standardized, streamlined, rolling, post-award support mechanism to provide access and accommodations for PWDs as they conduct research and disseminate their work through conferences and convenings. Ideally, a PI or their institution could simply submit the identifying information for an award that has been made and then make a direct request for the support a given PWD needs to work on the project. For a multi-year award, such a request should be possible at any time within the award period.
This could be implemented through a single, streamlined policy adopted by all agencies, with the process handled internally by each. Alternatively, a new cross-agency process under the Office of Science and Technology Policy (OSTP) or the Office of Management and Budget (OMB) could handle requests for accessibility and accommodations at federally funded research sites and federally funded convenings. A third option would be a new section in the uniform guidance for federal funding agencies, also known as 2 CFR 200.
This memo focuses on Federal Open Science funding programs to illustrate the challenges in getting A&A funding requests supported. The authors have taken an informal look at agencies outside of science and technology funding. We found similar challenges across federal grantmaking in the Arts and Humanities, Social Services, and Foreign Relations and Aid entities. Similar issues likely exist in private philanthropy as well.
Challenge and Opportunity
Deaf/hard-of-hearing (DHH), Blind/low-vision (BLV), and other differently abled academicians, senior personnel, students, and post-doctoral fellows engaged in federally funded research face challenges in acquiring accommodations for accessibility. These include, but are not limited to:
- Human-provided ASL-English interpreting and interview transcription services for DHH and non-DHH participants. While some applications of artificial intelligence (AI) show promise for transcription, AI-based ASL interpretation remains far behind human interpreters.
- Visual and Pro-tactile interpreting/descriptive services for the BLV participants
- Adaptive lab equipment and computing peripherals
- Accessibility support or remediation for physical sites
Having these services available is crucial for promoting an inclusive research environment on a larger scale.
Moving to a common, post-award process:
- Allows the PI and the reviewers more time and space to focus on the core research efforts being described in the initial proposal
- Removes the risk of a proposal being taken out of consideration because its costs are higher than those of similar proposals in the pool
- Creates a standard, replicable pathway for seeking accommodations once the overall proposal has been funded. This is especially true if the support comes from a single process across all federal funding programs rather than within each agency.
- Allows for flexibility in accommodations. Needs vary from person to person and case to case. For example, in the case of workplace accommodations for DHH team members, one full-time researcher may request full-time ASL interpretation on-site, while another might prefer to work primarily through digital text channels, requiring ASL interpretation only for staff meetings and other group activities.
- Potentially reduces the federal financial and human resources currently expended in supporting such requests by eliminating duplication of effort across agencies or, at minimum, streamlining processes within agencies.
Such a process might follow the steps below. The example uses the National Science Foundation (NSF), but the same or a similar process could be adopted by any agency:
- PI receives notification of a grant award from NSF. PI identifies the need for A&A services at the start of, or at any time during, the grant period
- PI (or SRS staff) submits a request for A&A funding support to NSF. The request includes the NSF program name and award number, the specifics of the requested A&A support, a budget justification, and three vendor quotes (if needed)
- Use of funds is authorized, and funding is released to the PI’s institution; acquisition follows the institution’s standard purchasing or contracting procedures
- PI submits receipts or paid vendor invoices to the funding body
- PI cites and documents the use of funds in the annual report, or equivalent, to NSF
Current Policies and Practices
Pre-Award Funding
Principal Investigators (PIs) who request A&A support for themselves or for other members of the research team are sometimes required to apply for it in their initial grant proposals. This approach has several flaws.
First and foremost, this funding process reduces the direct application of research dollars for these PIs and their teams compared to other researchers in the same program. Simply put, if two applicants are applying for a $100,000 grant, and one needs to fund $10,000 worth of accommodations, services, and equipment out of the award, they have $10,000 less to pursue the proposed research activities. This essentially creates a “10% A&A tax” on the overall research funding request.
Lived Experience Example
In a real-world example, the author and his colleague, the late Dr. Mel Chua, were awarded a $60,000, one-year grant to conduct a qualitative research case study as part of the Ford Foundation Critical Digital Infrastructure Research cohort. As Dr. Chua was Deaf, the PIs pointed out to Ford that $10,000 worth of support services would be needed to cover costs for
- American Sign Language (ASL) interpreters during the qualitative interviews and advisory committee meetings
- Transcription of the interviews
- ASL Interpreting for conference dissemination and collection of comments at formal and informal meetings during those conferences
We communicated the fact that spending general research award money on those services would reduce the research work the funds were awarded to support. The Ford Foundation understood and provided an additional $10,000 as post-award funding to cover those services. Ford did not inform the PIs as to whether that support came from another directed set of funds for A&A support or from discretionary dollars within the foundation.
Second, pre-award funding can limit a project’s ability to work with or hire PWDs as co-PIs, students, or other personnel if they weren’t already part of the original grant proposal. For example, suppose a research project is initially awarded four years of funding without A&A support, and a promising team member who is a PWD and would require that support joins in year three. In this case, the PIs must:
- Reallocate research dollars meant for other uses within the grant to support A&A;
- Find other funding to support those needs within their institution;
- Navigate the varied post-award support landscape, sometimes going so far as to write an entirely new full proposal with a significant review timeline, to try to get support (if this happens off cycle, the funding might not arrive until the last few months of the fourth year); or
- Not hire the person in question because they can’t provide the needed A&A.
Post-Award Funding
Some agencies have programs for post-award supplemental funding that address the challenges described above. While well-intentioned, many are complicated and have differing timelines, requirements, and restrictions. In some cases, a single supplemental funding source may address all aspects of diversity, equity, and inclusion as well as A&A, even though the needs and costs of the former are significantly different from those of A&A. Some post-award pools draw from the same agency’s program-wide annual allocation; if those funds have been largely expended on the solicitation’s initial awards, little or no money may be left to support post-award funding for needed accommodations. The table below briefly illustrates the range of variability across a subset of representative supplemental funding programs. There are links in the top row of the table to access the complete program information. Beyond the programs in this table, more extensive lists of NSF and NIH offerings are provided by those agencies. One example is the NSF Dear Colleague Letter Persons with Disabilities – STEM Engagement and Access.
Ideally these policies and procedures, and others like them, would be replaced by a common, post-award process. PIs or their institutions would simply submit the identifying information on the grant that had been awarded and the needs for Accommodations and Accessibility to support team members with disabilities at any time during the grant period.
Plan of Action
OSTP, possibly through a National Science and Technology Council interagency working group process, should conduct an internal review of the A&A policies and procedures for grant programs from federal scientific research agencies. This could be led by OSTP directly or, under its auspices, by either NSF or the National Institutes of Health (NIH). Participants would be relevant personnel from DOE, DOD, NASA, USDA, EPA, NOAA, NIST, and HHS, at minimum. The goal should be to draft a single, streamlined, post-award policy and process for all federal grant programs, or a new section in the uniform guidance for federal funding agencies.
There should be an analysis of the number, size, and share of awards currently being made to support A&A in research funding grant programs. It’s not clear how the various funding ranges and caps listed in the table above were determined, or whether they meet the needs. One goal of this analysis would be to determine how well current needs within and across agencies are being met and what future needs might be.
A second goal would be to look at the level of duplication of effort and scope of manpower savings that might be attained by moving to a single, streamlined policy. This might be a coordinated process between OMB and OSTP or a separate one done by OMB. No matter how it is coordinated, an understanding of these issues should inform whatever new policies or new additions to 2 CFR 200 would emerge.
A third goal of this evaluation could be to consider if the support for A&A post-award funding might best be served by a single entity across all federal grants, consolidating the personnel expertise and policy and process recommendations in one place. It would be a significant change, and could require an act of Congress to achieve, but from the point of view of the authors it might be the most efficient way to serve grantees who are PWDs.
Once the initial reviews described above, or a similar process, are completed, the next step should be a convening of stakeholders outside the federal government to provide input on the streamlined draft policy. These stakeholder entities could include, but should not be limited to, the National Association of the Deaf, the American Foundation for the Blind, the American Association of People with Disabilities, and the American Diabetes Association. One goal of that convening should be a discussion, and decision, as to whether a period of public comment should also be put in place before the new policy is adopted.
Conclusion
The above plan of action should be pursued so that more PWDs will be able to participate in federally funded research, or have their participation improved. A policy like the one described above lays the groundwork and provides a more level playing field for Open Science to become more accessible and accommodating. It also opens the door to streamlined processes, reduced duplication of effort, and greater efficiency within the engine of federal science support.
Acknowledgments
The roots of this effort began when the authors, Dr. Mel Chua and Stephen Jacobs, received funding for their research as part of the first Critical Digital Infrastructure research cohort and were able to negotiate for accessibility support services outside their award. Those who provided input on the position paper this memo is based on are:
- Dr. Mel Chua, Independent Researcher
- Dr. Liz Hare, Quantitative Geneticist, Dog Genetics LLC
- Dr. Christopher Kurz, Professor and Director of Mathematics and Science Language and Learning Lab, National Technical Institute for the Deaf
- Luticha Andre-Doucette, Catalyst Consulting
This action-ready policy memo is part of Day One 2025 — our effort to bring forward bold policy ideas, grounded in science and evidence, that can tackle the country’s biggest challenges and bring us closer to the prosperous, equitable and safe future that we all hope for whoever takes office in 2025 and beyond.
Based on the percentage of PWDs in the general population, conference funders should assume that some of their presenters or attendees will need accommodations. Funding from federal agencies should be made available to provide an initial minimum level of support for necessary A&A. Event organizers should be able to apply for additional support above the minimum level if needed, provided participant requests are made within a stated time before the event; for example, a stipulated deadline of six weeks before the event to request supplemental accommodation, so that the organizers can acquire what’s needed within thirty days of the event.
Yes, in several ways. In general, most of the support needed for conferences involves service provision rather than hardware/software procurement. However, understanding the breadth and depth of issues surrounding human services support is complex and outside the experience of most PIs running a conference in their own scientific discipline.
Consider again the example of DHH researchers attending a conference. A conference might default to providing a single team of two interpreters during the conference sessions, as two interpreters per hour is the standard. Should a group of DHH researchers attend the conference and wish to go to different sessions or meetings during the same convening, the organizers may not have provided enough interpreters to support those opportunities.
By providing interpretation for formal sessions only, organizers exclude DHH attendees from a key piece of these events: conversations outside of scheduled sessions. These include both formally planned gatherings and spontaneous ones, and they might occur before, during, or after official sessions, during a meal offsite, and so on. Ideally, interpreters would be provided for these as well.
These issues, and others related to other groups of PWDs, are beyond the experience of most PIs who have received event funding.
Some federal agencies have produced guides addressing interpreting and other concerns, such as the “Guide to Developing a Language Access Plan” from the Centers for Medicare & Medicaid Services (CMS). These are often written to meet the needs of full-time employees on site in office settings; they cover various cases a conference convener does not need and may not address their specific use case. What the average conference chair and their logistics committee may need is a simply stated set of guidelines addressing the short-term needs of their event. Additionally, a directory of providers with the appropriate skill sets and domain knowledge to meet the needs of PWDs attending events would be an incredible aid to all concerned.
The policy review process outlined above should include research to determine a base level of A&A support for conferences. The reviewers might recommend a preferred federal guide to these resources or identify an existing one.
Using Title I to Unlock Equity-Focused Innovation for Students
Congress should approve a new allowable use of Title I spending that specifically enables and encourages school districts to use funds for activities that support and drive equity-focused innovation. The persistent equity gap between wealthy and poor students in our country, and the continuing challenges caused by the pandemic, demand new, more effective strategies to help the students who are most underserved by our public education system.
Efforts around the distribution of all education funding, and Title I in particular, have focused on ensuring that funds flow to students and districts with the highest need. Given the persistence of achievement and opportunity gaps across race, class, and socioeconomic status, there is still work to be done on this front. Meanwhile, rapidly developing technologies such as artificial intelligence and immersive technologies are opening up new possibilities for students and teachers. However, these solutions are not enough. Realizing the full potential of funding streams and emerging technologies to transform student outcomes requires new solutions designed alongside the communities they are intended to serve.
To finally close the equity gap, districts must invest in developing, evaluating, and implementing new solutions to meet the needs of students and families today and in a rapidly changing future. Using Title I funding to create a continuous, improvement-oriented research and development (R&D) infrastructure supporting innovations at scale will generate the systemic changes needed to reach the students in highest need of new, creative, and more effective solutions to support their learning.
Challenge and Opportunity
Billions of dollars of federal funding have been distributed to school districts since the authorization of Title I federal funding under the Elementary and Secondary Education Act (ESEA), introduced in 1965 (later reauthorized under the Every Student Succeeds Act [ESSA]). In 2023 alone, Congress approved $18.4 billion in Title I funding. This funding is designed to provide targeted resources to school districts to ensure that students from low-income families can meet rigorous academic standards and have access to post-secondary opportunities. ESEA was authorized during the height of the Civil Rights Movement with the intent of addressing the two primary goals of (1) ensuring traditionally disadvantaged students were better served in an effort to create more equitable public education, and (2) addressing the funding disparities created by differences in local property taxes, the predominant source of education funding in most districts. These dual purposes were ultimately aimed at ensuring that a student’s zip code did not define their destiny.
The passing of ESEA was a watershed moment. Prior to its authorization, education policy was left mostly up to states and localities. In authorizing ESEA, the federal government launched ongoing involvement in public education and initiated a focus on principles of equity in education.
Further, research shows that school spending matters: increased funding is associated with higher levels of student achievement. However, despite the increased spending on students from low-income families via Title I, the literature on outcomes of Title I funding is mixed. The limited impact of Title I funds may result from municipalities using Title I funding to supplant or fill gaps in their overall funding and programs, instead of treating it as an additive funding stream meant to equalize funding between poorer and richer districts. Additionally, while a taxonomy of options is provided to bring rigor and research to how districts use Title I funding, the narrow set of options has not yielded the intended outcomes at scale. For instance, studies have repeatedly shown that school turnaround efforts have proven particularly stubborn and have not shown the hoped-for outcomes.
The equity gap that ESEA was created to address has not been erased. There is still a persistent achievement gap between high- and low-income students in the nation. The emergence of COVID in 2020 uprooted the public education system, and its impact on student learning, as measured by test scores, is profound. Students lost ground across all focus areas and grades. Now, in the post-pandemic era, students have continued to lose ground. The “COVID Generation” of students are behind where they should be, and many are disengaged or questioning the value of their public education. Chronic absenteeism is increasing across all grades, races, and incomes. These challenges create an imperative for schools and districts to deepen their understanding of the interests and needs of students and families. The quick technological advancements in the education market are changing what is possible and available to students, while also raising important questions around ethics, student agency, and equitable access to technology. It is a moment of immense potential in public education.
Title I funds are a key mechanism for addressing the array of challenges in education, ranging from equity to the fast-paced technological advancements transforming the field. In its current form, Title I allocation occurs via four distribution criteria. The majority of funding is allocated via basic grants determined entirely by individual student income eligibility. The other three criteria allocate funding based on the concentration of student financial need within a district. Those looking to rethink allocation often argue for considering impact per dollar allocated, beyond solely need as a qualifying indicator, essentially taking into account the cost of living and services in an area to understand how far additional funding will stretch and thereby more accurately equalize funding. But Title I must be redesigned beyond redoing the distribution formula: the money allocated must be spent differently, more creatively, innovatively, and wisely, to ensure that the needs of the most vulnerable students are finally met.
Plan of Action
Congress should approve a new allowable Title I spending category that specifically enables and encourages districts to use funds for activities that drive equity-focused innovation. Making room for innovation grounded in equity is particularly important in this present moment: equity has always mattered, but there are now tools to better understand it and to implement systems that address it. As school districts continue to recover from pandemic-related disruptions, explore new edtech learning options, and prepare an increasingly diverse population of students for the future, a spending category that signals the value the federal government sees in innovating for equity would encourage them to drive the creation of better solutions for students. Some of the spending options highlighted below are feasible under the current Title I language. By encouraging these options tethered specifically to innovation, district leadership will feel more flexibility to spend on programs that foster equity-driven innovation and create space for the new solutions needed to improve outcomes for students.
Innovation, in this context, is any systemic change that brings new services, tools, or ways of working into school districts that improve the learning opportunities and experience for students. Equity-focused innovation refers to innovation efforts that are specifically focused on improving equity within school systems. It is a solution-finding process to meet the needs of students and families. Innovation can be new, technology-driven tools for students, teachers, or others who support student learning. But innovation is not limited to technology. Allowing Title I funding to be used for activities that support and foster equity-driven innovation could also include:
- Improving data systems and usage: Ensure that school districts have agile data systems equipped to identify student weaknesses and determine the effectiveness of solutions. As more solutions come to market and are developed internally, both AI and otherwise, school systems will be able to better serve students qualifying for Title I funding if they can meaningfully assess what is and is not working and use that information to guide strategy and decision-making.
- Leadership development: Support the research and development, futurist, and equitable design skills of systems to enable leaders to guide innovation from within districts alongside students and families.
- Testing new solutions: Title I funding currently can be spent primarily on evidence-based programs; enabling the use of funding for innovative pilots that have community support would provide space to discover more effective solutions.
- Incentivizing systemic district innovation: School districts could use funding to support the creation of innovation offices within their administration structure that are tasked with developing an innovation agenda rooted in district and student needs and spearheading solutions.
- Building networks for change: District leaders charged with creating and sustaining new learning models, school models, and programs often do so in isolation. Allowing districts to fund the creation of new programs and support existing organizations that bring together school system innovators and researchers to capture and share best practices, promising new solutions, and lessons learned from testing can lead to better adoption and scale of promising new models. There are already networks that exist, for instance, the Regional Education Laboratory Program. Funding could be used to support these existing networks or to develop new networks specifically tailored to meet the needs of leaders driving these innovations.
Expanding Title I funding to make room for innovative ideas and solutions within school systems has the potential to unlock new, more effective solutions that will help close equity gaps, but spending available education funds on unproven ideas can be risky. It is essential that the Department of Education issues carefully constructed guardrails to allow ample space for new solutions to emerge and scale, while also protecting students and ensuring their educational needs are still met. These guardrails and design principles would ensure that funds are spent in impactful ways that support innovation and building an evidence base. Examples of guardrails for a school system spending Title I funding on innovation could include:
- Innovation agenda: There should be a clearly articulated, publicly available innovation agenda that lays out how needs are being identified using quantitative and qualitative data and research, the methods of how innovations are being developed and selected, the goals of the innovation and how the work will grow (or not) based on clearly defined metrics of success.
- Clear research & development process: New ideas, tools, and ways of working must come into the district with a clear R&D process that begins with student and community needs and then regularly interrogates what is and is not working, tries to understand the why behind what is working, and expands promising practices.
- Pilot size limits: Unproven and innovative ideas should begin as pilots in order to ensure they are tested, evaluated, and proven before being used more broadly.
- Timeline requirements for results: New innovation funded via Title I funding should have a limited timeline during which the system needs to show improvement and evidence of impact.
- Clear outcomes that the innovation is aiming for: Innovation is not about something new for the sake of something new. Innovation funding via Title I funding must be linked to specific outcomes that will help achieve the overarching programmatic goal of increasing educational equity in our country.
While creating an authorized funding category for equity-focused innovation through Title I would have the most widespread impact, other ways to drive equitable innovation should also be pursued in the short term, such as through the new Comprehensive Center (CC), set to open in fall 2024, which will focus on equitable funding. The Center should prioritize developing the skills district leaders need to enable and drive equity-driven innovation.
Conclusion
Investment in innovation through Title I funding can feel high risk compared to the more comfortable route of spending only on proven solutions. However, many traditional spending approaches are not currently working at scale. Investing in innovation creates the space to find solutions that actually work for students, especially those who are farthest from opportunity and whom Title I funding is intended to support. Despite the perceived risk, investing in innovation is not a high-risk path when coupled with a clear sense of community need, guardrails that promote responsible R&D and piloting processes, predetermined outcome goals, and the data systems to support transparency on progress. Large-scale federal investment in creating space for innovation through Title I funding, an already well-known mode of district funding not currently realizing its desired impact, will create solutions within public education that give students the opportunities they need and deserve.
This memo was developed in partnership with the Alliance for Learning Innovation, a coalition dedicated to advocating for building a better research and development infrastructure in education for the benefit of all students. Read more education R&D memos developed in partnership with ALI here.
How A Defunct Policy Is Still Impacting 11 Million People 90 Years Later
Have you ever noticed a lack of tree cover in certain areas of a city? Have you ever visited a city and been advised to avoid certain districts or communities? Perhaps you even recall these visual shifts occurring immediately after crossing a particular road or highway?
If so, what you experienced was likely by design:
In the early 20th century, Black communities across the U.S. were subjected to economic constraint and social isolation through housing policies that mandated segregation. Black communities were systematically excluded from the housing benefits offered by President Franklin D. Roosevelt’s New Deal and the Home Owners’ Loan Corporation (HOLC). The HOLC served as the basis of the National Housing Act of 1934, which established the Federal Housing Administration (FHA).
Housing policy discrimination was further exacerbated by the FHA’s refusal to insure mortgages near and within Black neighborhoods. The HOLC provided lenders with maps that circled areas with sizeable Black populations in red, a practice now referred to as redlining. While the systematic practice of redlining ended with the Fair Housing Act of 1968, redlining continues to economically impair over 11 million Americans, and less than half of them are Black.
You are probably thinking: (1) How is this possible? (2) How could a defunct 20th-century policy designed to discriminate against Black communities still impact over 11 million, mostly non-Black, Americans today? The answer to both questions is the same: place-based discrimination.
Policies such as redlining are designed to worsen the material conditions of a target group by preventing investment in the places where they live. Over time, this results in physical locations that are systemically denied access to features such as loans, enterprise, and ecosystem services simply due to their location or place. Place-based discrimination is the principal mechanism of redlining effects, and consequently, costs taxpayers millions of dollars per year.
What is the problem?
Starting in the 1990s, during the Clinton Administration, billions of dollars were devoted to community development and economic growth through special tax credits that attract private investment (Table 1). One of the principal vehicles created by this funding to address place-based discrimination was the Community Development Entity (CDE). According to the New Markets Tax Credit Coalition, CDEs are private entities that have “demonstrated” an interest in serving or providing capital to low-income communities (LICs) and individuals (LIIs). Once certified, CDEs are eligible to apply for a special tax credit, the New Markets Tax Credit (NMTC), through the Community Development Financial Institutions (CDFI) Fund.
However, this program, and others like it, have had a negligible impact on addressing the systemic implications of redlining. A recent Urban Institute report found that inequity in capital flow and investment trends within cities (e.g., Chicago) is driven by residential lending patterns. Highlighting the inequalities in investment among neighborhoods with different racial and income demographics, the analysts conclude that redressing economic decline involves expanding investment into disinvested neighborhoods. To date, more than $71 billion has been awarded to CDEs, and yet historically-redlined areas remain economically desolate. If these programs are intended to economically revitalize historically-redlined areas, then they are not doing what they are supposed to do.
One example of this is the city of Philadelphia:
Philadelphia, a city in the top ten for redlined populations, possesses tens of thousands of vacant buildings and lots that overlap historically-redlined zones and are riddled with brownfield sites. According to the Philadelphia Office of the Controller, historically redlined communities of Philadelphia continue to experience disproportionate amounts of poverty, poor health outcomes, limited educational attainment, unemployment, and violent crime compared to non-redlined areas in the city.
By analyzing HOLC assessment grades (1937) and New Markets Tax Credit (NMTC) Program eligibility (i.e., PolicyMap, projects from 2015–2019) for Philadelphia, PA, I found that of the 30+ Qualified Low-Income Community Investments (QLICIs) in historically-redlined areas, totaling over $400 million in tax credits, none are categorized as Community Development Entities (CDEs).
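The overlay analysis described above can be sketched as a simple join of HOLC grade records with NMTC project records by census tract, then totaling the credits that flowed into "D"-graded (redlined) tracts. The tract IDs and dollar amounts below are purely illustrative placeholders, not the actual PolicyMap data.

```python
# Hypothetical sketch of the HOLC/NMTC overlay analysis.
# All tract IDs and credit amounts are invented for illustration.

holc_grades = {
    "42101000100": "D",  # "hazardous" (historically redlined)
    "42101000200": "C",
    "42101000300": "D",
}

nmtc_projects = [
    {"tract": "42101000100", "credit": 12_000_000},
    {"tract": "42101000300", "credit": 8_500_000},
    {"tract": "42101000200", "credit": 3_000_000},
]

# Total QLICI credits landing in tracts the HOLC graded "D"
redlined_total = sum(
    p["credit"] for p in nmtc_projects
    if holc_grades.get(p["tract"]) == "D"
)
print(f"QLICI credits in historically redlined tracts: ${redlined_total:,}")
```

In the real analysis, the join key would come from a spatial overlay of the 1937 HOLC polygons and modern census-tract boundaries rather than a shared ID.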
Meanwhile, the Philadelphia City Council just passed a budget that allocates a record $788 million to the Philadelphia Police Department (PPD). Recent studies show that fatal encounters with police are more likely to occur within historically-redlined areas. It appears the nicest buildings in redlined areas may very well be police stations.
Yet, public investment has been more concerned with maintaining systems of oppression than reversing them. Why continue to invest in systems that do not create wealth? No matter your perception of American policing, the following is clear: policing does not create wealth for distressed communities.
Currently, there are 200+ cities and thousands of communities that are, like Philadelphia, enduring the systemic implications of redlining.
What would happen if public investments were allocated towards restorative policy actions within historically-redlined areas?
A federal program that amalgamates the best elements of community-driven inventiveness into a vehicle for innovative and sustainable economic development. That is, a program that promotes economic revitalization of historically-redlined communities through multipurpose, community-owned enterprises called Innovative Neighborhood Markets (INMs).
What is the policy action?
One thing that urban policy initiatives have made clear is that distressed communities are prime real estate targets for private developers. A new federal effort could ensure that investment opportunities are also accessible to community members seeking to launch place-based businesses and enterprises. Businesses and enterprises of this sort will not only reduce urban blight in historically-redlined communities, but also serve as avenues for the direct state, local, and private investment needed to address historical inequities.
The Biden-Harris Administration can combat redlining through a place-based community investment program, coined Putting Redlines in the Green: Economic Revitalization Through Innovative Neighborhood Markets (PRITG), that affords historically-redlined communities the ability to establish their own profitable enterprise before outside parties (i.e., private developers).
These Innovative Neighborhood Markets (INMs) would be resource hubs that provide affordable grocery items (e.g., fresh produce, meats, dairy); an outlet for residents of the community to market goods and services (i.e., small businesses); and cross-sector initiatives that build community enterprise and increase greenspace (e.g., the Farm to Neighborhood Model [F2NM], parks, gardens, and tree cover). Most importantly, INMs are community-owned. Through community governance, the community elects and authorizes the types of place-based businesses and enterprises that are present within their INM.
Do you remember the Philadelphia example from earlier?
Under PRITG, a number of those underutilized structures or vacant spaces would be transformed into a vested, profitable, and sustainable community resource. The majority of the financial capital remains within the community, and economic gains are partially earmarked for community revitalization (i.e., soil remediation for brownfield sites, community restoration, and construction of greenspace).
All Taxpayers Benefit
By legally and financially empowering communities with ownership, PRITG will incentivize investment and development that can actually reduce taxpayer liability. For example, the INM can generate the funding to invest in more attractive (and expensive) tree cover and landscaping that will reduce the impact of heat islands and imperviousness related to redlining, thereby reducing taxpayer liability by more than $308 million per year. Implementation of PRITG will decrease taxpayer burden through profit-driven and self-supporting community services.
“Fair and Equal” Access
Another beneficial aspect of this policy involves increasing community access to financial provisions without third-party obstacles (i.e., CDEs and CDFIs). Black and Hispanic home loan applicants are charged higher interest rates than White home loan applicants, resulting in Black and Hispanic borrowers paying $765 million in additional interest per year. Discriminatory practices only succeed in worsening community divestment and increasing resident displacement, which disproportionately impacts minority residents. Through the economic agency provided by PRITG, historically-redlined communities would have heightened protection against lending discrimination, gentrification, and displacement.
Moreover, PRITG would reinforce the Consumer Financial Protection Bureau (CFPB) and the Office of the Comptroller of the Currency (OCC)’s Combating Redlining Initiative in ensuring that formerly redlined neighborhoods receive “fair and equal access” to the lending opportunities that are—and always have been—available to non-redlined, and majority-White, neighborhoods.
While INMs possess aspects of grocery stores, community banks, business improvement districts (BIDs), and farmers markets, they would differ in one particular area: community wealth.
What is Community Wealth?
As someone who grew up in Champaign, Illinois (Douglas Park), and whose family currently lives in a historically-redlined community (Lansing, MI), it brings me peace to reimagine my community with an INM.
Until my early 20s, I spent most of my life largely unaware of the importance of community wealth to individual empowerment and its impact on the maintenance of cultural identity. For me, reimagining my community with an INM is not just about correcting the past, it is about enriching the uniqueness of what makes our home, Home.
In general, a community wealth building process needs to address the lack of an asset in a way that builds community sustainability: the materialization of one or more communal epicenters that produce a sense of ownership and pride.
So how would INMs build community wealth? Simple. The community, as a whole, would be defined as the ownership group. Each community member would be legally referenced as a shareholder of this newly acquired, financially-appreciating, community-owned enterprise.
Community Ownership Key to Community Wealth
According to Evan Absher, Chief Executive Officer at Folks Capital, there are currently two broad ways of understanding community ownership.
The first type involves community ownership in the form of trusts or fiduciary arrangements between a community entity and an independent financial establishment. This structure creates a community entity that holds the financial wealth and is subject to some form of community governance. This structure includes entities such as Community Investment Trusts, Community Land Trusts, and Mixed-Income Neighborhood Trust. These structures ensure permanent and lasting control of the land and fidelity to charitable purposes. However, these entities often do not increase actual ownership or produce meaningful wealth at the individual or family levels. Further, they are often nonprofits and can struggle with attracting capital and sustainability.
The second type of community ownership is specifically targeted at individuals and families. These are models that focus on financial agency and ownership of land and property by people within communities. This concept includes models such as employee ownership, cooperatives, ROC-USA’s model, and Folks Capital’s Neighborhood Equity Model. These models have an advantage in wealth building and agency for the families involved. The benefit of this second concept of community ownership is that community members have the autonomy to (1) choose to sell their ownership share back to the community fund; (2) receive pro rata (dividend) payments; and/or (3) if the community chooses, sell the enterprise to “would-be gentrifiers.”
Regardless, the community receives more empowerment than was ever offered by previous economic revitalization models (i.e., Opportunity Zones) [See Table 1]. However, these models sometimes lack the permanence or control of the other models. If not structured thoughtfully, this lack of control poses a risk of further gentrification.
Regardless of the approach, all models should seek first to center communities and people in the governance and benefits of the model. Institutionalizing models is not the objective. Closing the wealth gap and ending disparities in economic, health, and education outcomes are the ultimate goal.
However, an important question is raised by this policy: who counts as community—especially when talking about the ownership of an individual building?
Are multiple communities expected to be consolidated into one community for the sake of ease? Would that be fair to those communities?
The challenge is making ownership meaningful. Understandably, a resident may possess more pride if their stake in an INM is $1,000 as opposed to 20 cents.
Thus, communities that are smaller in size may benefit most from the establishment of an INM. This is not to say that large historically-redlined areas do not stand to gain from INM establishment. Quite the contrary. INMs are designed to not only enfranchise the local communities, but also revitalize the place through restorative, economic, and environmental justice.
Nevertheless, if PRITG is to provide communities with tools that guarantee full community empowerment, then factors of community ownership should be considered.
Now, one final question remains, and it can only be answered by those within historically-redlined communities: “Who is your community?”
Addressing Online Harassment and Abuse through a Collaborative Digital Hub
Efforts to monitor and combat online harassment have fallen short due to a lack of cooperation and information-sharing across stakeholders, disproportionately hurting women, people of color, and LGBTQ+ individuals. We propose that the White House Task Force to Address Online Harassment and Abuse convene government actors, civil society organizations, and industry representatives to create an Anti-Online Harassment (AOH) Hub to improve and standardize responses to online harassment and to provide evidence-based recommendations to the Task Force. This Hub will include a data-collection mechanism for research and analysis while also connecting survivors with social media companies, law enforcement, legal support, and other necessary resources. This approach will open pathways for survivors to better access the support and recourse they need and also create standardized record-keeping mechanisms that can provide evidence for and enable long-term policy change.
Challenge and Opportunity
The online world is rife with hate and harassment, disproportionately hurting women, people of color, and LGBTQ+ individuals. A research study by Pew indicated that 47% of women were harassed online because of their gender compared to 18% of men, while 54% of Black or Hispanic internet users faced race-based harassment online compared to 17% of White users. Seven in 10 LGBTQ+ adults have experienced online harassment, and 51% faced even more severe forms of abuse. Meanwhile, existing measures to combat online harassment continue to fall short, leaving victims with limited means for recourse or protection.
Numerous factors contribute to these shortcomings. Social media companies are opaque, and when survivors turn to platforms for assistance, they are often met with automated responses and few means to appeal or even contact a human representative who could provide more personalized assistance. Many survivors of harassment face threats that escalate from online to real life, leading them to seek help from law enforcement. While most states have laws against cyberbullying, law enforcement agencies are often ill-trained and ill-equipped to navigate the complex web of laws involved and the available processes through which they could provide assistance. And while there are nongovernmental organizations and companies that develop tools and provide services for survivors of online harassment, the onus continues to lie primarily on the survivor to reach out and navigate what is often both an overwhelming and a traumatic landscape of needs. Although resources exist, finding the correct organizations and reaching out can be difficult and time-consuming. Most often, the burden remains on the victims to manage and monitor their own online presence and safety.
On a larger, systemic scale, the lack of available data to quantitatively analyze the scope and extent of online harassment hinders the ability of researchers and interested stakeholders to develop effective, long-term solutions and to hold social media companies accountable. Lack of large-scale, cross-sector and cross-platform data further hinders efforts to map out the exact scale of the issue, as well as provide evidence-based arguments for changes in policy. Because the landscape of online abuse is constantly evolving, the lexicons and phrases used in attacks also change, making up-to-date information essential.
Forming the AOH Hub will improve the collection and monitoring of online harassment while preserving victims’ privacy; this data can also be used to develop future interventions and regulations. In addition, the Hub will streamline the process of receiving aid for those targeted by online harassment.
Plan of Action
Aim of proposal
The White House Task Force to Address Online Harassment and Abuse should form an Anti-Online Harassment Hub to monitor and combat online harassment. This Hub will center around a database that collects and indexes incidents of online harassment and abuse from technology companies’ self-reporting, through connections civil society groups have with survivors of harassment, and from reporting conducted by the general public and by targets of online abuse. Civil society actors that have conducted past work in providing resources and monitoring harassment incidents, ranging from academics to researchers to nonprofits, will run the AOH Hub in consortium as a steering committee. There are two aims for the creation of this hub.
First, the AOH Hub can promote collaboration within and across sectors, forging bonds among government, the technology sector, civil society, and the general public. This collaboration enables the centralization of connections and resources and brings together diverse resources and expertise to address a multifaceted problem.
Second, the Hub will include a data collection mechanism that can be used to create a record for policy and other structural reform. At present, the lack of data limits the ability of external actors to evaluate whether social media companies have worked adequately to combat harmful behavior on their platforms. An external data collection mechanism enables further accountability and can build the record for Congress and the Federal Trade Commission to take action where social media companies fall short. The allocated federal funding will be used to (1) facilitate the initial convening of experts across government departments and nonprofit organizations; (2) provide support for the engineering structure required to launch the Hub and database; (3) support the steering committee of civil society actors that will maintain this service; and (4) create training units for law enforcement officials on supporting survivors of online harassment.
Recommendation 1: Create a committee for governmental departments.
Survivors of online harassment struggle to find recourse, failed by legal technicalities in patchworks of laws across states and untrained law enforcement. The root of the problem is an outdated understanding of the implications and scale of online harassment and a lack of coordination across branches of government on who should handle online harassment and how to properly address such occurrences. A crucial first step is to examine and address these existing gaps. The Task Force should form a long-term committee of members across governmental departments whose work pertains to online harassment. This would include one person from each of the following organizations, nominated by senior staff:
- Department of Homeland Security
- Department of Justice
- Federal Bureau of Investigation
- Department of Health and Human Services
- Office on Violence Against Women
- Federal Trade Commission
This committee will be responsible for outlining shortcomings in the existing system and detailing the kind of information needed to fill those gaps. Then, the committee will outline a framework clearly establishing the recourse options available to harassment victims and the kinds of data collection required to prove a case of harassment. The framework should be completed within the first 6 months after the committee has been convened. After that, the committee will convene twice a year to determine how well the framework is working and, in the long term, implement reforms and updates to current laws and processes to increase the success rates of victims seeking assistance from governmental agencies.
Recommendation 2: Establish a committee for civil society organizations.
The Task Force shall also convene civil society organizations to help form the AOH Hub steering committee and gather a centralized set of resources. Victims will be able to access a centralized hotline and information page, and Hub personnel will then triage reports and direct victims to resources most helpful for their particular situation. This should reduce the burden on those who are targets of harassment campaigns to find the appropriate organizations that can help address their issues by matching incidents to appropriate resources.
To create the AOH Hub, members of the Task Force can map out civil society stakeholders in the space and solicit applications to achieve comprehensive and equitable representation across sectors. Relevant organizations include (but are not limited to) those working on:
- Combating domestic violence and intimate partner violence
- Addressing technology-facilitated gender-based violence (TF-GBV)
- Developing online tools for survivors of harassment to protect themselves
- Conducting policy work to improve policies on harassment
- Providing mental health support for survivors of harassment
- Providing pro bono or other forms of legal assistance for survivors of harassment
- Connecting tech company representatives with survivors of harassment
- Researching methods to address online harassment and abuse
The Task Force will convene an initial meeting, during which core members will be selected to create an advisory board, act as a liaison across members, and conduct hiring for the personnel needed to redirect victims to needed services. Other secondary members will take part in collaboratively mapping out and sharing available resources, in order to understand where efforts overlap and complement each other. These resources will be consolidated, reviewed, and published as a public database of resources within a year of the group’s formation.
Secondary members will primarily be responsible for connecting with victims who have been referred to their services. Core members, meanwhile, will meet quarterly to evaluate gaps in services and assistance provided and examine what more needs to be done to continue growing the robustness of services and aid provided.
Recommendation 3: Convene committee for industry.
After its formation, the AOH steering committee will be responsible for conducting outreach with industry partners to identify a designated team from each company best equipped to address issues pertaining to online abuse. After the first year of formation, the industry committee will provide operational reporting on existing measures within each company to address online harassment and examine gaps in existing approaches. Committee dialogue should also aim to create standardized responses to harassment incidents across industry actors and understandings of how to best uphold community guidelines and terms of service. This reporting will also create a framework for standardized best practices for data collection, in terms of the information collected on flagged cases of online harassment.
On a day-to-day basis, industry teams will be available resources for the hub, and cases can be redirected to these teams to provide person-to-person support for handling cases of harassment that require a personalized level of assistance and scale. This committee will aim to increase transparency regarding the reporting process and improve equity in responses to online harassment.
Recommendation 4: Gather committees to provide long-term recommendations for policy change.
On a yearly basis, representatives across the three committees will convene and share insights on existing measures and takeaways. These recommendations will be given to the Task Force and other relevant stakeholders, as well as made accessible to the general public. Three years after the formation of these committees, the groups will publish a report centralizing feedback and takeaways from all committees and providing recommendations for improvement moving forward.
Recommendation 5: Create a data-collection mechanism and standard reporting procedures.
The database will be run and maintained by the steering committee with support from the U.S. Digital Service, with funding from the Task Force for its initial development. The data collection mechanism will be informed by the frameworks provided by the committees that compose the Hub to create a trauma-informed and victim-centered framework surrounding the collection, protection, and use of the contained data. The database will be periodically reviewed by the steering committee to ensure that the nature and scope of data collection is necessary and respects the privacy of those whose data it contains. Stakeholders can use this data to analyze and provide evidence of the scale and cross-cutting nature of online harassment and abuse. The database would be populated using a standardized reporting form containing (1) details of the incident; (2) basic demographic data of the victim; (3) platform/means through which the incident occurred; (4) whether it is part of a larger organized campaign; (5) current status of the incident (e.g., whether a message was taken down, an account was suspended, the report is still ongoing); (6) categorization within existing proposed taxonomies indicating the type of abuse. This standardization of data collection would allow advocates to build cases regarding structured campaigns of abuse with well-documented evidence, and the database will archive and collect data across incidents to ensure accountability even if the originals are lost or removed.
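The standardized reporting form above can be sketched as a simple record type. The field names, taxonomy categories, and status values below are illustrative assumptions, not a finalized schema; the actual taxonomy would come from the frameworks the committees develop.

```python
# Hypothetical sketch of the standardized incident report (fields 1-6 above).
# All category and status names are placeholder assumptions.
from dataclasses import dataclass, field
from enum import Enum

class AbuseType(Enum):  # hypothetical taxonomy categories
    DOXXING = "doxxing"
    THREATS = "threats"
    HATE_SPEECH = "hate_speech"
    IMPERSONATION = "impersonation"

class IncidentStatus(Enum):  # hypothetical report statuses
    REPORT_ONGOING = "report_ongoing"
    CONTENT_REMOVED = "content_removed"
    ACCOUNT_SUSPENDED = "account_suspended"

@dataclass
class IncidentReport:
    details: str                    # (1) description of the incident
    demographics: dict              # (2) basic, optional victim demographics
    platform: str                   # (3) platform/means of the incident
    organized_campaign: bool        # (4) part of a larger organized campaign?
    status: IncidentStatus          # (5) current status of the report
    abuse_types: list = field(default_factory=list)  # (6) taxonomy labels

# Example report as it might be filed through the Hub's form
report = IncidentReport(
    details="Coordinated replies containing slurs and threats",
    demographics={"gender": "woman"},
    platform="Twitter",
    organized_campaign=True,
    status=IncidentStatus.REPORT_ONGOING,
    abuse_types=[AbuseType.THREATS, AbuseType.HATE_SPEECH],
)
```

Encoding reports in a fixed schema like this is what lets advocates aggregate incidents across platforms and document organized campaigns with consistent evidence.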
The reporting form will be available online through the AOH Hub. Anyone with evidence of online harassment will be able to contribute to the database, including but not limited to victims of abuse, bystanders, researchers, civil society organizations, and platforms. To protect the privacy and safety of targets of harassment, this data will not be publicly available. Access will be limited to: (1) members of the Hub and its committees; (2) affiliates of the aforementioned members; (3) researchers and other stakeholders, after submitting an application stating reasons to access the data, plans for data use, and plans for maintaining data privacy and security. Published reports using data from this database will be nonidentifiable, with statistics published in aggregate, and will not be linkable back to individuals without express consent.
This database is intended to provide data to inform the committees in and partners of the Hub of the existing landscape of technology-facilitated abuse and violence. The large-scale, cross-domain, and cross-platform nature of the data collected will allow for better understanding and analysis of trends that may not be clear when analyzing specific incidents, and provide evidence regarding disproportionate harms to particular communities (such as women, people of color, LGBTQ+ individuals). Resources permitting, the Hub could also survey those who have been impacted by online abuse and harassment to better understand the needs of victims and survivors. This data aims to provide evidence for and help inform the recommendations made from the committees to the Task Force for policy change and further interventions.
Recommendation 6: Improve law enforcement support.
Law enforcement is often ill-equipped to handle issues of technology-facilitated abuse and violence. To address this, Congress should allocate funding for the Hub to create training materials for law enforcement nationwide. The developed materials will be added to training manuals and modules nationwide, to ensure that 911 operators and officers are aware of how to handle cases of online harassment and how state and federal law can apply to a range of scenarios. As part of the training, operators will also be notified to add records of 911 calls regarding online harassment to the Hub database, with the survivor’s consent.
Conclusion
As technology-facilitated violence and abuse proliferate, we call for funding to create a steering committee in which experts and stakeholders from civil society, academia, industry, and government can collaborate on monitoring and regulating online harassment across sectors and incidents. The resulting Anti-Online Harassment Hub would maintain a data-collection mechanism accessible to researchers to better understand online harassment as well as provide accountability for social media platforms to address the issue. Finally, the Hub would provide accessible resources for targets of harassment in a fashion that would reduce the burden on these individuals. Implementing these measures would create a safer online space where survivors are able to easily access the support they need and establish a basis for evidence-based, longer-term policy change.
Platform policies on hate and harassment differ in the redress and resolution they offer. Twitter’s proactive removal of racist abuse toward members of the England football team after the UEFA Euro 2020 Finals shows that it is technically feasible for abusive content to be proactively detected and removed by the platforms themselves. However, this appears to only be for high-profile situations or for well-known individuals. For the general public, the burden of dealing with abuse usually falls to the targets to report messages themselves, even as they are in the midst of receiving targeted harassment and threats. Indeed, the current processes for reporting incidents of harassment are often opaque and confusing. Once a report is made, targets of harassment have very little control over the resolution of the report or the speed at which it is addressed. Platforms also have different policies on whether and how a user is notified after a moderation decision is made. A lot of these notifications are also conducted through automated systems with no way to appeal, leaving users with limited means for recourse.
Recent years have seen an increase in efforts to combat online harassment. Most notably, in June 2022, Vice President Kamala Harris launched a new White House Task Force to Address Online Harassment and Abuse, co-chaired by the Gender Policy Council and the National Security Council. The Task Force aims to develop policy solutions to enhance accountability of perpetrators of online harm while expanding data collection efforts and increasing access to survivor-centered services. In March 2022, the Biden-Harris Administration also launched the Global Partnership for Action on Gender-Based Online Harassment and Abuse, alongside Australia, Denmark, South Korea, Sweden, and the United Kingdom. The partnership works to advance shared principles and attitudes toward online harassment, improve prevention and response measures to gender-based online harassment, and expand data and access on gender-based online harassment.
Efforts focus on technical interventions, such as tools that increase individuals’ digital safety, automatically blur out slurs, or allow trusted individuals to moderate abusive messages directed towards victims’ accounts. There are also many guides that walk individuals through how to better manage their online presence or what to do in response to being targeted. Other organizations provide support for those who are victims and provide next steps, help with reporting, and information on better security practices. However, due to resource constraints, organizations may only be able to support specific types of targets, such as journalists, victims of intimate partner violence, or targets of gendered disinformation. This increases the burden on victims to find support for their specific needs. Academic institutions and researchers have also been developing tools and interventions that measure and address online abuse or improve content moderation. While there are increasing collaborations between academics and civil society, there are still gaps that prevent such interventions from being deployed to their full efficacy.
While complete privacy and security are extremely difficult to guarantee in a technical sense, we envision a database design that preserves data privacy while maintaining usability. First, the fields required in an incident report form would minimize the amount of personally identifiable information collected. Because some data can be crowdsourced from the public and external observers, that part of the dataset would consist of existing public data. Non-public data would be entered only by individuals sharing incidents that target them (e.g., direct messages), and each individual could choose whether that data is visible in the database or reflected only in summary statistics. Furthermore, the data collection methods and database structure would be periodically reviewed by the steering committee of civil society organizations, which would recommend improvements as needed.
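As an illustration, the per-report visibility choice described above could be modeled with a flag that gates which fields the public-facing database exposes. The sketch below is a minimal, hypothetical Python schema; the field names and visibility levels are illustrative assumptions, not a finalized design:

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"        # full record visible in the database
    SUMMARY_ONLY = "summary" # counted only in aggregate statistics

@dataclass
class IncidentReport:
    # Minimal non-identifying fields, collectable from public observation
    platform: str    # e.g., "twitter"
    abuse_type: str  # e.g., "doxxing", "threat"
    date: str        # ISO date only (no time of day), to reduce re-identification risk
    # Sensitive material supplied only by the targeted individual
    evidence: str = ""
    visibility: Visibility = Visibility.SUMMARY_ONLY

def public_view(report: IncidentReport) -> dict:
    """Return only the fields safe to show in the public-facing database."""
    row = {"platform": report.platform,
           "abuse_type": report.abuse_type,
           "date": report.date}
    if report.visibility is Visibility.PUBLIC:
        row["evidence"] = report.evidence
    return row
```

Under this design, a report marked `SUMMARY_ONLY` contributes to aggregate counts but its sensitive content never appears in the public view.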
Data collection and reporting can be conducted internationally, since limiting data collection to the U.S. would undermine our goals of intersectionality. The hotline, however, will likely offer more comprehensive support for U.S.-based issues at first. In the long run, its services can also be expanded internationally as a collaborative effort across governments.
Creating a Fair Work Ombudsman to Bolster Protections for Gig Workers
To increase protections for fair work, the U.S. Department of Labor (DOL) should create an Office of the Ombudsman for Fair Work. Gig workers are a category of non-employee contract workers who engage in on-demand work, often through online platforms, and they have historically been among the most vulnerable workers in the U.S. economy. A large portion of gig workers are people of color, and the nature of their temporary and largely unregulated work can leave them vulnerable to economic instability and workplace abuse. Currently, there is no federal mechanism to protect gig workers, and state-level initiatives have not offered sufficient policy redress. Establishing an Office of the Ombudsman would provide the Department of Labor with a central entity to investigate worker complaints against gig employers, collect data and evidence about the current gig economy, and educate gig workers about their rights. There is strong precedent for this policy solution: bureaus across the federal government have successfully implemented ombudsmen that are independent and support vulnerable constituents. To ensure its legal and long-lasting status, the Secretary of Labor should establish this Office in an act of internal agency reorganization.
Challenge and Opportunity
The proportion of the U.S. workforce engaging in gig work has risen steadily in the past few decades, from 10.1% in 2005 to 15.8% in 2015 to roughly 20% in 2018. Since the COVID-19 pandemic began, this trend has only accelerated, and a record number of Americans have now joined the gig economy and rely on its income. In a 2021 Pew Research study, over 16% of Americans reported having made money through online platform work alone, such as apps like Uber and DoorDash, which is only a subset of gig work. Gig workers in particular are more likely to be Black or Latino compared to the overall workforce.
Though millions of Americans rely on gig work, it does not provide critical employee benefits, such as minimum wage guarantees, parental leave, healthcare, overtime, unemployment insurance, or recourse for injuries incurred during work. According to an NPR survey, in 2018 more than half of contract workers received zero benefits through work. Further, the National Labor Relations Act, which protects employees’ rights to unionize and collectively bargain without retaliation, does not protect gig workers. This lack of benefits, rights, and voice leaves millions of workers more vulnerable than full-time employees to predatory employers, financial instability, and health crises, particularly during emergencies—such as the COVID-19 pandemic.
Additionally, in 2022, inflation reached its highest level in decades, and though the price of necessities has spiked, wages have not increased correspondingly. Extreme inflation hurts lower-income workers without savings the most and is especially dangerous to gig workers, some of whom make less than the federal minimum hourly wage and whose income and work are subject to constant flux.
State-level measures have so far failed to create protections for all gig workers. In 2019, California passed AB5, which legally reclassified many gig workers as employees instead of independent contractors, thus entitling them to more benefits and protections. But subsequent bills and Proposition 22 reverted several groups of gig workers, including online platform workers such as Uber and DoorDash drivers, to independent-contractor status. Ongoing litigation over Proposition 22 leaves the future status of online platform gig workers in California unclear. In 2022, Washington State passed ESHB 2076, guaranteeing online platform workers, but not all gig workers, the benefits of full-time employees.
This sparse patchwork of state-level measures, which only supports subgroups of gig workers, could trigger a “race to the bottom” in which employers of gig workers relocate to less strict states. Additionally, inconsistencies between state laws make it harder for gig workers to understand their rights and gain redress for grievances, harder for businesses to determine with certainty their duties and liabilities, and harder for states to enforce penalties when an employer is headquartered in one state and the gig worker lives in another. The status quo is also difficult for businesses that strive to be better employers because it creates downward pressure on the entire landscape of labor market competition. Ultimately, only federal policy action can fully address these inconsistencies and broadly increase protections and benefits for all gig workers.
The federal ombudsman’s office outlined in this proposal can serve as a resource for gig workers to understand the scope of their current rights, provide a voice to amplify their grievances and harms, and collect data and evidence to inform policy proposals. It is the first step toward a sustainable and comprehensive national solution that expands the rights of gig workers.
Specifically, clarifying what rights, benefits, and means of recourse gig workers do and do not have would help gig workers better plan for healthcare and other emergent needs. It would also allow better tracking of trends in the labor market and systemic detection of employee misclassification. Hearing gig workers’ complaints in a centralized office can help the Department of Labor more expeditiously address gig workers’ concerns in situations where they legally do have recourse and can otherwise help the Department of Labor better understand the needs of and harms experienced by all workers. Collecting broad-ranging data on gig workers in particular could help inform federal policy change on their rights and protections. Currently, most datasets are survey-based and often leave out people who were not working a gig job at the time the survey was conducted but who typically do. More broadly, because of its informal and dynamic nature, the gig economy is difficult to accurately count and characterize, and an entity that is specifically charged with coordinating and understanding this growing sector of the market is key.
Lastly, employees who are not gig workers are sometimes misclassified as such and thus lose out on benefits and protections they are legally entitled to. Having a centralized ombudsman office dedicated to gig work could expedite support of gig workers seeking to correct their classification status, which the Wage and Hour Division already generally deals with, as well as help the Department of Labor and other agencies collect data to clarify the scope of the problem.
Plan of Action
The Department of Labor should establish an Office of the Ombudsman for Fair Work. This office should be independent of other Department of Labor agencies and officials, and it should report directly to the Secretary of Labor. The Office would operate at the federal level, providing a single, consistent mechanism across all states.
The Secretary of Labor should establish the Office in an act of internal agency reorganization. By establishing the Office such that its powers do not contradict the Department of Labor’s statutory limitations, the Secretary can ensure the Office’s status as legal and long-lasting, due to the discretionary power of the Department to interpret its statutes.
The role of the Office of the Ombudsman for Fair Work would be threefold: to serve as a centralized point of contact for hearing complaints from gig workers; to act as a central resource and conduct outreach to gig workers about their rights and protections; and to collect data such as demographic, wage, and benefit trends on the labor practices of the gig economy. Together, these responsibilities ensure that this Office consolidates and augments the actions of the Department of Labor as they pertain to workers in the gig economy, regardless of their classification status.
The functions of the ombudsman should be as follows:
- Establish a clear and centralized mechanism for hearing, collating, and investigating complaints from workers in the gig economy, such as through a helpline or mobile app.
- Establish and administer an independent, neutral, and confidential process to receive, investigate, resolve, and provide redress for cases in which employers misrepresent to individuals that they are engaged as independent contractors when they are in fact engaged as employees.
- Commence court proceedings to enforce fair work practices and entitlements, as they pertain to workers in the gig economy, in conjunction with other offices in the DOL.
- Represent employees or contractors who are or may become a party to proceedings in court over unfair contracting practices, including but not limited to misclassification as independent contractors. The office would refer matters to interagency partners within the Department of Labor and across other organizations engaged in these proceedings, augmenting existing work where possible.
- Provide education, assistance, and advice to employees, employers, and organizations, including best practice guides to workplace relations or workplace practices and information about rights and protections for workers in the gig economy.
- Conduct outreach in multiple languages to gig economy workers informing them of their rights and protections and of the Office’s role to hear and address their complaints and entitlements.
- Serve as the central data collection and publication office for all gig-work-related data. The Office will publish a yearly report detailing demographic, wage, and benefit trends faced by gig workers. Data could be collected through outreach to gig workers or their employers, or through a new data-sharing agreement with the Internal Revenue Service (IRS). This data report would also summarize anonymized trends based on the complaints collected (as per function 1), including aggregate statistics on wage theft, reports of harassment or discrimination, and misclassification. These trends would also be broken down by demographic group to proactively identify salient inequities. The office may also provide separate data on platform workers, which may be easier to collect and collate, since platform workers are a particular subject of focus in current state legislation and litigation.
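The anonymized trend reporting described in the last function could, for instance, suppress small cells so that no individual complainant can be singled out. The following is a minimal Python sketch under assumed data shapes (a list of complaint records, each tagged with a category); the suppression threshold of 10 is an illustrative choice, not a policy recommendation:

```python
from collections import Counter

SUPPRESSION_THRESHOLD = 10  # cells smaller than this are withheld from the report

def complaint_trends(complaints, group_key):
    """Aggregate complaint counts per group, suppressing small cells
    so that no individual reporter can be re-identified."""
    counts = Counter(record[group_key] for record in complaints)
    return {group: (n if n >= SUPPRESSION_THRESHOLD else "<10")
            for group, n in counts.items()}
```

A yearly report built this way would publish only the aggregated counts, never the underlying complaint records.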
Establishing an Office of the Ombudsman for Fair Work within the Department of Labor will require costs of compensation for the ombudsman and staff, other operational costs, and litigation expenses. To reflect the need for a reaction to the rapid ongoing changes in gig economy platforms, a small portion of the Office’s budget should be set aside to support the appointment of a chief innovation officer, aimed at examining how technology can strengthen its operations. Some examples of tasks for this role include investigating and strengthening complaint sorting infrastructure, utilizing artificial intelligence to evaluate contracts for misclassification, and streamlining request for proposal processes.
Given the continued growth of the gig economy and the precarious status of gig workers at the onset of an economic recession, this Office should be established as soon as possible. Establishing, appointing, and staffing the Office will take up to a year and will require budgeting within the DOL.
There are many precedents of ombudsmen in federal office, including the Office of the Ombudsman for the Energy Employees Occupational Illness Compensation Program within the Department of Labor. Additionally, the IRS established the Office of the Taxpayer Advocate, and the Department of Homeland Security has both a Citizenship and Immigration Services Ombudsman and an Immigration Detention Ombudsman. These offices have helped educate constituents about their rights, resolved issues that an individual might have with that federal agency, and served as independent oversight bodies. The Australian Government has a Fair Work Ombudsman that provides resources to differentiate between an independent contractor and employee and investigates employers who may be engaging in sham contracting or other illegal practices. Following these examples, the Office of the Ombudsman for Fair Work should work within the Department of Labor to educate, assist, and provide redress for workers engaged in the gig economy.
Conclusion
How to protect gig workers is a long-standing open question for labor policy and is likely to require more attention as post-pandemic conditions affect labor trends. The federal government needs a solution to the vulnerability and instability experienced by gig workers, one that can operate independently of legislation that may take longer to gain consensus. Establishing an ombudsman’s office is the first step toward increased federal oversight of gig work. The ombudsman will use data, reporting, and individual worker cases to build a clearer picture of how to provide redress for laborers who have been harmed by gig work, offering greater visibility into the status and concerns of gig workers. It will additionally serve as a single point of entry for gig workers and businesses to learn about their rights and for gig workers to lodge complaints. If made a reality, this office would be an influential first step in changing the entire policy ecosystem regarding gig work.
There is a current definitional debate about whether gig workers and platform workers are employees or contractors. Until this issue of misclassification can be resolved, there will likely not be a comprehensive state or federal policy governing gig work. However, the office of an ombudsman would be able to serve as the central point within the Department of Labor to handle gig worker issues, and it would be the entity tasked with collecting and publishing data about this class of laborers. This would help elevate the problems gig workers face as well as paint a picture of the extent of the issue for future legislation.
Each ombudsman will be appointed for a six-year period, to ensure insulation from partisan politics.
States often do not have adequate solutions to handle the discrepancies between employees and contractors. There is also the “race to the bottom” issue, where if protections are increased in one state, gig employers will simply relocate to states where the policies are less stringent. Further, there is the issue of gig companies being headquartered in one state while employees work in another. It makes sense for the Department of Labor to house a central, federal mechanism to handle gig work.
The key challenge right now is for the federal government to collect data and solve issues regarding protections for gig work. The office of the ombudsman’s broadly defined mandate is actually an advantage in this still-developing conversation about gig work.
Establishing a new Department of Labor office is no small feat. It requires a clear definition of the ombudsman’s goals and permitted activities, along with buy-in from key DOL officials. The office would also have to recruit, hire, and train staff. These tasks may slow the proposal’s launch. Since DOL plans its budget several years in advance, this proposal would likely be targeted for the 2026 cycle.
Establishing an AI Center of Excellence to Address Maternal Health Disparities
Maternal mortality is a crisis in the United States. Yet more than 60% of maternal deaths are preventable—with the right evidence-based interventions. Data is a powerful tool for uncovering best care practices. While healthcare data, including maternal health data, has been generated at a massive scale by the widespread adoption and use of Electronic Health Records (EHR), much of this data remains unstandardized and unanalyzed. Further, while many federal datasets related to maternal health are openly available through initiatives set forth in the Open Government National Action Plan, there is no central coordinating body charged with analyzing this breadth of data. Advancing data harmonization, research, and analysis are foundational elements of the Biden Administration’s Blueprint for Addressing the Maternal Health Crisis. As a data-driven technology, artificial intelligence (AI) has great potential to support maternal health research efforts. Examples of promising applications of AI include using electronic health data to predict whether expectant mothers are at risk of difficulty during delivery. However, further research is needed to understand how to effectively implement this technology in a way that promotes transparency, safety, and equity. The Biden-Harris Administration should establish an AI Center of Excellence to bring together data sources and then analyze, diagnose, and address maternal health disparities, all while demonstrating trustworthy and responsible AI principles.
Challenge and Opportunity
Maternal deaths currently average around 700 per year, and severe maternal morbidity-related conditions impact upward of 60,000 women annually. Stark racial and ethnic disparities persist in U.S. pregnancy outcomes, including in maternal morbidity and mortality. According to the Centers for Disease Control and Prevention (CDC), “Black women are three times more likely to die from a pregnancy-related cause than White women.” Research is ongoing to identify the root causes, which include socioeconomic factors such as insurance status, access to healthcare services, and risks associated with social determinants of health. For example, maternity care deserts exist in counties throughout the country where maternal health services are substantially limited or unavailable, impacting an estimated 2.2 million women of child-bearing age.
Many federal, public, and private datasets exist to understand the conditions that impact pregnant people, the quality of the care they receive, and ultimate care outcomes. For example, the CDC collects abundant data on maternal health, including the Pregnancy Mortality Surveillance System (PMSS) and the National Vital Statistics System (NVSS). Many of these datasets, however, have yet to be analyzed at scale or linked to other federal or privately held data sources in a comprehensive way. More broadly, an estimated 30% of the data generated globally is produced by the healthcare industry. AI is uniquely designed for data management, including cataloging, classification, and data integration. AI will play a pivotal role in the federal government’s ability to process an unprecedented volume of data to generate evidence-based recommendations to improve maternal health outcomes.
Applications of AI have rapidly proliferated throughout the healthcare sector due to their potential to reduce healthcare expenditures and improve patient outcomes; several such applications across the maternal health continuum are shown in Figure 1. For example, evidence suggests that AI can help clinicians identify more than 70% of at-risk mothers during the first trimester by analyzing patient data and identifying patterns associated with poor health outcomes. Based on its findings, AI can recommend which patients are most likely to be at risk for pregnancy complications before they occur. Research has also demonstrated the use of AI in fetal health monitoring.
Yet for all of AI’s potential, there is a significant dearth of consumer and medical provider understanding of how these algorithms work. Policy analysts argue that “algorithmic discrimination” and feedback loops in algorithms—which may exacerbate algorithmic bias—are potential risks of using AI in healthcare outside of the confines of an ethical framework. In response, certain federal entities such as the Department of Defense, the Office of the Director of National Intelligence, the National Institute for Standards and Technology, and the U.S. Department of Health and Human Services have published and adopted guidelines for implementing data privacy practices and building public trust of AI. Further, past Day One authors have proposed the establishment of testbeds for government-procured AI models to provide services to U.S. citizens. This is critical for enhancing the safety and reliability of AI systems while reducing the risk of perpetuating existing structural inequities.
It is vital to demonstrate safe, trustworthy uses of AI and measure the efficacy of these best practices through applications of AI to real-world societal challenges. For example, potential use cases of AI for maternal health include a social determinants of health (SDoH) extractor, which combines AI with clinical notes to more effectively identify SDoH information and analyze its potential role in health inequities. A center dedicated to ethically developing AI for maternal health would allow for the development of evidence-based guidelines for broader AI implementation across healthcare systems throughout the country. Lessons learned from this effort will contribute to the knowledge base around ethical AI and enable development of AI solutions for health disparities more broadly.
Plan of Action
To meet the calls for advancing data collection, standardization, transparency, research, and analysis to address the maternal health crisis, the Biden-Harris Administration should establish an AI Center of Excellence for maternal health. The AI Center of Excellence for Maternal Health will bring together data sources, then analyze, diagnose, and address maternal health disparities, all while demonstrating trustworthy and responsible AI principles. The Center should be created within the Department of Health and Human Services (HHS) and work closely with relevant offices throughout HHS and beyond, including the HHS Office of the Chief Artificial Intelligence Officer (OCAIO), the National Institutes of Health (NIH) IMPROVE initiative, the CDC, the Veterans Health Administration (VHA), and the National Institute for Standards and Technology (NIST). The Center should offer competitive salaries to recruit the best and brightest talent in AI, human-centered design, biostatistics, and human-computer interaction.
The first priority should be to work with all agencies tasked by the White House Blueprint for Addressing the Maternal Health Crisis to collect and evaluate data. This includes privately held EHR data made available through the Qualified Health Information Network (QHIN) and federal data from the CDC, Centers for Medicare & Medicaid Services (CMS), Office of Personnel Management (OPM), Health Resources and Services Administration (HRSA), NIH, United States Department of Agriculture (USDA), Department of Housing and Urban Development (HUD), the Veterans Health Administration, and Environmental Protection Agency (EPA), all of which contain datasets relevant to maternal health at different stages of the reproductive health journey from Figure 1. The Center should serve as a data clearing and cleaning shop, preparing these datasets using best practices for data management, preparation, and labeling.
The second priority should be to evaluate existing datasets to establish high-priority, high-impact applications of AI-enabled research for improving clinical care guidelines and tools for maternal healthcare providers. These AI demonstrations should be aligned with the White House’s Action Plan and be focused on implementing best practices for AI development, such as the AI Risk Management Framework developed by NIST. The following examples demonstrate how AI might help address maternal health disparities, based on priority areas informed by clinicians in the field:
- AI implementation should be explored for analysis of electronic health records from the VHA and QHIN to predict patients who have a higher risk of pregnancy and/or delivery complications.
- Drawing on the robust data collection and patient surveillance capabilities of the VHA and HRSA, AI should be explored for the deployment of digital tools to help monitor patients during pregnancy to ensure adequate and consistent use of prenatal care.
- Using VHA data and QHIN data, AI should be explored in supporting patient monitoring in instances of patient referrals and/or transfers to hospitals that are appropriately equipped to serve high-risk patients, following guidelines provided by the American College of Obstetricians and Gynecologists.
- Data on housing from HUD, rural development from the USDA, environmental health from the EPA, and social determinants of health research from the CDC should be connected to risk factors for maternal mortality in the academic literature to create an AI-powered risk algorithm.
- Payment models operated by CMS and OPM should be examined for novel strategies to enhance maternal health outcomes and reduce maternal deaths.
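To make the data-linking idea in the bullets above concrete, records from different agencies could be joined on a shared geographic key (for example, a county FIPS code) before any risk model is trained. The sketch below is a simplified illustration with hypothetical field names; it does not describe any agency’s actual data structures:

```python
def join_by_county(*agency_tables):
    """Merge county-keyed feature dictionaries from several agencies
    into one feature table suitable for downstream risk modeling."""
    merged = {}
    for table in agency_tables:
        for county, features in table.items():
            merged.setdefault(county, {}).update(features)
    return merged

# Hypothetical example: HUD housing data plus EPA environmental data,
# keyed by county FIPS code.
hud = {"01001": {"housing_burden": 0.31}}
epa = {"01001": {"air_quality_index": 54}, "01003": {"air_quality_index": 40}}
features = join_by_county(hud, epa)
```

In practice, such linkage would need to follow each agency’s data-sharing agreements and privacy requirements, but the underlying operation is this kind of keyed merge.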
The final priority should be direct translation of the findings from AI to federal policymaking around reducing maternal health disparities as well as ethical development of AI tools. Research findings for both aspects of this interdisciplinary initiative should be framed using Living Evidence models that help ensure that research-derived evidence and guidance remain current.
The Center should be able to meet the following objectives within the first year after creation to further the case for future federal funding and creation of more AI Centers of Excellence for healthcare:
- Conduct a study on the use cases uncovered for AI to help address maternal health disparities explored through the various demonstration projects.
- Publish a report of study findings, which should be submitted to Congress with recommendations to help inform funding priorities for subsequent research activities.
- Make study findings available to the public to help build public trust in AI.
Successful piloting of the Center could be enabled by passage of an equivalent bill to S. 893 in the current Congress. In March 2021, S. 893, the Tech to Save Moms Act, was introduced in the Senate to fund research by the National Academies of Sciences, Engineering, and Medicine on the role of AI in maternal care delivery and its impact on bias in maternal health. Passage of an equivalent bill into law is a critical first step in supporting this work: it would enable the National Academies to conduct research in parallel with HHS, generating more findings and broadening potential impact.
Conclusion
The United States has the highest maternal mortality rate among developed countries. Yet more than 60% of pregnancy-related deaths are preventable, highlighting a critical opportunity to uncover the factors impeding more equitable health outcomes for the nation as a whole. Legislative support for research into AI’s role in addressing maternal health disparities will affirm the nation’s commitment to ensuring that we are prepared to thrive in a 21st century influenced and shaped by next-generation technologies such as artificial intelligence.
Creating Auditing Tools for AI Equity
The unregulated use of algorithmic decision-making systems (ADS)—systems that crunch large amounts of personal data and derive relationships between data points—has negatively affected millions of Americans. These systems impact equitable access to education, housing, employment, and healthcare, with life-altering effects. For example, commercial algorithms used to guide health decisions for approximately 200 million people in the United States each year were found to systematically discriminate against Black patients, reducing, by more than half, the number of Black patients who were identified as needing extra care.
One way to combat algorithmic harm is by conducting system audits, yet there are currently no standards for auditing AI systems at the scale necessary to ensure that they operate legally, safely, and in the public interest. According to one research study examining the ecosystem of AI audits, only one percent of AI auditors believe that current regulation is sufficient.
To address this problem, the National Institute of Standards and Technology (NIST) should invest in the development of comprehensive AI auditing tools, and federal agencies with the charge of protecting civil rights and liberties should collaborate with NIST to develop these tools and push for comprehensive system audits.
These auditing tools would help the enforcement arms of these federal agencies save time and money while fulfilling their statutory duties. Additionally, there is a pressing need to develop these tools now, with Executive Order 13985 instructing agencies to “focus their civil rights authorities and offices on emerging threats, such as algorithmic discrimination in automated technology.”
Challenge and Opportunity
The use of AI systems across all aspects of life has become commonplace as a way to improve decision-making and automate routine tasks. However, their unchecked use can perpetuate historical inequities, such as discrimination and bias, while also potentially violating American civil rights.
Algorithmic decision-making systems are often used in prioritization, classification, association, and filtering tasks in a way that is heavily automated. ADS become a threat when people uncritically rely on the outputs of a system, use them as a replacement for human decision-making, or use systems with no knowledge of how they were developed. These systems, while extremely useful and cost-saving in many circumstances, must be created in a way that is equitable and secure.
Ensuring the legal and safe use of ADS begins with recognizing the challenges the federal government faces. On the one hand, the government wants to avoid devoting excessive resources to managing these systems; with new AI systems released every day, it is becoming unreasonable to oversee each one closely. On the other hand, we cannot blindly trust all developers and users to make appropriate choices with ADS.
This is where tools for the AI development lifecycle come into play, offering a third alternative between constant monitoring and blind trust. By implementing auditing tools and signatory practices, AI developers will be able to demonstrate compliance with preexisting and well-defined standards while enhancing the security and equity of their systems.
Due to the extensive scope and diverse applications of AI systems, it would be difficult for the government to create a centralized body to oversee all systems or demand each agency develop solutions on its own. Instead, some responsibility should be shifted to AI developers and users, as they possess the specialized knowledge and motivation to maintain proper functioning systems. This allows the enforcement arms of federal agencies tasked with protecting the public to focus on what they do best, safeguarding citizens’ civil rights and liberties.
Plan of Action
To ensure security and verification throughout the AI development lifecycle, a suite of auditing tools is necessary. These tools should help enable the outcomes we care about: fairness, equity, and legality. The results of these audits should be reported and documented verifiably, for example in an immutable ledger accessible only to authorized developers and enforcement bodies, or through a verifiable code-signing mechanism. We leave the specifics of the reporting and documentation process to the stakeholders involved, as each agency may have different reporting structures and needs. Other possible options, such as manual audits or audits conducted without the use of tools, may not provide the same level of efficiency, scalability, transparency, accuracy, or security.
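The "immutable ledger" option can be sketched as a hash-chained, append-only log. The toy class below is our own illustration (the class name and record format are not any agency's standard): each audit record embeds the hash of the previous record, so altering an earlier entry breaks the chain and is detectable on verification. A production system would add signatures and replicated storage.

```python
import hashlib
import json

class AuditLedger:
    """Toy append-only audit ledger: each record stores the hash of the
    previous record, making after-the-fact tampering detectable."""

    def __init__(self):
        self.records = []

    def append(self, audit_result: dict) -> str:
        # Chain this record to the previous one via its hash.
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"result": audit_result, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        # Recompute every hash and check the chain links in order.
        prev = "0" * 64
        for rec in self.records:
            body = {"result": rec["result"], "prev": rec["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev"] != prev or digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

An enforcement body auditing such a ledger only needs to re-run `verify()` to confirm that no previously reported audit outcome has been silently edited.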
The federal government’s role is to provide the necessary tools and processes for self-regulatory practices. Heavy-handed regulations or excessive government oversight are not well-received in the tech industry, which argues that they tend to stifle innovation and competition. AI developers also have concerns about safeguarding their proprietary information and users’ personal data, particularly in light of data protection laws.
Auditing tools provide a solution to this challenge by enabling AI developers to share and report information in a transparent manner while still protecting sensitive information. This allows for a balance between transparency and privacy, providing the necessary trust for a self-regulating ecosystem.
The equity tool and process, funded and developed by government agencies such as NIST, would consist of a combination of (1) AI auditing tools for security and fairness (which could be based on or incorporate open source tools such as AI Fairness 360 and the Adversarial Robustness Toolbox), and (2) a standardized process and guidance for integrating these checks (which could be based on or incorporate guidance such as the U.S. Government Accountability Office's Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities).
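To make concrete the kind of check such a tool would run, here is a minimal sketch of one widely used fairness metric, disparate impact (the ratio of favorable-outcome rates between groups). This is plain Python computing the underlying metric, not the AI Fairness 360 API itself; the function name and thresholds are illustrative.

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    Ratios below roughly 0.8 are commonly flagged for review
    (the 'four-fifths rule' used in employment contexts)."""
    counts = {True: [0, 0], False: [0, 0]}  # is_privileged -> [favorable, total]
    for outcome, group in zip(outcomes, groups):
        key = (group == privileged)
        counts[key][1] += 1
        counts[key][0] += outcome  # outcome is 1 (favorable) or 0
    privileged_rate = counts[True][0] / counts[True][1]
    unprivileged_rate = counts[False][0] / counts[False][1]
    return unprivileged_rate / privileged_rate
```

An auditing suite would bundle many such metrics, run them at defined lifecycle stages, and record the results for later verification.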
Dioptra, a recent effort between NIST and the National Cybersecurity Center of Excellence (NCCoE) to build machine learning testbeds for security and robustness, is an excellent example of the type of lifecycle management application that would ideally be developed. Failure to protect civil rights and ensure equitable outcomes must be treated as seriously as security flaws, as both impact our national security and quality of life.
Equity considerations should be applied across the entire lifecycle; training data is not the only possible source of problems. Inappropriate data handling, model selection, algorithm design, and deployment also contribute to unjust outcomes. This is why tools combined with specific guidance are essential.
As some scholars note, “There is currently no available general and comparative guidance on which tool is useful or appropriate for which purpose or audience. This limits the accessibility and usability of the toolkits and results in a risk that a practitioner would select a sub-optimal or inappropriate tool for their use case, or simply use the first one found without being conscious of the approach they are selecting over others.”
Companies utilizing the various packaged tools on their ADS could sign off on the results using code signing. This would create a record that these organizations ran these audits along their development lifecycle and received satisfactory outcomes.
We envision a suite of auditing tools, each tool applying to a specific agency and enforcement task. Precedents for this type of technology already exist. Much like security became a part of the software development lifecycle with guidance developed by NIST, equity and fairness should be integrated into the AI lifecycle as well. NIST could spearhead a government-wide initiative on auditing AI tools, leading guidance, distribution, and maintenance of such tools. NIST is an appropriate choice considering its history of evaluating technology and providing guidance around the development and use of specific AI applications such as the NIST-led Face Recognition Vendor Test (FRVT).
Areas of Impact & Agencies / Departments Involved
Security & Justice
The U.S. Department of Justice, Civil Rights Division, Special Litigation Section; the Department of Homeland Security; U.S. Customs and Border Protection; the U.S. Marshals Service
Public & Social Sector
The U.S. Department of Housing and Urban Development’s Office of Fair Housing and Equal Opportunity
Education
The U.S. Department of Education
Environment
The U.S. Department of Agriculture, Office of the Assistant Secretary for Civil Rights; the Federal Energy Regulatory Commission; the Environmental Protection Agency
Crisis Response
Federal Emergency Management Agency
Health & Hunger
The U.S. Department of Health and Human Services, Office for Civil Rights; the Centers for Disease Control and Prevention; the Food and Drug Administration
Economic
The Equal Employment Opportunity Commission; the U.S. Department of Labor, Office of Federal Contract Compliance Programs
Infrastructure
The U.S. Department of Transportation, Office of Civil Rights; the Federal Aviation Administration; the Federal Highway Administration
Information Verification & Validation
The Federal Trade Commission; the Federal Communications Commission; the Securities and Exchange Commission
Many of these tools are open source and free to the public. A first step could be combining these tools with agency-specific standards and plain language explanations of their implementation process.
Benefits
These tools would provide several benefits to federal agencies and developers alike. First, they allow organizations to protect their data and proprietary information while performing audits. Any audits, whether on the data, model, or overall outcomes, would be run and reported by the developers themselves. Developers of these systems are the best choice for this task since ADS applications vary widely, and the particular audits needed depend on the application.
Second, while many developers may opt to use these tools voluntarily, standardizing and mandating their use would make it easy to assess any system suspected of violating the law. In this way, the federal government will be able to manage standards more efficiently and effectively.
Third, although this tool would be designed for the AI lifecycle that results in ADS, it can also be applied to traditional auditing processes. Metrics and evaluation criteria will need to be developed based on existing legal standards and evaluation processes; once these metrics are distilled for incorporation into a specific tool, this tool can be applied to non-ADS data as well, such as outcomes or final metrics from traditional audits.
Fourth, we believe that a strong signal from the government that equity considerations in ADS are important and easily enforceable will impact AI applications more broadly, normalizing these considerations.
Example of Opportunity
An agency that might use this tool is the Department of Housing and Urban Development (HUD), whose purpose is to ensure that housing providers do not discriminate based on race, color, religion, national origin, sex, familial status, or disability.
To enforce these standards, HUD, which is responsible for 21,000 audits a year, investigates and audits housing providers to assess compliance with the Fair Housing Act, the Equal Credit Opportunity Act, and other related regulations. During these audits, HUD may review a provider’s policies, procedures, and records, as well as conduct on-site inspections and tests to determine compliance.
Using an AI auditing tool could streamline and enhance HUD’s auditing processes. In cases where ADS were used and suspected of harm, HUD could ask for verification that an auditing process was completed and specific metrics were met, or require that such a process be undergone and reported to them.
Noncompliance with legal standards of nondiscrimination would apply to ADS developers as well, and we envision the enforcement arms of protection agencies would apply the same penalties in these situations as they would in non-ADS cases.
R&D
NIST will require funding and policy support to implement this plan. The recent CHIPS and Science Act has provisions to support NIST's role in developing "trustworthy artificial intelligence and data science," including the testbeds mentioned above. Research and development can be partially contracted out to universities and other national laboratories or through partnerships/contracts with private companies and organizations.
The first iterations will need to be developed in partnership with an agency interested in integrating an auditing tool into its processes. The specific tools and guidance developed by NIST must be applicable to each agency’s use case.
The auditing process would include auditing data, models, and other information vital to understanding a system’s impact and use, informed by existing regulations/guidelines. If a system is found to be noncompliant, the enforcement agency has the authority to impose penalties or require changes to be made to the system.
Pilot program
NIST should develop a pilot program to test the feasibility of AI auditing. It should be conducted on a smaller group of systems to test the effectiveness of the AI auditing tools and guidance and to identify any potential issues or areas for improvement. NIST should use the results of the pilot program to inform the development of standards and guidelines for AI auditing moving forward.
Collaborative efforts
Achieving a self-regulating ecosystem requires collaboration. The federal government should work with industry experts and stakeholders to develop the necessary tools and practices for self-regulation.
A multistakeholder team from NIST, federal agency issue experts, and ADS developers should be established during the development and testing of the tools. Collaborative efforts will help delineate responsibilities, with AI creators and users responsible for implementing and maintaining compliance with the standards and guidelines, and agency enforcement arms responsible for ensuring continued compliance.
Regular monitoring and updates
The enforcement agencies will continuously monitor and update the standards and guidelines to keep them up to date with the latest advancements and to ensure that AI systems continue to meet the legal and ethical standards set forth by the government.
Transparency and record-keeping
Code-signing technology can be used to provide transparency and record-keeping for ADS. This can be used to store information on the auditing outcomes of the ADS, making reporting easy and verifiable and providing a level of accountability to users of these systems.
Conclusion
Creating auditing tools for ADS presents a significant opportunity to enhance equity, transparency, accountability, and compliance with legal and ethical standards. The federal government can play a crucial role in this effort by investing in the research and development of tools, developing guidelines, gathering stakeholders, and enforcing compliance. By taking these steps, the government can help ensure that ADS are developed and used in a manner that is safe, fair, and equitable.
Code signing is used to establish trust in code that is distributed over the internet or other networks. By digitally signing the code, the code signer is vouching for its identity and taking responsibility for its contents. When users download code that has been signed, their computer or device can verify that the code has not been tampered with and that it comes from a trusted source.
Code signing can be extended to all parts of the AI lifecycle as a means of verifying the authenticity, integrity, and function of a particular piece of code or a larger process. After each step in the auditing process, code signing enables developers to leave a well-documented trail for enforcement bodies/auditors to follow if a system were suspected of unfair discrimination or unsafe operation.
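The mechanics can be sketched in a few lines. The example below is a deliberately simplified stand-in: real code signing uses asymmetric keys and certificates (so verifiers never hold the signing key), whereas this sketch uses a shared-key HMAC from the Python standard library to show the same tamper-evidence idea. The artifact and key names are illustrative.

```python
import hashlib
import hmac

def sign(artifact: bytes, key: bytes) -> str:
    """Produce a signature over an artifact (e.g., a model file or
    an audit report). Simplified: real code signing is asymmetric."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify(artifact: bytes, signature: str, key: bytes) -> bool:
    """Check that the artifact matches the signature, i.e., that it
    has not been altered since signing."""
    return hmac.compare_digest(sign(artifact, key), signature)
```

At each audited step of the lifecycle, the developer would sign the artifact and its audit result; an enforcement body can later confirm that what was deployed is exactly what was audited.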
Code signing is not essential for this project’s success, and we believe that the specifics of the auditing process, including documentation, are best left to individual agencies and their needs. However, code signing could be a useful piece of any tools developed.
Additionally, there may be pushback on the tool design. It is important to remember that currently, engineers often use fairness tools at the end of a development process, as a last box to check, instead of as an integrated part of the AI development lifecycle. These concerns can be addressed by emphasizing the comprehensive approach taken and by developing the necessary guidance to accompany these tools—which does not currently exist.
Example #1: Healthcare
New York regulators are calling on UnitedHealth Group to either stop using or prove there is no problem with a company-made algorithm that researchers say exhibited significant racial bias. This algorithm, which UnitedHealth Group sells to hospitals for assessing the health risks of patients, assigned similar risk scores to white patients and Black patients despite the Black patients being considerably sicker.
In this case, researchers found that changing just one parameter could generate “an 84% reduction in bias.” If we had specific information on the parameters going into the model and how they are weighted, we would have a record-keeping system to see how certain interventions affected the output of this model.
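A record-keeping system of this kind could be as simple as logging each model configuration alongside the bias metric it produced, so auditors can trace which parameter change drove a change in bias. The sketch below is hypothetical (function names are our own, and the numbers merely echo the 84% figure from the study for illustration).

```python
# Hypothetical audit log pairing model parameters with a measured bias score.
audit_log = []

def record_run(params: dict, bias_score: float) -> None:
    """Log one model configuration and the bias metric it produced."""
    audit_log.append({"params": params, "bias": bias_score})

def bias_reduction(before: int, after: int) -> float:
    """Percent reduction in bias between two logged runs."""
    b = audit_log[before]["bias"]
    a = audit_log[after]["bias"]
    return 100 * (b - a) / b
```

With such a log, a regulator could see, for example, that switching the model's target variable coincided with a large drop in the measured bias score, rather than having to reconstruct that history after the fact.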
Bias in AI systems used in healthcare could potentially violate the Constitution’s Equal Protection Clause, which prohibits discrimination on the basis of race. If the algorithm is found to have a disproportionately negative impact on a certain racial group, this could be considered discrimination. It could also potentially violate the Due Process Clause, which protects against arbitrary or unfair treatment by the government or a government actor. If an algorithm used by hospitals, which are often funded by the government or regulated by government agencies, is found to exhibit significant racial bias, this could be considered unfair or arbitrary treatment.
Example #2: Policing
A UN panel on the Elimination of Racial Discrimination has raised concern over the increasing use of technologies like facial recognition in law enforcement and immigration, warning that it can exacerbate racism and xenophobia and potentially lead to human rights violations. The panel noted that while AI can enhance performance in some areas, it can also have the opposite effect as it reduces trust and cooperation from communities exposed to discriminatory law enforcement. Furthermore, the panel highlights the risk that these technologies could draw on biased data, creating a “vicious cycle” of overpolicing in certain areas and more arrests. It recommends more transparency in the design and implementation of algorithms used in profiling and the implementation of independent mechanisms for handling complaints.
A case study on the Chicago Police Department's Strategic Subject List (SSL) discusses an algorithm-driven technology used by the department to identify individuals at high risk of being involved in gun violence and inform its policing strategies. However, a study by the RAND Corporation on an early version of the SSL found that it was not successful in reducing gun violence or the likelihood of victimization, and that inclusion on the SSL had a direct effect only on arrests. The study also raised significant privacy and civil rights concerns. Additionally, findings reveal that more than one-third of individuals on the SSL, approximately 70% of that cohort, have never been arrested or been a victim of a crime yet received a high-risk score. Furthermore, 56% of Black men under the age of 30 in Chicago have a risk score on the SSL. This demographic has also been disproportionately affected by the CPD's past discriminatory practices, including the torture of Black men between 1972 and 1994, unlawful stops and frisks disproportionately targeting Black residents, a pattern or practice of unconstitutional use of force, poor data collection, and systemic deficiencies in training, supervision, and accountability whose effects fall disproportionately on Black and Latino residents.
Predictive policing, which uses data and algorithms to try to predict where crimes are likely to occur, has been criticized for reproducing and reinforcing biases in the criminal justice system. This can lead to discriminatory practices and violations of the Fourth Amendment’s prohibition on unreasonable searches and seizures, as well as the Fourteenth Amendment’s guarantee of equal protection under the law. Additionally, bias in policing more generally can also violate these constitutional provisions, as well as potentially violating the Fourth Amendment’s prohibition on excessive force.
Example #3: Recruiting
ADS in recruiting crunch large amounts of personal data and, given some objective, derive relationships between data points. The aim is to use systems capable of processing more data than a human ever could to uncover hidden relationships and trends that will then provide insights for people making all types of difficult decisions.
Hiring managers across different industries use ADS every day to aid in the decision-making process. In fact, a 2020 study reported that 55% of human resources leaders in the United States use predictive algorithms across their business practices, including hiring decisions.
For example, employers use ADS to screen and assess candidates during the recruitment process and to identify best-fit candidates based on publicly available information. Some systems even analyze facial expressions during interviews to assess personalities. These systems promise organizations a faster, more efficient hiring process. ADS do theoretically have the potential to create a fairer, qualification-based hiring process that removes the effects of human bias. However, they also possess just as much potential to codify new and existing prejudice across the job application and hiring process.
The use of ADS in recruiting could potentially violate several federal laws, including antidiscrimination laws such as Title VII of the Civil Rights Act of 1964 and the Americans with Disabilities Act. These laws prohibit workplace discrimination on the basis of race, gender, disability, and other protected characteristics. Additionally, these systems could potentially violate job applicants' privacy and due process rights. If the systems are found to be discriminatory or to violate these laws, employers could face legal action.
Supporting Historically Disadvantaged Workers through a National Bargaining in Good Faith Fund
Black, Indigenous, and other people of color (BIPOC) are underrepresented in labor unions. Further, people working in the gig economy, tech supply chain, and other automation-adjacent roles face a huge barrier to unionizing their workplaces. These roles, which are among the fastest-growing segments of the U.S. economy, are overwhelmingly filled by BIPOC workers. In the absence of safety nets for these workers, the racial wealth gap will continue to grow. The Biden-Harris Administration can promote racial equity and support low-wage BIPOC workers’ unionization efforts by creating a National Bargaining in Good Faith Fund.
As a whole, unions lift up workers to a better standard of living, but historically they have failed to protect workers of color. The emergence of labor unions in the early 20th century was propelled by the passing of the National Labor Relations Act (NLRA), also known as the Wagner Act of 1935. Although the NLRA was a beacon of light for many working Americans, affording them the benefits of union membership such as higher wages, job security, and better working conditions, which allowed many to transition into the middle class, the protections of the law were not applied to all working people equally. Labor unions in the 20th century were often segregated, and BIPOC workers were often excluded from the benefits of unionization. For example, the Wagner Act excluded domestic and agricultural workers and permitted labor unions to discriminate against workers of color in other industries, such as manufacturing.
Today, in the aftermath of the COVID-19 pandemic and amid a renewed interest in a racial reckoning in the United States, BIPOC workers—notably young and women BIPOC workers—are leading efforts to organize their workplaces. In addition to demanding wage equity and fair treatment, they are also fighting for health and safety on the job. Unionized workers earn on average 11.2% more in wages than their nonunionized peers. Unionized Black workers earn 13.7% more and unionized Hispanic workers 20.1% more than their nonunionized peers. But every step of the way, tech giants and multinational corporations are opposing workers’ efforts and their legal right to organize, making organizing a risky undertaking.
A National Bargaining in Good Faith Fund would provide immediate and direct financial assistance to workers who have been retaliated against for attempting to unionize, especially those from historically disadvantaged groups in the United States. This fund offers a simple and effective solution to alleviate financial hardships, allowing affected workers to use the funds for pressing needs such as rent, food, or job training. It is crucial that we advance racial equity, and this fund is one step toward achieving that goal by providing temporary financial support to workers during their time of need. Policymakers should support this initiative as it offers direct payments to workers who have faced illegal retaliation, providing a lifeline for historically disadvantaged workers and promoting greater economic justice in our society.
Challenges and Opportunities
The United States faces several interlocking challenges. First is our rapidly evolving economy, which threatens to displace millions of already vulnerable low-wage workers due to technological advances and automation. The COVID-19 pandemic accelerated automation, which is a long-term strategy for the tech companies that underpin the gig economy. According to a report by an independent research group, self-driving taxis are likely to dominate the ride-hailing market by 2030, potentially displacing 8 million human drivers in the United States alone.
Second, we have a generation of workers who have not reaped the benefits associated with good-paying union jobs due to decades of anti-union activities. As of 2022, union membership has dropped from more than 30% of wage and salary workers in the private sector in the 1950s to just 6.3%. The declining percentage of workers represented by unions is associated with widespread and deep economic inequality, stagnant wages, and a shrinking middle class. Lower union membership rates have contributed to the widening of the pay gap for women and workers of color.
Third, historically disadvantaged groups are overrepresented in nonunionized, low-wage, app-based, and automation-adjacent work. This is due in large part to systemic racism. These structures adversely affect BIPOC workers’ ability to obtain quality education and training, create and pass on generational wealth, or follow through on the steps required to obtain union representation.
Workers face tremendous opposition to unionization efforts from companies that spend hundreds of millions of dollars and use retaliatory actions, disinformation, and other intimidating tactics to stop them from organizing a union. For example, in New York, Black organizer Chris Smalls led the first successful union drive in a U.S. Amazon facility after the company fired him for his activities and made him a target of a smear campaign against the union drive. Smalls’s story is just one illustration of how BIPOC workers are in the middle of the collision between automation and anti-unionization efforts.
The recent surge of support for workers’ rights is a promising development, but BIPOC workers face challenges that extend beyond anti-union tactics. Employer retaliation is also a concern. Workers targeted for retaliation suffer from reduced hours or even job loss. For instance, a survey conducted at the beginning of the COVID-19 pandemic revealed that one in eight workers perceived possible retaliatory actions by their employers against colleagues who raised health and safety concerns. Furthermore, Black workers were more than twice as likely as white workers to experience such possible retaliation. This sobering statistic is a stark reminder of the added layers of discrimination and economic insecurity that BIPOC workers have to navigate when advocating for better working conditions and wages.
The time to enact strong policy supporting historically disadvantaged workers is now. Advancing racial equity and racial justice is a focus for the Biden-Harris Administration, and the political and social will is evident. The day one Biden-Harris Administration Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government seeks to develop policies designed to advance equity for all, including people of color and others who have been historically underinvested in, marginalized, and adversely affected by persistent poverty and inequality. Additionally, the establishment of the White House Task Force on Worker Organizing and Empowerment is a significant development. Led by Vice President Kamala Harris and Secretary of Labor Marty Walsh, the Task Force aims to empower workers to organize and negotiate with their employers through federal government policies, programs, and practices.
A key focus for the Task Force is to increase worker power in underserved communities by examining and addressing the challenges faced by workers in jurisdictions with restrictive labor laws, marginalized workers, and workers in certain industries. The Task Force is well-timed, given the increased support for workers’ rights demonstrated through the record-high number of petitions filed with the National Labor Relations Board and the rise in strikes over the past two years. The Task Force’s approach to empowering workers and supporting their ability to organize and negotiate through federal government policies and programs offers a promising opportunity to address the unique challenges faced by BIPOC workers in unionization efforts.
The National Bargaining in Good Faith Fund is a critical initiative that can help level the playing field by providing financial assistance to workers facing opposition from employers who refuse to engage in good-faith bargaining, thereby expanding access to unions for Black, Indigenous, and other people of color. In addition, the proposed initiative would reinforce Equal Employment Opportunity Commission (EEOC) and National Labor Relations Board (NLRB) policies regarding employer discrimination and retaliation. The Bargaining in Good Faith Fund will provide direct payments to workers whose employers have retaliated against them for engaging in union organizing activities. The initiative also includes monitoring cases where a violation has occurred against workers involved in union organization and connecting their bargaining unit with relevant resources to support their efforts. With the backing of the Task Force, the fund could make a significant difference in the lives of workers facing barriers to organizing.
Plan of Action
While the adoption of a policy like the Bargaining in Good Faith Fund is unprecedented at the federal level, we draw inspiration from successful state-level initiatives aimed at improving worker well-being. Two notable examples are initiatives enacted in California and New York, where state lawmakers provided temporary monetary assistance to workers affected by the COVID-19 pandemic. Taking a cue from these successful programs, we can develop federal policies that better support workers, especially those belonging to historically disadvantaged groups.
The successful implementation of worker-led, union-organized, and community-led strike assistance funds, as well as similar initiatives for low-wage, app-based, and automation-adjacent workers, indicates that the Bargaining in Good Faith Fund has strong potential for success. For example, the Coworker Solidarity Fund provides legal, financial, and strategic support for worker-activists organizing to improve their companies, while the fund invests in ecosystems that increase worker power and improve economic livelihoods and social conditions across the U.S. South.
New York state lawmakers have also set a precedent with their transformative Excluded Workers Fund, which provided direct financial support to workers left out of pandemic relief programs. The $2.1 billion Excluded Workers Fund, passed by the New York state legislature and governor in April 2021, was the first large-scale program of its kind in the country. By examining and building on these successes, we can develop federal policies that better support workers across the country.
A national program requires multiple funding methods, and several mechanisms have been identified to establish the National Bargaining in Good Faith Fund. First, existing policy needs to be strengthened, and companies violating labor laws should face financial consequences. The labor law violation tax, which could be a percentage of a company's profits or revenue, would be directed to the Bargaining in Good Faith Fund. Additionally, penalties could be imposed on companies that engage in retaliatory behavior, with the funds generated also directed to the Bargaining in Good Faith Fund. Enacting these financial consequences would require new legislation from Congress.
Second, as natural allies in the fight to safeguard workers’ rights, labor unions should allocate a portion of their dues toward the fund. By pooling their resources, a portion of union dues could be directed to the federal fund.
Third, a portion of the fees paid into the federal unemployment insurance program should be redirected to the Bargaining in Good Faith Fund.
Fourth, existing funding for worker protections, currently siloed in agencies, should be reallocated to support the Bargaining in Good Faith Fund more effectively. To qualify for the fund, workers receiving food assistance and/or Temporary Assistance for Needy Families benefits should be automatically eligible once the NLRB and the EEOC recognize the instance of retaliation. Workers who are not eligible could apply directly to the Fund through a state-appointed agency. This targeted approach aims to support those who face significant barriers to accessing resources and protections that safeguard their rights and well-being due to historical labor exploitation and discrimination.
Several federal agencies could collaborate to oversee the Bargaining in Good Faith Fund, including the Department of Labor, the EEOC, the Department of Justice, and the NLRB. These agencies have the authority to safeguard workers’ welfare, enforce federal laws prohibiting employment discrimination, prosecute corporations that engage in criminal retaliation, and enforce workers’ rights to engage in concerted activities for protection, such as organizing a union.
Conclusion
The federal government has had a policy of supporting worker organizing and collective bargaining since the passage of the National Labor Relations Act in 1935. However, the federal government has not fully implemented its policy over the past 86 years, resulting in negative impacts on BIPOC workers, who face systemic racism in the unionization process and on the job. Additionally, rapid technological advances have resulted in the automation of tasks and changes in the labor market that disproportionately affect workers of color. Consequently, the United States is likely to see an increase in wealth inequality over the next two decades.
The Biden-Harris Administration can act now to promote racial equity by establishing a National Bargaining in Good Faith Fund to support historically disadvantaged workers in unionization efforts. Because this is a pressing issue, a feasible short-term solution is to initiate a pilot program over the next 18 months. It is imperative to establish a policy that acknowledges and addresses the historical disadvantage experienced by these workers and supports their efforts to attain economic equity.
For example, in 2019, the city of Evanston, Illinois, established a fund to provide reparations to Black residents who can demonstrate that they or their ancestors have been affected by discrimination in housing, education, and employment. The fund is financed by a three percent tax on the sale of recreational marijuana and is intended to provide financial assistance for housing, education, and other needs.
Another example is the proposed H.R. 40 bill in the U.S. Congress that aims to establish a commission to study and develop proposals for reparations for African Americans who are descendants of slaves and who have been affected by slavery, discrimination, and exclusion from opportunities. The bill aims to study the impacts of slavery and discrimination and develop proposals for reparations that would address the lingering effects of these injustices, including the denial of education, housing, and other benefits.
The Civil Rights Act of 1964, which banned discrimination on the basis of race, color, religion, sex, or national origin, was challenged in court but upheld by the Supreme Court in 1964.
The Voting Rights Act of 1965, which aimed to eliminate barriers to voting for minorities, was challenged in court several times over the years; the Supreme Court upheld key provisions in 1966 but struck down the Act’s coverage formula in 2013.
The Fair Housing Act of 1968, which banned discrimination in housing, was challenged in court and upheld by the Supreme Court in 1968.
Affirmative action policies, which aim to increase the representation of minorities in education and the workforce, have been challenged in court multiple times over the years, with the Supreme Court upholding the use of race as a factor in college admissions in 2016.
Many complex federal policies aimed at promoting racial equity have been challenged in court over the years, and not only on constitutional grounds. Despite such challenges, policymakers must persist in bringing forth solutions that advance racial equity.
Ensuring Racial Equity in Federal Procurement and Use of Artificial Intelligence
In pursuit of lower costs and improved decision-making, federal agencies have begun to adopt artificial intelligence (AI) to assist in government decision-making and public administration. As AI occupies a growing role within the federal government, algorithmic design and evaluation will increasingly become a key site of policy decisions. Yet a 2020 report found that almost half (47%) of all federal agency use of AI was externally sourced, with a third procured from private companies. In order to ensure that agency use of AI tools is legal, effective, and equitable, the Biden-Harris Administration should establish a Federal Artificial Intelligence Program to govern the procurement of algorithmic technology. Additionally, the AI Program should establish a strict data collection protocol around the collection of race data needed to identify and mitigate discrimination in these technologies.
Researchers who study and conduct algorithmic audits highlight the importance of race data for effective anti-discrimination interventions, the challenges of category misalignment between data sources, and the need for policy interventions to ensure accessible and high-quality data for audit purposes. However, inconsistencies in the collection and reporting of race data significantly limit the extent to which the government can identify and address racial discrimination in technical systems. Moreover, given significant flexibility in how their products are presented during the procurement process, technology companies can manipulate race categories in order to obscure discriminatory practices.
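To illustrate why auditors need group-level race data at all, consider the “four-fifths rule,” a common screen in disparate impact analysis: a group whose selection rate falls below 80% of the highest group’s rate is flagged for review. The sketch below uses fabricated approval counts and is not drawn from any agency’s actual audit methodology:

```python
# Minimal disparate-impact screen using the four-fifths (80%) rule.
# Without per-group counts -- i.e., without race data -- this check
# cannot be computed at all. Counts below are fabricated.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag any group whose rate is below `threshold` of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

outcomes = {
    "Group A": (480, 1000),  # 48% approval rate
    "Group B": (300, 1000),  # 30% approval rate
}
flags = four_fifths_flags(outcomes)
# Group B's rate (0.30) is 62.5% of Group A's (0.48), below the 80%
# threshold, so Group B is flagged for further review.
```

The same arithmetic is only as trustworthy as the race categories behind the counts, which is why consistent collection and reporting standards matter for enforcement.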
To ensure that the AI Program can evaluate any inequities at the point of procurement, the Office of Science and Technology Policy (OSTP) National Science and Technology Council Subcommittee on Equitable Data should establish guidelines and best practices for the collection and reporting of race data. In particular, the Subcommittee should produce a report that identifies the minimum level of data private companies should be required to collect and in what format they should report such data during the procurement process. These guidelines will facilitate the enforcement of existing anti-discrimination laws and help the Biden-Harris Administration pursue their stated racial equity agenda. Furthermore, these guidelines can help to establish best practices for algorithm development and evaluation in the private sector. As technology plays an increasingly important role in public life and government administration, it is essential not only that government agencies are able to access race data for the purposes of anti-discrimination enforcement—but also that the race categories within this data are not determined on the basis of how favorable they are to the private companies responsible for their collection.
Challenge and Opportunity
Research suggests that governments often have little information about key design choices in the creation and implementation of the algorithmic technologies they procure. Often, these choices are not documented or are recorded by contractors but never provided to government clients during the procurement process. Existing regulation provides specific requirements for the procurement of information technology, for example, security and privacy risks, but these requirements do not account for the specific risks of AI—such as its propensity to encode structural biases. Under the Federal Acquisition Regulation, agencies can only evaluate vendor proposals based on the criteria specified in the associated solicitation. Therefore, written guidance is needed to ensure that these criteria include sufficient information to assess the fairness of AI systems acquired during procurement.
The Office of Management and Budget (OMB) defines minimum standards for collecting race and ethnicity data in federal reporting. Racial and ethnic categories are separated into two questions with five minimum categories for race data (American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, and White) and one minimum category for ethnicity data (Hispanic or Latino). Despite these standards, guidelines for the use of racial categories vary across federal agencies and even across specific programs. For example, the Census Bureau classification scheme includes a “Some Other Race” option not used in other agencies’ data collection practices. Moreover, guidelines for collection and reporting of data are not always aligned. For example, the U.S. Department of Education recommends collecting race and ethnicity data separately without a “two or more races” category and allowing respondents to select all race categories that apply. However, during reporting, any individual who is ethnically Hispanic or Latino is reported as only Hispanic or Latino and not any other race. Meanwhile, any respondent who selected multiple race options is reported in a “two or more races” category rather than in any racial group with which they identified.
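The collection-versus-reporting mismatch described above can be made concrete with a small sketch. The function below is a hypothetical rendering of the Department of Education reporting rule as described, not an official implementation:

```python
# Illustrates how a "select all that apply" collection scheme is
# collapsed at reporting time: Hispanic or Latino ethnicity overrides
# any race selections, and multiple race selections collapse into a
# "Two or more races" bucket. This mapping is a sketch of the rule
# described in the text, not official Department of Education code.

def to_reporting_category(hispanic_or_latino, races):
    if hispanic_or_latino:
        return "Hispanic or Latino"      # race selections are discarded
    if len(races) > 1:
        return "Two or more races"       # specific identities are lost
    return races[0] if races else "Unknown"

# A respondent who identifies as both Hispanic and Black is reported
# only as "Hispanic or Latino"; the race information collected from
# them never appears in the reported data.
category = to_reporting_category(True, ["Black or African American"])
```

Each branch of the function is a point where information gathered at collection time disappears from the reported data, which is exactly the kind of misalignment the Subcommittee’s guidelines would need to resolve.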
These inconsistencies are exacerbated in the private sector, where companies are not uniformly constrained by the same OMB standards but rather covered by piecemeal legislation. In the employment context, private companies are required to collect and report on demographic details of their workforce according to the OMB minimum standards. In the consumer lending setting, on the other hand, lenders are typically not allowed to collect data about protected classes such as race and gender. In cases where protected class data can be collected, these data are typically considered privileged information and cannot be accessed by the government. In the case of algorithmic technologies, companies are often able to discriminate on the basis of race without ever explicitly collecting race data by using features or sets of features that act as proxies for protected classes. Facebook’s advertising algorithms, for instance, can be used to target race and ethnicity without access to race data.
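A toy calculation shows how proxy discrimination works: even when a model never receives race as an input, group-level disparities can be produced, and recovered, through a correlated feature such as zip code. All numbers below are fabricated for illustration:

```python
# Toy illustration of proxy discrimination: a model that never sees
# race can still yield racially disparate outcomes when a feature it
# does use (here, zip code) is correlated with race. All values are
# fabricated; zip codes are placeholders.

zip_stats = {
    "00001": {"protected_share": 0.85, "approval_rate": 0.30},
    "00002": {"protected_share": 0.10, "approval_rate": 0.55},
    "00003": {"protected_share": 0.70, "approval_rate": 0.35},
    "00004": {"protected_share": 0.15, "approval_rate": 0.50},
}

def group_rate(stats, protected):
    """Estimate a group's approval rate by weighting zip-level rates
    by that group's share of each zip code's population."""
    weights = [(s["protected_share"] if protected else 1 - s["protected_share"])
               for s in stats.values()]
    rates = [s["approval_rate"] for s in stats.values()]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

protected_rate = group_rate(zip_stats, protected=True)
other_rate = group_rate(zip_stats, protected=False)
# The approval gap persists even though "race" was never a model input.
```

This is also why prohibiting the collection of race data does not prevent discrimination: the disparity survives in the proxy, while the data needed to detect it does not.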
Federal leadership can help create consistency in reporting to ensure that the government has sufficient information to evaluate whether privately developed AI is functioning as intended and working equitably. By reducing information asymmetries between private companies and agencies during the procurement process, new standards will bring policymakers back into the algorithmic governance process. This will ensure that democratic and technocratic norms of agency rule-making are respected even as privately developed algorithms take on a growing role in public administration.
Additionally, by establishing a program to oversee the procurement of artificial intelligence, the federal government can ensure that agencies have access to the necessary technical expertise to evaluate complex algorithmic systems. This expertise is crucial not only during the procurement stage but also—given the adaptable nature of AI—for ongoing oversight of algorithmic technologies used within government.
Plan of Action
Recommendation 1. Establish a Federal Artificial Intelligence Program to oversee agency procurement of algorithmic technologies.
The Biden-Harris Administration should create a Federal AI Program to create standards for information disclosure and enable evaluation of AI during the procurement process. Following the two-part test outlined in the AI Bill of Rights, the proposed Federal AI Program would oversee the procurement of any “(1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”
The goals of this program will be to (1) establish and enforce quality standards for AI used in government, (2) enforce rigorous equity standards for AI used in government, (3) establish transparency practices that enable public participation and political accountability, and (4) provide guidelines for AI program development in the private sector.
Recommendation 2. Produce a report to establish what data are needed in order to evaluate the equity of algorithmic technologies during procurement.
To support the AI Program’s operations, the OSTP National Science and Technology Council Subcommittee on Equitable Data should produce a report to establish guidelines for the collection and reporting of race data that balances three goals: (1) high-quality data for enforcing existing anti-discrimination law, (2) consistency in race categories to reduce administrative burdens and curb possible manipulation, and (3) prioritizing the needs of groups most affected by discrimination. The report should include opportunities and recommendations for integrating its findings into policy. To ensure the recommendations and standards are instituted, the President should direct the General Services Administration (GSA) or OMB to issue guidance and request that agencies document how they will ensure new standards are integrated into future procurement vehicles. The report could also suggest opportunities to update or amend the Federal Acquisition Regulations.
High-Quality Data
The new guidelines should make efforts to ensure the reliability of race data furnished during the procurement process. In particular:
- Self-identification should be used whenever possible to ascertain race. As of 2021, Food and Nutrition Service guidance recommends against the use of visual identification based on reliability, respect for respondents’ dignity, and feedback from Child and Adult Care Food Program and Summer Food Service Program participants.
- The new guidelines should attempt to reduce missing data. People may be reluctant to share race information for many legitimate reasons, including uncertainty about how personal data will be used, fear of discrimination, and not identifying with predefined race categories. These concerns can severely impact data quality and should be addressed to the extent possible in the OMB guidelines. New York’s state health insurance marketplace saw a 20% increase in response rate for race by making several changes to the way they collect data. These changes included explaining how the data would be used and not allowing respondents to leave the question blank but instead allowing them to select “choose not to answer” or “don’t know.” Similarly, the Census Bureau found that a single combined race and ethnicity question improved data quality and consistency by reducing the rate of “some other race,” missing, and invalid responses as compared with two separate questions (one for race and one for ethnicity).
- The new guidelines should follow best practices established through rigorous research and feedback from a variety of stakeholders. In June 2022, the OMB announced a formal review process to revise Statistical Policy Directive No. 15: Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity. While this review process is intended for the revision of federal data requirements, its findings can help inform best practices for collection and reporting requirements for nongovernmental data as well.
Consistency in Data Reporting
Whenever possible and contextually appropriate, the guidelines for data reporting should align with the OMB guidelines for federal data reporting to reduce administrative burdens. However, the report may find that other data is needed that goes beyond the OMB guidelines for the evaluation of privately developed AI.
Prioritizing the Needs of Affected Groups
In their Toolkit for Centering Racial Equity Throughout Data Integration, the Actionable Intelligence for Social Policy group at the University of Pennsylvania identifies best practices for ensuring that data collection serves the groups most affected by discrimination. In particular, this toolkit emphasizes the need for strong privacy protections and stakeholder engagement. In their final report, the Subcommittee on Equitable Data should establish protocols to secure data and for carefully considered role-based access to it.
The final report should also engage community stakeholders in determining which data should be collected and establish a plan for ongoing review that engages with relevant stakeholders, prioritizing affected populations and racial equity advocacy groups. The report should evaluate the appropriate level of transparency in the AI procurement process, in particular, trade-offs between desired levels of transparency and privacy.
Conclusion
Under existing procurement law, the government cannot outsource “inherently governmental functions.” Yet key policy decisions are embedded within the design and implementation of algorithmic technology. Consequently, it is important that policymakers have the necessary resources and information throughout the acquisition and use of procured AI tools. A Federal Artificial Intelligence Program would provide expertise and authority within the federal government to assess these decisions during procurement and to monitor the use of AI in government. In particular, this would strengthen the Biden-Harris Administration’s ongoing efforts to advance racial equity. The proposed program can build on both long-standing and ongoing work within the federal government to develop best practices for data collection and reporting. These best practices will not only ensure that the public use of algorithms is governed by strong equity and transparency standards in the public sector but also provide a powerful avenue for shaping the development of AI in the private sector.
Algorithmic Transparency Requirements for Lending Platforms Using Automated Decision Systems
Now is the time to ensure that lending models offered by private companies are fair and transparent. Access to affordable credit greatly affects quality of life and can shape housing choice. Over the past decade, algorithmic decision-making has increasingly impacted the lives of American consumers. It is therefore important to ensure that all forms of algorithmic underwriting are open to review for fairness and transparency, as inequities may appear either in access to funding or in credit terms. A recent report released by the U.S. Treasury Department speaks to the need for more oversight in the FinTech market.
Challenge and Opportunity
The financial services sector, a historically non-technical industry, has recently and widely adopted automated platforms. Financial technology (“FinTech”) platforms, operated by private companies alone or in partnership with banks and credit unions, offer financial products and services directly to consumers. These platforms use algorithms that are non-transparent but directly affect Americans’ ability to obtain affordable financing. Financial institutions (FIs) and mortgage brokers use predictive analytics and artificial intelligence to evaluate candidates for mortgage products, small business loans, and unsecured consumer products. Some lenders underwrite personal loans such as auto loans, personal unsecured loans, credit cards, and lines of credit with artificial intelligence. Although loans that are not government-securitized receive less scrutiny, access to credit for personal purposes impacts the debt-to-income ratios and credit scores necessary to qualify for homeownership or the global cash flow of a small business owner. Historic Home Mortgage Disclosure Act (HMDA) data and studies on small business lending demonstrate that disparate access to mortgages and small business loans occurs. This scenario will not be improved through unaudited automated decision variables, which can create feedback loops that hold the potential to scale inequities.
Forms of discrimination appear in credit approval software and can hinder access to housing. Lorena Rodriguez writes extensively about the current effect of technology on lending laws regulated by the Fair Housing Act of 1968, pointing out that algorithms have incorporated alternative credit scoring models into their decision trees. These newly selected variables have no place in determining someone’s creditworthiness: inputs include factors like social media activity, bank account balances, college of attendance, and retail spending habits.
Traditional credit scoring models, although cumbersome, are understandable to the typical consumer who takes the time to learn how to impact their credit score. Unlike those models, however, lending platforms can incorporate data variables with no requirement to disclose the models that drive their decisions. In other words, a consumer may never understand why their loan was approved or denied, because the models are not disclosed. At the same time, it may be unclear which consumers are being solicited for financing opportunities, and lenders may target financially vulnerable consumers for profitable but predatory loans.
Transparency around lending decision models is more necessary now than ever. The COVID-19 pandemic created financial hardship for millions of Americans. The Federal Reserve Bank of New York recently reported all-time highs in American household debt. In a rising interest rate environment, affordable and fair credit access will become even more critical to help households stabilize. Although artificial intelligence has been in use for decades, the general public is only recently beginning to realize the ethical impacts of its uses on daily life. Researchers have noted algorithmic decision-making has bias baked in, which has the potential to exacerbate racial wealth gaps and resegregate communities by race and class. While various agencies—such as the Consumer Financial Protection Bureau (CFPB), Federal Trade Commission (FTC), Financial Crimes Enforcement Network, Securities and Exchange Commission, and state regulators—have some level of authority over FinTech companies, there are oversight gaps. Although FinTechs are subject to fair lending laws, not enough is known about disparate impact or treatment, and regulation of digital financial service providers is still evolving. Modernization of policy and regulation is necessary to keep up with the current digital environment, but new legislation can address gaps in the market that existing policies may not cover.
Plan of Action
Three principles should guide policy implementation around FinTech: (1) research, (2) enforcement, (3) incentives. These principles balance oversight and transparency while encouraging responsible innovation by community development financial institutions (CDFIs) and charitable lenders that may lead to greater access to affordable credit. Interagency cooperation and the development of a new oversight body is critical because FinTech introduces complexity due to technical, trade, and financial services overlap.
Recommendation 1: Research. The FTC should commission a comprehensive, independent research study to understand the scope and impact of disparate treatment in FinTech lending.
To ensure equity, the study should be jointly conducted by a minimum of six research universities, of which at least two must be Historically Black Colleges and Universities. A $3.5 million appropriation will ensure a well-designed, multiyear study. A strong understanding of the FinTech landscape and its potential for disparate impact is necessary. Many consumers are not adequately equipped to articulate their challenges, except through complaints to agencies such as the Office of the Comptroller of the Currency (OCC) and the CFPB, and even then the burden is on the individual to know the channels of appeal. Anecdotal evidence suggests BIPOC borrowers and low-to-moderate-income (LMI) consumers may be targeted with predatory loans. For example, an LMI zip code may be targeted with FinTech ads for products carrying higher interest rates. Feedback loops in algorithms will continue to identify marginalized communities as higher risk, and a consumer of lesser means who is charged a comparatively higher interest rate will remain financially vulnerable under such extractive conditions.
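The feedback-loop dynamic the study would need to examine can be sketched with a toy simulation. Every parameter below is fabricated and the update rule is a deliberate simplification, not a model of any real lender:

```python
# Toy feedback loop (all parameters fabricated): a higher risk label
# raises the loan's price, the higher price raises observed defaults,
# and a model retrained on those defaults raises the label again.

def simulate(initial_risk, rounds=2):
    risk = initial_risk
    history = [risk]
    for _ in range(rounds):
        rate = 0.05 + 0.20 * risk         # risk label sets the price
        default_rate = 0.02 + 2.0 * rate  # costlier loans default more
        # "Retraining": blend the prior label with observed defaults,
        # scaled against an assumed 30% worst-case default rate.
        risk = min(1.0, 0.5 * risk + 0.5 * default_rate / 0.3)
        history.append(risk)
    return history

low = simulate(0.1)   # community initially labeled lower risk
high = simulate(0.4)  # community initially labeled higher risk
# The initial labeling gap widens each round: the model manufactures
# the evidence that "confirms" its own starting assumption.
```

The point of the sketch is not the specific numbers but the structure: once pricing responds to the label and the label responds to outcomes shaped by that pricing, the disparity is self-reinforcing, which is why an independent study needs access to the underlying decision models rather than outcomes alone.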
Recommendation 2: Enforcement. A suite of enforcement mechanisms should be implemented.
- FinTechs engaged in mortgage lending should be subject to Home Mortgage Disclosure Act (HMDA) reporting on lending activity and Community Reinvestment Act (CRA) examination. When a bank utilizes a FinTech, a vendor CRA assessment should be incorporated into the bank’s own examination process. Credit unions should also be required to produce FinTech vendor CRA exams during their examination process. CRA and HMDA requirements would encourage FinTechs to make sure they are lending broadly.
- Congress should codify FinTechs’ role as the “true lender” whenever a FinTech’s underwriting model is used by an FI partner, to clarify FinTechs’ responsibility under all applicable state, local, and federal interest rate caps, fair lending laws, and related requirements, as well as their liability when they do not meet existing standards. Federal regulatory agency guidelines must also be updated to clarify the shared responsibility of a bank or credit union’s FinTech partner when a FinTech underwriting model violates UDAAP or fair lending guidelines.
- A previously proposed OCC FinTech charter should be adopted but made optional. However, when a FinTech chooses to adopt the OCC charter, the charter should give FinTechs interstate privileges covered under the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994. This provision should also require FinTechs to fulfill state licensing requirements in each state in which they operate, eliminating their current ability to bypass licensing by partnering with regulated FIs.
- Companies engaged in any financing activity or providing automated lending software to regulated FIs must be required to disclose decision models to the FI’s examiner upon request. FinTech data disclosure must not be limited to federally secured loans such as small business or mortgage loans but include secured and unsecured loan products made to consumers such as auto, personal, and small dollar loans. When consumers obtain a predatory product in these categories, the loans can have a severe impact on debt-to-income/back-end ratios and credit scores of borrowers, preventing them from obtaining homeownership or causing them to receive less favorable terms.
Recommendation 3: Incentives. Develop an ethical FinTech certification that designates a FinTech as a responsible lender, modeled on the U.S. Treasury’s CDFI certification.
The certification can sit with the U.S. Treasury and should create incentives for FinTechs demonstrated to be responsible lenders in forms such as grant funding, procurement opportunities, or tax credits. To create this certification, FI regulatory agencies, with input from the FTC and National Telecommunications and Information Administration, should jointly develop an interagency menu of guidelines that dictate acceptable parameters for what criteria may be input into an automated decision model for consumer lending. Guidelines should also dictate what may not be used in a lending model (example: college of attendance). Exceptions to guidelines must be documented, reviewed, and approved by the oversight body after being determined to be a legitimate business necessity.
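One way such guidelines might be operationalized is as an allow/deny list that an examiner checks a lender’s declared model inputs against. The category names and the helper below are hypothetical illustrations, not proposed regulatory text:

```python
# Hypothetical sketch of enforcing an interagency "acceptable criteria"
# list against a lender's declared model inputs. All category names
# are illustrative assumptions, not actual regulatory guidance.

ALLOWED_CRITERIA = {"income", "debt_to_income", "payment_history",
                    "loan_amount", "collateral_value"}
PROHIBITED_CRITERIA = {"college_of_attendance", "social_media_activity",
                       "retail_spending_habits"}

def review_model_inputs(declared_inputs, approved_exceptions=frozenset()):
    """Return the declared inputs that require documented exceptions:
    explicitly prohibited criteria, plus anything not on the allowed
    list (unknown features may be proxies and also need review)."""
    violations = set()
    for feature in declared_inputs:
        if feature in approved_exceptions:
            continue
        if feature in PROHIBITED_CRITERIA or feature not in ALLOWED_CRITERIA:
            violations.add(feature)
    return violations

flagged = review_model_inputs({"income", "college_of_attendance", "zip_code"})
# Flags "college_of_attendance" (prohibited) and "zip_code" (unlisted).
```

The `approved_exceptions` parameter mirrors the memo’s requirement that any deviation from the guidelines be documented, reviewed, and approved by the oversight body before use.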
Conclusion
Now is the time to provide policy guidance that will prevent disparate impact and harm to minority, BIPOC, and other traditionally marginalized communities as a result of algorithmically informed biased lending practices.
Yes, but the CFPB’s general authority to do so is regularly challenged as a result of its independent structure. It is not clear if its authority extends to all forms of algorithmic harm, as its stated authority to regulate FinTech consumer lending is limited to mortgage and payday lending. UDAAP oversight is also less clear, as it pertains to nonregulated lenders. Additionally, the CFPB has the authority to regulate institutions over $10 billion. Many FinTechs operate below this threshold, leaving oversight gaps. Fair lending guidance through financial technology must be codified apart from the CFPB, although some oversight may continue to rest with the CFPB.
Precedent is currently being set for regulation of small business lending data through the CFPB’s enforcement of Section 1071 of the Dodd-Frank Act. Regulation will require financial disclosure of small business lending data. Other government programs, such as the CDFI fund, currently require transaction-level reporting for lending data attached to federal funding. Over time, private company vendors are likely to develop tools to support reporting requirements around lending. Data collection can also be incentivized through mechanisms like certifications or tax credits for responsible lenders that are willing to submit data.
The OCC has proposed a charter for FinTechs that would subject them to regulatory oversight (see policy recommendation). Other FI regulators have adopted various versions of FinTech oversight. Oversight for FinTech-insured depository partnerships should remain with a primary regulatory authority for the depository with support from overarching interagency guidance.
A new regulatory body with enforcement authority and congressional appropriations would be ideal, since FinTech is a unique form of lending that touches issues that impact consumer lending, regulation of private business, and data privacy and security.
This argument is often used by payday lenders that offer products with egregious, predatory interest rates. Not all forms of access to credit are responsible forms of credit. Unless a FinTech operates as a charitable lender, its goal is profit maximization—which does not align well with consumer protection. In fact, research indicates financial inclusion promises in FinTech fall short.
Many private lenders are regulated: Payday lenders are regulated by the CFPB once they reach a certain threshold. Pawn shops and mortgage brokers are subject to state departments for financial regulation. FinTechs also have the unique potential to have a different degree of harm because their techniques of automation and algorithmic evaluation allow for scalability and can create reinforcing feedback loops of disparate impact.
Creating Equitable Outcomes from Government Services through Radical Participation
Government policies, products, and services are created without the true and full design participation and expertise of the people who will use them: the public, including citizens, refugees, and immigrants. As a result, the government often replicates private sector anti-patterns, using or producing oppressive, disempowering, and colonial policies through products and services that embody bias, limit access, create real harm, and discriminate against underutilized communities on the basis of various identities, violating the President’s Executive Order on Equity. Examples include life-altering police use of racially and sexually biased facial recognition products, racial discrimination in the delivery of life-saving Medicaid services and SNAP benefits, and racist child welfare service systems.
The Biden-Harris Administration should issue an executive order to embed Radical Participatory Design (RPD) into the design and development of all government policies, products, and services, and to require all federally funded research to use Radical Participatory Research (RPR). Using an RPD and RPR approach makes the Executive Order on Racial Equity, the Executive Order on Transforming the Customer Experience, and the Executive Order on DEIA more likely to succeed. Using RPD and RPR as the implementation strategy is an opportunity to create equitable social outcomes by embodying equity in policy, product, and service design (Executive Order on Racial Equity), to improve the public’s customer experience of the government (Executive Order on Transforming the Customer Experience, President’s Management Agenda Priority 2), and to lead to a new and more just, equitable, diverse, accessible, and inclusive (JEDAI) future of work for the federal government (Executive Order on DEIA).
Challenge and Opportunity
The technology industry is disproportionately white and male. Compared to private industry overall, white people, men, and Asian people are overrepresented, while Latinx people, Black people, and women are underrepresented. Only 26% of technology positions in the United States are held by women, though women represent 57% of the U.S. workforce. Even worse, women of color hold just 4% of technology positions even though they are 16% of the population. Similarly, Black Americans are 14% of the population but hold 7% of tech jobs, and Latinx Americans hold only 8% of tech jobs while comprising 19% of the population. This underrepresentation grows even more pronounced in technology leadership roles. In FY2020, the federal government spent $392.1 billion contracting services, including services to build products; Latinx Americans, African Americans, Native Americans, and women are underrepresented in the contractor community.
The lack of diversity in designers and developers of the policies, products, and services we use leads to harmful effects like algorithmic bias, automatic bathroom water and soap dispensers that do not recognize darker skin, and racial bias in facial recognition (mis)identification of Black and Brown people.
With a greater expectation of equity from government services, the public experiences greater disappointment when government policies, services, and products are biased, discriminatory, or harmful. Examples include inequitable public school funding services, race and poverty bias in child welfare systems, and discriminatory algorithmic hiring systems used in government.
The federal government has tried to improve the experience of its products and services through methodologies like Human-centered Design (HCD). In HCD, the design process is centered on the community who will use the design, beginning with research interviews or observations. Beyond those research interactions with community members, designers are supposed to carry empathy for the community all the way through the design, development, and launch process. Unfortunately, given the aforementioned negative outcomes of government products and services for various communities, empathy is often absent, and whatever empathy is generated does not persist long enough to influence the design process. Ultimately, individual appeals to empathy are inadequate for generating systems-level change. Scientific studies show that white people, who make up the majority of technologists and policymakers, can have a reduced capacity for empathy for people of other and even similar backgrounds. As a result, the push for equity in government services, products, and policies remains, leading to President Biden's Executive Order on Advancing Racial Equity and Support for Underserved Communities and, later, the Executive Order on Further Advancing Racial Equity and Support for Underserved Communities.
The federal government lacks processes that embed empathy, reflecting the needs of community groups, throughout the lifecycle of policy, product, and service design. Instead of trying to build empathy in designers who have no experiential knowledge, we can create empathetic processes and organizations by embedding lived experience on the team.
Radical Participatory Design (RPD) is an approach to design in which the community members, for whom one is designing, are full-fledged members on the research, design, and development team. In traditional participatory design, designers engage the community at certain times and otherwise work, plan, analyze, or prepare alone before and after those engagements. In RPD, the community members are always there because they are on the team; there are no meetings, phone calls, or planning without them.
RPD has a few important characteristics. First, the community members are always present and leading the process. Second, the community members outnumber the professional designers, researchers, or developers. Third, the community members own the artifacts, outcomes, and narratives around the outcomes of the design process. Fourth, community members are compensated equitably as they are doing the same work as professional designers. Fifth, RPD teams are composed of a qualitatively representative sample (including all the different categories and types of people) of the community.
Embedding RPD in the government connects the government to a larger movement toward participatory democracy. Examples include the Philadelphia Participatory Design Lab, the Participatory City Making Lab, the Center for Lived Experience, the Urban Institute's participatory Resident Researchers, and Health and Human Services' "Methods and Emerging Strategies to Engage People with Lived Experience." Numerous case studies show the power of participatory design to reduce harm and improve design outcomes. RPD maximizes this power by infusing equity, as people with lived experience choose, check, and direct the process.
First, as the adoption of RPD increases across the federal government, the prevalence of harm, bias, trauma, and discrimination in government products and services will decrease, aiding the implementation of the executive orders on Advancing Racial Equity and Support for Underserved Communities and Further Advancing Racial Equity and Support for Underserved Communities, and upholding the OSTP AI Bill of Rights for AI products and services. Additionally, RPR aligns with OSTP's actions to advance open and equitable research. Second, the reduction of harm, discrimination, and trauma improves the customer experience (CX) of government services, aiding the implementation of the Executive Order on Transforming the Customer Experience, the President's Management Agenda Priority 2, and the CAP goal on Customer Experience. An improved CX will increase community adoption, use, and engagement with potentially helpful and life-supporting government services that underutilized people need. RPD highlights the important connection between equity and CX and creates a way to link the two executive orders: you cannot claim excellent CX when the CX is inequitable and entire underutilized segments of the public have a harmful experience.
Third, instead of seeking the intersection of business needs and user needs as in the private sector, RPD will move the country closer to its democratic ideals by equitably aligning the needs of the people with the needs of the government of the people, by the people, and for the people. There are various examples where the government acts like a separate entity completely unaligned with the will of a majority of the public (gun control, abortion). Project by project, RPD helps align the needs of the people and the needs of the government of the people when representative democracy does not function properly.
Fourth, all community members, from all walks of life, brought into government to do participatory research and design will gain or refine skills they can then use to stay in government policy, product, and service design or to get a job outside of government. The workforce outcomes of RPD further diversify policy, product, and service designers and researchers both inside and outside the federal government, aligning with the Executive Order on DEIA in the Federal Workforce.
Plan of Action
The use of RPD and RPR in government is the future of participatory government and a step towards truly embodying a government of the people. RPD must work at the policy level as well, as policy directs the creation of services, products, and research. Equitable product and service design cannot overcome inequitable and discriminatory policy. The following recommended actions are initial steps to embody participatory government in three areas: policy design, the design and development of products and services, and funded research. Because all three areas occur across the federal government, executive action from the White House will facilitate the adoption of RPD.
Policy Design
An executive order from the president should direct agencies to use RPD when designing agency policy. The order should establish a new Radical Participatory Policy Design Lab (RPPDL) for each agency with the following characteristics:
- Embodies a qualitatively representative sample of the public target audience impacted by the agency
- Includes a qualitatively representative sample of agency employees who are also impacted by agency policy
- Designs policy through this radical participatory design team
- Sets budget policy through participatory budgeting (Grand Rapids, NYC, Durham, and HUD examples)
- Assesses agency programs that affect the public through participatory appraisals and participatory evaluations
- Rotates community policy designers into and out of the lab on six-month renewable terms
- Compensates community policy designers equitably for their time
- Offers community policy designers jobs to stay in government based on their policy experience, or, through the office that houses the RPPDL, assists them in finding roles outside of government based on their experience and desire
- Creates an RPD authorization to allow government policy employees to compensate community members outside of a grant, cooperative agreement, or contract (GSA TTS example)
The executive order should also create a Chief Experience Officer (CXO) for the U.S. as a White House role. The Office of the CXO (OCXO) would coordinate CX work across the government in accordance with the Executive Order on Transforming the CX, the PMA Priority 2, the CX CAP goal, and the OMB Circular A-11 280. The executive order would focus the OCXO on coordinating, approving, and advising on RPD work across the federal government, including the following initiatives:
- Improve the public experience of high-impact, trans-agency journeys by managing the Office of Management and Budget (OMB) life experience projects
- Facilitate a CXO council of all federal agency CXOs.
- Advise various agency CXOs and other officials on embedding RPD into their policy, service, and product design and development work.
- Work with agencies to recruit and create a list of civil society organizations who are willing to help recruit community members for RPD and RPR projects.
- Recruit RPD public team members and coordinate the use of RPD in the creation of White House policy.
- Coordinate with the director of OMB and the Equitable Data Working Group to create
- an equity measure of the social outcomes of the government’s products, services, and policies,
- a public CX measurement of the entire federal government.
- Serve as a member of the White House Steering Committee on Equity established by the Executive Order on Further Advancing Equity.
- Serve as a member of the Equitable Data Working Group established by the Executive Order on Advancing Racial Equity.
- Strategically direct the work of the OCXO in order to improve the equity and CX metrics.
- Embed equity measures in the CX measurement and data reporting template required by the OMB Circular A-11 280. CX success requires healthy, equitable CX across various subgroups, including underutilized communities, connecting the Executive Order on Transforming the CX to the Executive Order on Advancing Racial Equity.
- Update the OMB Circular A-11 280’s CX Capacity Assessment tool and the Action Plan template to include equity as a central component.
- Evaluate and assess the utilization of RPD in policy, product, and service design by agencies across the government.
Due to the distributed nature of the work, the funding for the various RPPDLs and the OCXO should come from money the director of OMB has identified and added to the budget the President submits to Congress, according to Section 6 of the Executive Order on Advancing Racial Equity. Agencies should also utilize money appropriated for the Agency Equity Teams required by the Executive Order on Further Advancing Racial Equity.
Product and Service Design
The executive order should mandate that all research, design, and delivery of agency products and services for the public be done through RPR and RPD. RPD should be used for both in-house work and work contracted through grants, contracts, or cooperative agreements.
On in-house projects, funding for the RPD team should come from the project budget; for grants, contracts, and cooperative agreements, it should come from the acquisition budget. As a result, labor costs will increase, since there are more designers on the project, and the non-labor component of the project budget will shrink. A slightly lower non-labor project budget is worth the outcome of improved equity, and agency offices can compensate by requesting a slightly higher project budget for in-house or contracted design and development services.
In support of the Executive Order on Transforming the CX, the PMA Priority 2, and the CX CAP goal, OMB should amend the OMB Circular A-11 280 to direct High Impact Service Providers (HISPs) to utilize RPD in their service work.
- HISPs must embed RPD in their product and service research, design, development, and delivery.
- HISPs must include an equity component in their CX Capacity Assessment and CX Action Plan in line with guidance from the CXO of the U.S.
- Following applicable laws, HISPs should let customers volunteer demographic information during customer experience data collection in order to assess the CX of various subgroups.
- Agency annual plans should include both CX and equity indicator goals.
- Equity assessment data and CX data for various subgroups and underutilized communities must be reported in the OMB-mandated data dashboard.
- Each agency should create an RPD authorization to allow government employees and in-house design teams to compensate community members outside of a grant, cooperative agreement, or contract (GSA TTS example).
OSTP should add RPD, along with RPD case studies, as example practices in OSTP's AI Bill of Rights. RPD should be listed as a practice that can affect and reinforce all five principles.
Funded Research
The executive order should also mandate that all government-funded, use-inspired research that is about communities, or is intended to be used by people or communities, be done through RPR. To determine whether a particular intended research project is use-inspired, the government funding agency should ask the following questions prior to soliciting researchers:
- For technology research, is the technology readiness level (TRL) 2 or higher?
- Is the research about people or communities?
- Is the research intended to be used by people or communities?
- Is the research intended to create, design, or guide something that will be used by people and communities?
If the answer to any of the questions is yes, the funding agency should require the funded researchers to use an RPR approach.
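The screening rule above is simple enough to express as a checklist. As a purely illustrative sketch (the field names and function below are hypothetical, not part of any agency system), the "any yes triggers RPR" logic might look like:

```python
# Illustrative sketch only: encoding the four screening questions a funding
# agency could apply before soliciting researchers. All names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedResearch:
    technology_readiness_level: Optional[int]  # None if not technology research
    about_people_or_communities: bool          # Q2
    intended_for_use_by_people: bool           # Q3
    guides_something_people_will_use: bool     # Q4

def requires_rpr(project: ProposedResearch) -> bool:
    """Return True if any of the four screening questions is answered 'yes'."""
    # Q1: for technology research, is the TRL 2 or higher?
    trl_triggers = (
        project.technology_readiness_level is not None
        and project.technology_readiness_level >= 2
    )
    return any([
        trl_triggers,
        project.about_people_or_communities,
        project.intended_for_use_by_people,
        project.guides_something_people_will_use,
    ])

# Basic-science research: TRL 1, not about or for communities -> no RPR mandate
basic = ProposedResearch(1, False, False, False)
# A community-health app study: TRL 4, about and for communities -> RPR required
applied = ProposedResearch(4, True, True, True)
```

The point of the sketch is that the criteria are disjunctive: a single "yes" is sufficient, so an agency reviewer need not evaluate all four questions once one triggers.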
Funding for the RPR team comes from the research grant itself. Researchers can use the RPR requirement to estimate how much funding to request in the proposal.
OSTP should add RPR and the executive order to its list of actions to advance open and equitable research. RPR should be listed as a key initiative of the Year of Open Science.
Conclusion
In order to address inequity, the public’s lived experience should lead the design and development process of government products and services. Because many of those products and services are created to comply with government policy, we also need lived experience to guide the design of government policy. Embedding Radical Participatory Design in government-funded research as well as policy, products, and services reduces harm, creates equity, and improves the public customer experience. Additionally, RPD connects and embeds equity in CX, moves us toward our democratic ideals, and creatively addresses the future of work by diversifying our policy, product, and service design workforce.
Because we do not physically hold digital products, the line between a software product and a software service is thin. Usually, a product is an offering or part of an offering that involves one interaction or touchpoint with a customer. In contrast, a service involves multiple touchpoints both online and offline, or multiple interactions both digital and non-digital.
For example, Google Calendar can be considered a digital product. A product designer for Google Calendar might work on designing its software interface, colors, options, and flows. However, a library is a service. As a library user, you might look for a book on the library website. If you can’t find it, you might call the library. The librarian might ask you to come in. You go in and work with the librarian to find the book. After realizing it is not there, the librarian might then use a software tool to request a new book purchase. Thus, the library service involved multiple touchpoints, both online and offline: a website, a phone line, an in-person service in the physical library, and an online book procurement tool.
Most of the federal government’s offerings are services. Examples like Medicare, Social Security, and veterans benefits involve digital products, in-person services in a physical building, paper forms, phone lines, email services, etc. A service designer designs the service and the mechanics behind the service in order to improve both the customer experience and the employee experience across all touchpoints, offline and online, across all interactions, digital and non-digital.
Participatory design (PD) has many interpretations. Sometimes PD simply means interviewing research participants: because interviewees are "participants," the work is deemed participatory. Sometimes PD means a specific activity or method that is participatory. Sometimes practitioners use PD to mean a way of doing an activity; for example, we can run a design studio session with just designers, or we can invite some community members to take part in a 90-minute session. PD can also indicate a methodology: either a collection of methods and activities, or a guiding philosophy or set of principles that helps you choose a particular method or activity at a particular point in a process.
In all the above ways of interpreting PD, there are times when the community is present and times when they are not. Moreover, the community members are never leading the process.
Radical comes from the Latin word "radix," meaning root. RPD means design in which the community participates "to the root": fully, completely, from beginning to end. There is no planning, and there are no meetings or phone calls, where the community is not present, because the community is the team.
Peer review is similar to an Institutional Review Board (IRB). A participatory version of this could be called a Community Review Board (CRB). The difficulty is that a CRB can only reject a research plan; a CRB does not create the proposed research plans. Because a CRB does not ensure that great research plans are created and proposed, it can only reduce harm. It cannot create good.
Equality means treating people the same. Equity means treating people differently to achieve equal outcomes. CRBs achieve equality only in approving power, by equally including community members in the approving process. CRBs fail to achieve equity in the social outcomes of products and services because community members are missing from the research plan creation process, the research plan implementation process, and the development process of policy, products, and services, where inequity can enter. To achieve equal outcomes (equity), community members' lived experiential knowledge is needed throughout the entire process, and especially in deciding what to propose to a CRB.
Still, a CRB can be a preliminary step before RPR. Unfortunately, IRBs are only required for U.S. government-funded research with human subjects, and in practice that requirement is not interpreted to apply to design research for policy, products, and services, even though such research usually includes human subjects. Applying participatory CRBs to approve all research, including design research for policy, products, and services, can be an initial step or a pilot.
A good analogy is that of cooking. It is quite helpful for everyone to know how to cook. Most of us cook in some capacity. Yet, there are people who attend culinary school and become chefs or cooks. Has the fact that individual people can and do cook eliminated the need for chefs? No. Chefs and cooks are useful for various situations – eating at a restaurant, catering an event, the creation of cookbooks, lessons, etc.
The main idea is that chefs have mainstream institutional knowledge learned from books and universities or cooking schools. But that is not the only type of knowledge. There is also lived, experiential knowledge, as well as community, embodied, relational, energetic, intuitive, aesthetic, and spiritual knowledge. It is common to meet amazing chefs who have never been to culinary school but simply learned to cook through the lived experience of experimentation and of having to cook every day for X people. Some learned to cook through relational and community knowledge passed down in their culture through parents, mothers, and aunties. Sometimes, famous chefs will go and learn the knowledge of a particular culture from people who never attended culinary school. The chefs will appropriate that knowledge and then create and sell a cookbook marketing a fusion cuisine infused with the culture whose culinary knowledge they appropriated.
Similarly, everyone designs. It is not enough to be tech-savvy or an innovation and design expert. The most important knowledge to have is the lived experiential, community, relational, and embodied knowledge of the people for whom we are designing. When lived experience leads, the outcomes are amazing. Putting lived experience alongside professional designers can be powerful as well. Professional designers are still needed, as their knowledge can help improve the design process. Professionals just cannot lead, work alone, or be the only knowledge base, because then inequity enters the system more easily.
To realize the ambitions of this policy proposal, full-time teams will be needed. The RPPDL roles designing policy are full-time due to the amount and various levels of policy to design. For products and services, however, some RPD teams may be part-time. For example, improving an existing product or service may be one of many projects a government team is conducting; if the team is only working on the project 50% of the time, it may only require a group of part-time community members. On the other hand, designing and developing a greenfield product or service that does not yet exist may require full-time work from RPD team members. Full-time projects will need full-time community members; for part-time projects, community members can work on multiple projects to reach full-time capacity.
Team members can receive non-monetary compensation like a gift card, wellness services, or child care. However, it is best practice to allow the community member to choose. Most will choose monetary compensation like grants, stipends, or cash payments.
Ultimately, they should be paid at a level equal to that of the mainstream institutional experts (designers and developers) who are being paid to do the same work alongside the community members. Remember to compensate community members for travel and child care when needed.
RPD is an opportunity for the government to lead the way. The private sector can make money without equitably serving everyone, so it has little incentive to do so. Nonprofits do not carry the level of influence the federal government carries, and the federal government has more money to engage in this work than state or local governments. The federal government has a mandate to be equitable in its products and services and their delivery, and if this work goes well, the government can pass a law mandating that private and nonprofit organizations do the same work to transform. The government also has a long history of using policy and services to discriminate against various underutilized groups, so the federal government should be the first to use RPD to move toward equity. Ultimately, the federal government has a huge influence on the lives of citizens, immigrant residents, and refugees, and the opportunity to move us toward equity is great.
Embedding RPD in government products and services should also be done at the state and local level. Each level will require different memos due to the different mechanics, budgets, dynamics, and policies. The hope is that RPD work at the federal government can help spark RPD work at various state, local, and county governments.
Possible first steps include:
- Mandate that all use-inspired research, including design research for policy, products, and services, be reviewed by a Community Review Board (CRB) for approval.
- If not approved, the research, design, and development cannot move forward.
- Only mandate that all government-funded, use-inspired research be conducted using RPR. Focusing on research funding alone shifts the payment of RPR community teams to grant recipients only.
- Mandate all government-funded, use-inspired research use RPR and all contracted research, design, development, and delivery of government products and services uses RPD.
- Focusing on research funding and contracted product and service work shifts the payment of RPR and RPD community team members to the grant recipients, vendors, and contract partners.
- Choose a pilot agency, like NIH, to start.
- Start with all HISPs instead of all federal government agencies.
- Use RPD and RPR as the implementation strategy for implementing only the Executive Order on Transforming the Customer Experience, which focuses on the HISPs.
- Start with a high-profile set of projects such as the OMB life experience projects.
- Then, later, advance to an entire pilot agency.
- Focus on embedding equity measures in CX.
- After equity is embedded in CX, choose a pilot agency, benchmark equity and CX, pilot RPD, and measure the change attributable to RPD. This allows time to build more evidence.
There are many existing case studies of participatory design.
- Decolonizing Participatory Design: Memory Making in Namibia
- Toward a more just library: Participatory design with Native American students
- Crossing Methodological Borders: Decolonizing Community-Based Participatory Research
- Different eyes/open eyes
- A Case Study Measuring the Impact of a Participatory Design Intervention on System Complexity and Cycle Time in an Assemble-to-Order System
There are also case studies of participatory design in the public sector.
In modern product and service development, products and services never settle into an operations-and-maintenance phase alone. They are continually being researched, designed, and developed due to continuous changes in human expectations, migration patterns, technology, human preferences, globalization, etc. If community members were left out of research, design, and development work after a service or product launched, the service or product would no longer be designed and developed using an RPD approach. As long as the service or product is active and in service, radical participation in its continuous research, design, and development is needed.
Protecting Civil Rights Organizations and Activists: A Policy Addressing the Government’s Use of Surveillance Tools
In the summer of 2020, some 15 to 26 million people across the country participated in protests against the tragic killings of Black people by law enforcement officers, making it the largest protest movement in U.S. history. In response, local and state government officials and federal agencies deployed surveillance tools on protesters in an unprecedented way. The Department of Homeland Security used aerial surveillance on protesters across 15 cities, and several law enforcement agencies engaged in social media monitoring of activists. But there is still a lot the public does not know, such as what other surveillance tactics were used during the protests, where this data is being stored, and for what future purpose.
Government agencies have for decades secretly used surveillance tactics on individual activists, such as during the 1950s when the FBI surveilled human rights activists and civil rights organizations. These tactics have had a detrimental effect on political movements, causing people to forgo protesting and activism out of fear of such surveillance. The First Amendment protects freedom of speech and the right to assemble, but allowing government entities to engage in underground surveillance tactics strips people of these rights.
It also damages people’s Fourth Amendment rights. Instead of agencies relying on the court system to get warrants and subpoenas to view an individual’s online activity, today some agencies are entering into partnerships with private companies to obtain this information directly. This means government agencies no longer have to meet the bare minimum of having probable cause before digging into an individual’s private data.
This proposal offers a set of actions that federal agencies and Congress should implement to preserve the public’s constitutional rights.
- Federal agencies should disclose what technologies they are using, how they are using them, and the effect on civil rights. The Department of Justice should use this information to investigate agencies and ensure their practices aren't violating the public's civil rights.
- The Office of Science and Technology Policy and the Department of Justice should work with the Office of the Attorney General to revise the Attorney General Guidelines for the FBI.
- Congress should pass the Fourth Amendment Is Not For Sale Act.
- Congress should amend the Stored Communications Act of 1986 to compel companies to ensure user data isn’t sold to third parties who will then sell user data to government entities.
- Congress should pass border search exception legislation.
Challenges and Opportunities
Government entities have been surveilling activists and civil rights organizations since long before the 2020 protests. Between 1956 and 1971, the FBI engaged in surveillance tactics to disrupt, discredit, and destroy many civil rights organizations, such as the Black Panther Party, the American Indian Movement, and the Communist Party. Some of these tactics included illegal wiretaps, infiltration, misinformation campaigns, and bugs. This program was known as COINTELPRO, and the FBI's goal was to destroy organizations and activists whose political agendas it viewed as radical and as challenging "the existing social order." While the FBI didn't completely achieve this goal, its efforts did have detrimental effects on activist communities: members were imprisoned or killed for their activist work, and membership in organizations like the Black Panther Party significantly declined until the party eventually dissolved in 1982.
After COINTELPRO was revealed to the public, reforms were put in place to curtail the FBI's surveillance tactics against civil rights organizations, but those reforms were rolled back after the September 11 attacks. Since 9/11, it has been revealed, mostly through FOIA requests, that the FBI has surveilled the Muslim community, Occupy Wall Street, Standing Rock protesters, protesters of the murder of Freddie Gray, Black Lives Matter protests, and more. Today, the FBI has more technological tools at its disposal that make mass surveillance and data collection on activist communities incredibly easy.
In 2020, people across the country used social media sites like Facebook to increase engagement and turnout in local Black Lives Matter protests. The FBI's Joint Terrorism Task Forces responded by visiting people's homes and workplaces to question them about their organizing, causing people to feel alarmed and terrified. U.S. Customs and Border Protection (CBP) also got involved, deploying a drone over Minneapolis to provide live video to local law enforcement. The Acting Secretary of CBP also tweeted that CBP was working with law enforcement agencies across the nation during the 2020 Black Lives Matter protests. CBP involvement in civil rights protests is incredibly concerning given its ability to circumvent the Fourth Amendment and conduct warrantless searches under the border search exception. (Federal regulations and federal law give CBP the authority to conduct warrantless searches and seizures within 100 miles of the U.S. border, where approximately two-thirds of the U.S. population resides.)
The longer government agencies are allowed to surveil people who are simply organizing for progressive policies, the more people will be terrified to voice their opinions about the state of affairs in the United States. This surveillance has had detrimental effects on people's First and Fourth Amendment rights and will continue to have even greater effects as technology improves and government entities gain access to more advanced tools. Now is the time for government agencies and Congress to act to prevent further abuse of the public's rights to protest and assemble. A country that uses tools to watch its residents will ultimately be a country with little to no civic engagement and the complete silencing of marginalized communities.
While there is ample opportunity to address mass surveillance and protect people’s constitutional rights, government officials have refused to address government surveillance for decades, despite public protest. In the few instances where government officials put up roadblocks to stop surveillance tactics, those roadblocks were later removed or weakened so as to allow the previous surveillance to continue. The lack of political will among members of Congress to address these issues has been a huge challenge for civil rights organizations and individuals fighting for change.
Plan of Action
Regulations need to be put in place to restrict federal agency use of surveillance tools on the public.
Recommendation 1. Federal agencies must disclose the technologies they are using to surveil individuals and organizations, as well as the frequency with which they use them. Agencies should publish this information on their websites and produce a more comprehensive report for the Department of Justice (DOJ) to review.
Every six months, Google releases the number of requests it receives from government agencies asking for user information. Google informs the public on the number of accounts that were affected by those requests and whether the request was a subpoena, search warrant, or other court order. The FBI also discloses the number of DNA samples it has collected from individuals in each U.S. state and territory and how many of those DNA samples aided in investigations.
Likewise, government agencies should be required to disclose the names of the technologies they are purchasing to surveil people in the United States as well as the number of times they use this technology within the year. Government entities should no longer be able to hide which technologies their departments are using to watch the public. People should be informed on the depth of the government’s use of these tools so they have a chance to voice their objections and concerns.
Federal agencies also need to publish a more comprehensive report for the DOJ to review. This report should include what technologies were used and where, what categories of organizations they were used against, the racial demographics of the people who were surveilled, and possible threats to civil rights. The DOJ will use this information to investigate whether agencies are violating the First or Fourth Amendment in using these technologies against the public.
Agencies may object to releasing this information because of the possibility of it interfering with investigations. However, Google does not release the names of individuals whose user information has been requested, and government agencies likewise would not be required to release information identifying specific individuals. Because no case-specific details would be disclosed to the public, this requirement would not affect agency investigations. This disclosure requirement is aimed at revealing what tools government agencies are using and giving the DOJ the opportunity to investigate whether those tools violate constitutional rights.
Recommendation 2. Attorney General Guidelines should be revised in collaboration with the White House Office of Science and Technology Policy (OSTP) and civil rights organizations that specialize in technology issues.
The FBI has used advanced technology to watch activists and protests with little to no government oversight or input from civil rights organizations. When conducting an investigation or assessment of an individual or organization, FBI agents follow the Attorney General Guidelines, which dictate how investigations should be conducted. Unfortunately, these guidelines do little to protect the public’s civil rights—and in fact contain a few provisions that are quite problematic:
- The FBI is able to conduct assessments, which don’t require a factual basis but only an “authorized purpose,” such as obtaining information on an organization or person believed to be involved in activities threatening national security or suspected of being the target of an attack.
- Physical surveillance can be used during an assessment for a limited time, but that period has been redacted in the guide so it’s not clear how long they can engage in this practice.
- FBI employees can conduct internet searches of “publicly available information” for an authorized purpose without having a lead, tip, referral, or complaint. FBI employees can also use online services to obtain publicly available information before the employee even decides to open an assessment or formal investigation. FBI employees are not required to seek supervisor approval beforehand.
These provisions are problematic for a few reasons. FBI employees should not be able to conduct assessments on individuals without a factual basis. Giving employees the power to pick and choose who they want to assess provides an opportunity for inherent bias. Instead, all assessments and investigations should have some factual basis behind them and receive approval from a supervisor. Physical surveillance and internet searches, likewise, should not be conducted by FBI agents without probable cause. Allowing these kinds of practices opens the entire public to having their privacy invaded.
These policies should be reviewed and revised to ensure that activists and organizations won’t be subject to surveillance due to internal bias. President Biden should issue an executive order directing OSTP to collaborate with the Office of the Attorney General on the guidelines. OSTP should have a task force dedicated to researching government surveillance and the impact on marginalized groups to guide them on this collaboration.
External organizations that are focused on technology and civil rights should also be brought in to review the final guidelines and voice any concerns. Civil rights organizations are more in tune with the effect that government surveillance has on their communities and the best mechanisms that should be put in place to preserve privacy rights.
Congress also should take steps to protect the public’s civil rights by passing the Fourth Amendment Is Not for Sale Act, revising the Stored Communications Act, and passing border exception legislation.
Recommendation 3. Congress should close the loophole that allows government agencies to circumvent the Fourth Amendment and purchase data from private companies by passing the Fourth Amendment Is Not for Sale Act.
In 2008, it was revealed that AT&T had entered into a voluntary partnership with the National Security Agency (NSA) from 2001 to 2008. AT&T built a room in its headquarters that was dedicated to providing the NSA with a massive quantity of internet traffic, including emails and web searches.
Today, AT&T has eight facilities that intercept internet traffic across the world and provide it to the NSA, allowing the agency to view people’s emails, phone calls, and online conversations. And the NSA isn’t the only federal agency partnering with private companies to spy on Americans. It was revealed in 2020 that the FBI has an agreement with Dataminr, a company that monitors people’s social media accounts, and Venntel, Inc., a company that purchases bulk location data and maps the movements of millions of people in the United States. These agreements were signed and modified after Black Lives Matter protests were held across the country.
Allowing government agencies to enter into agreements with private companies to surveil people gives them the ability to bypass the Fourth Amendment and spy on individuals with no restriction. Federal agencies no longer need to rely on the courts when seeking private communications and thoughts; they can now purchase sensitive information like a person’s location data and social media activity from a private company. Congress should end this practice and ban federal government agencies from purchasing people’s private data from third parties by passing the Fourth Amendment Is Not for Sale Act. If this bill were passed, government agents could no longer purchase location data from a data broker to figure out who was in a certain area during a protest, or partner with a company to obtain people’s social media postings, without going through the legal process.
Recommendation 4. Congress should amend the Stored Communications Act of 1986 (SCA) to compel electronic communication service companies to prove they are in compliance with the act.
The SCA prohibits companies that provide an electronic communication service from “knowingly” sharing their stored user data with the government. While data brokers are more than likely excluded from this provision, companies that provide direct services to the public such as Facebook, Twitter, and Snapchat are not. Because of this law, direct service companies aren’t partnering with government agencies to sell user information, but they are selling user data to third parties like data brokers.
There should be a responsibility placed on electronic communication service companies to ensure that the companies they sell user information to won’t sell data to government entities. Congress should amend the SCA to include a provision requiring companies to annually disclose who they sold user data to and whether they verified with the third party that the data will not be eventually sold to a government entity. Verification should require at minimum a conversation with the third party about the SCA provision and a signed agreement that the third party will not sell any user information to the government. The DOJ will be tasked with reviewing these disclosures for compliance.
Recommendation 5. Congress should pass legislation revoking the border search exception. As stated earlier, this exception allows federal agents to conduct warrantless searches and seizures within 100 miles of the U.S. border. It also allows federal agents to search and seize digital devices at the border without any level of suspicion that the traveler has committed a crime. CBP agents have pressured travelers to unlock their devices so agents could examine their contents, and have downloaded device data and stored it in a central database for up to 15 years.
While other law enforcement agencies are required to abide by the Fourth Amendment, federal agents at the border have been able to bypass it and conduct warrantless searches and seizures without restriction. If federal agents are allowed to continue operating outside the Fourth Amendment’s restrictions, we will likely see more instances of local law enforcement agencies calling on CBP to conduct surveillance operations on the general public during protests. This is an unconscionable amount of power to give to agencies, and it can lead, and already has led, to serious abuses of the public’s privacy rights. Congress must roll back this authority and require all law enforcement agencies—local, state, and federal—to have probable cause at a minimum before engaging in searches and seizures.
Conclusion
For too long, government agencies have been able to surveil individuals and civil rights organizations with little to no oversight. With the advancement of technology, their surveillance capabilities have grown tremendously, leading to near 24/7 surveillance. Regulations must be put in place to restrict the use of surveillance technologies by federal agencies, and Congress must pass legislation to protect the public’s constitutional rights.
The FBI operates under the jurisdiction of the DOJ and reports to the Attorney General. The Attorney General has been granted the authority under the U.S. Code and Executive Order 12333 to issue guidelines for the FBI to follow when it conducts domestic investigations. These are the Attorney General Guidelines.
This bill was introduced by Senators Ron Wyden, Rand Paul, and 18 others in 2021 to protect the public from having government entities purchase their personal information, such as location data, from private companies rather than going through the court system. Instead, the government would be required to obtain a court order before getting an individual’s personal information from a data broker. This would be a huge step toward protecting people’s private information and stopping mass government surveillance.