
Creating Transparency and Fairness in Automated Decision Systems for Administrative Agencies
Summary
Artificial intelligence is increasingly being used to make decisions about human welfare. Automated decision systems (ADS) administer U.S. social benefits programs—such as unemployment and disability benefits—across local, state, and federal governments. While ADS have the potential to deliver large gains in efficiency, they also run a high risk of reinforcing the class- and race-based inequities of the status quo. Moreover, the use of these systems is often not transparent, leaving individuals with no meaningful recourse after a decision has been made. Individuals may not even know that ADS played a role in the decision-making process.
The Federal Government should take immediate action to promote the transparency and accountability of automated decision systems. Agencies must build internal technical capacity as well as data cultures centered on transparency, accountability, and fairness. The White House should require agencies using ADS to undertake a notice-and-comment process that discloses information about these systems to the public. Finally, in the long term, Congress must pass comprehensive legislation implementing a single national standard that regulates the use of ADS across sectors and use cases.