A National Program for Building Artificial Intelligence within Communities
Summary
While the United States is a global leader in Artificial Intelligence (AI) research and development (R&D), there is growing concern that this leadership may not last through the coming decade. China's massive, state-backed tech-investment programs have made the country a true competitor in the development and export of AI technologies. In response, the Federal Government has faced repeated calls, and has taken action, to step up its funding of fundamental and defense AI research. Yet maintaining our status as a global leader in AI will require more than a focus on fundamental and defense research. As a matter of domestic policy, we must also attend to the growing chasm separating advances in state-of-the-art AI techniques from the effective and responsible adoption of AI across American society and the economy.
To address this chasm, the Biden-Harris Administration should establish an applied AI research program within the National Institute of Standards and Technology (NIST) to help community-serving organizations tackle the technological and ethical challenges involved in developing AI systems. This new NIST program would fill a key domestic policy gap in our nation's AI R&D strategy by addressing the growing obstacles and uncertainty confronting AI integration, while broadening the reach of AI as a tool for economic and social betterment nationwide. Program funding would be devoted to research projects co-led by AI researchers and community-based practitioners who would ultimately oversee and operate the AI technology. Research teams would be tasked with co-designing and evaluating an AI system in light of the specific challenges faced by community institutions. Specific areas poised to benefit from this unique multi-stakeholder, cross-sectoral approach to AI development include healthcare, municipal government, and social services.