
In Honor of Patient Safety Day, Four Recommendations to Improve Healthcare Outcomes
Through partnership with the Doris Duke Foundation, FAS is working to ensure that rigorous, evidence-based ideas on the cutting edge of disease prevention and health outcomes are reaching decision makers in an effective and timely manner. To that end, we have been collaborating with the Strengthening Pathways effort, a series of national conversations held in spring 2025 to surface research questions, incentives, and overlooked opportunities for innovation with potential to prevent disease and improve outcomes of care in the United States. FAS is leveraging its skills in policy entrepreneurship, working with session organizers, to ensure that ideas surfaced in these symposia reach decision makers and drive impact in active policy windows.
On this World Patient Safety Day 2025, we share a set of recommendations that align with the Centers for Medicare and Medicaid Services (CMS) National Quality Strategy goal of zero preventable harm in healthcare. Developed with Patients for Patient Safety US, which co-led one of the Strengthening Pathways conversations this spring with the Johns Hopkins University Armstrong Institute for Patient Safety and Quality, the issue brief below outlines a bold, modernized approach that uses Artificial Intelligence technology to empower patients and drive change. FAS continues to explore the rapidly evolving AI and healthcare nexus.
Patient safety is an often-overlooked challenge in our healthcare systems. Whether safety events are caused by medical error, missed or delayed diagnoses, deviations from standards of care, or neglect, hundreds of billions of dollars and hundreds of thousands of lives are lost each year due to patient safety lapses in our healthcare settings. But most patient safety events are never captured, and clinicians lack the tools they need to learn from them and improve. Here we present four critical proposals for improving patient safety that are worthy of attention and action.
Challenge and Opportunity
Reducing patient death and harm from medical error surfaced as a U.S. public health priority at the turn of the century with the landmark National Academy of Sciences (NAS) report, To Err is Human: Building a Safer Health System (2000). Research shows that medical error is the third-leading cause of preventable death in the U.S. Analysis of Medicare claims data and electronic health records by the Department of Health and Human Services (HHS) Office of Inspector General (OIG), in a series of reports from 2008 to 2025, consistently finds that 25-30% of Medicare recipients experience harm events across multiple healthcare settings, from hospitals to skilled nursing facilities to long-term care hospitals to rehab centers. Research on the broader population finds similar rates for adult patients in hospitals. The most recent study on preventable harm in ambulatory care found that 7% of patients experienced at least one adverse event, with wide variation across clinical settings (1.8% to 23.6%). Improving diagnostic safety has emerged as the largest opportunity for patient harm prevention: new research estimates that 795,000 patients in the U.S. experience death or harm each year due to missed, delayed, or ineffectively communicated diagnoses. The annual cost to the health care system of preventable harm and its health care cascades is conservatively estimated to exceed $200 billion. This cost is ultimately borne by families and taxpayers.
In its National Quality Strategy, the Centers for Medicare and Medicaid Services (CMS) articulated an aspirational goal of zero preventable harm in healthcare. The National Action Alliance for Patient and Workforce Safety, now managed by the Agency for Healthcare Research and Quality (AHRQ), has set a goal of a 50% reduction in preventable harm by 2026. These goals cannot be achieved without a bold, modernized approach that uses AI technology to empower patients and drive change. Under-reporting of negative outcomes and patient harms keeps clinicians and staff from identifying and implementing solutions to improve care. In its latest analysis (July 2025), the OIG finds that fewer than 5% of medical errors are ever reported to the systems designed to gather insights from them. Hospitals failed to capture half of the harm events identified via medical record review, and even among captured events, few led to investigation or safety improvements. Only 16% of events required to be reported externally to CMS or state entities were actually reported, meaning critical oversight systems are missing safety signals entirely.
Multiple research papers over the last 20 years find that patients will report things that providers do not. But there has been no simple, trusted way for patient observations to reach the right people at the right time in a way that supports learning and improvement. Patients could be especially effective in reporting missed or delayed diagnoses, which often manifest across the continuum of care rather than in one healthcare setting or a single patient visit. The advent of AI systems provides an unprecedented opportunity to address patient safety and improve patient outcomes if we can improve the data available on the frequency and nature of medical errors. Here we present four ideas for improving patient safety.
Recommendation 1. Create AI-Empowered Safety Event Reporting and Learning System With and For Patients
The Department of Health and Human Services (HHS) can, through CMS, AHRQ or another HHS agency, develop an AI-empowered National Patient Safety Learning and Reporting System that enables anyone, including patients and families, to directly report harm events or flag safety concerns for improvement, including in real or near real time. Doing so would make sure everyone in the system has the full picture — so healthcare providers can act quickly, learn faster, and protect more patients.
This system will:
- Develop a reporting portal to collect, triage, and analyze patient-reported data directly from beneficiaries to improve patient and diagnostic safety.
- Redesign and modernize Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys to include questions that capture beneficiaries’ experiences and outcomes related to patient and diagnostic safety events.
- Redefine the Beneficiary and Family Centered Care Quality Improvement Organizations (BFCC QIO) scope of work to integrate the QIOs into the National Patient Safety Learning and Reporting System.
The learning system will:
- Use advanced triage (including AI) to distinguish high-signal events and route credible reports directly to the care team and oversight bodies that can act on them (see the sketch after this list).
- Solicit timely feedback and insights that help hospitals, clinics, and nursing homes prevent recurrence, as well as feedback over time on patient outcomes that manifest later, e.g., as a result of missed or delayed diagnoses.
- Protect patients and providers by focusing on efficacy of solutions, not blame assignment.
- Feed anonymized, interoperable data into a national learning network that will spot systemic risks sooner and make aggregated data available for transparency and system learning.
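To make the triage-and-routing step concrete, the sketch below shows, in simplified form, how a patient-submitted report might be scored and routed. The report fields, keyword heuristic, score thresholds, and routing destinations are hypothetical placeholders; a production system would rely on validated AI models, clinical review, and formal regulatory reporting channels rather than this toy logic.

```python
# Illustrative sketch only: toy triage and routing for patient-submitted safety reports.
from dataclasses import dataclass

@dataclass
class PatientReport:
    facility_id: str
    care_setting: str         # e.g. "hospital", "nursing_home", "ambulatory"
    description: str          # free-text account from the patient or family
    involves_diagnosis: bool  # patient flags a missed or delayed diagnosis

# Stand-in for an AI triage model: a few keyword cues plus a boost for
# diagnosis-related reports, mapped to a 0-1 priority score.
HIGH_SIGNAL_TERMS = {"wrong medication", "never examined", "sent home", "no follow-up"}

def triage_score(report: PatientReport) -> float:
    text = report.description.lower()
    hits = sum(term in text for term in HIGH_SIGNAL_TERMS)
    return min(0.2 + 0.2 * hits + (0.3 if report.involves_diagnosis else 0.0), 1.0)

def route(report: PatientReport) -> list[str]:
    """Send credible reports to the parties that can act on them."""
    score = triage_score(report)
    destinations = ["facility_care_team"]               # every report reaches the care team
    if score >= 0.6:
        destinations.append("patient_safety_officer")   # hypothetical escalation tier
    if score >= 0.8:
        destinations.append("external_oversight_body")  # e.g. CMS or a state agency
    return destinations

if __name__ == "__main__":
    report = PatientReport("FAC-001", "ambulatory",
                           "I was sent home without tests and there was no follow-up call.",
                           involves_diagnosis=True)
    print(round(triage_score(report), 2), route(report))
```

In a real deployment, the triage score would come from models trained and validated on labeled safety reports, and the routing rules would encode existing mandatory-reporting requirements.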
Recommendation 2. Create a Real-time ‘Patient Safety Dashboard’ using AI
HHS should build an AI-driven platform that integrates patient-reported safety data — including data from the new National Patient Safety Learning and Reporting System recommended above — with clinical data from electronic health records to create a real-time ‘patient safety dashboard’ for hospitals and clinics. This dashboard will empower providers to improve care in real time, and will:
- Help health care providers make accurate and timely diagnoses and avoid errors.
- Make patient reporting easy, effective, and actionable.
- Use AI to triage harm signals and detect systemic risk in real time.
- Build shared national infrastructure for healthcare reporting for all stakeholders.
- Align incentives to reward harm reduction and safety.
By harnessing the power of AI, providers will be able to respond faster, identify at-risk patients more effectively, and prevent harm, thereby improving outcomes. This “central nervous system” for patient safety will be deployed nationally to help detect safety signals in real time, connect information across settings, and alert teams before harm occurs.
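As a rough illustration of the dashboard’s signal-detection layer, the sketch below combines patient-reported events with EHR-derived events for each facility and flags any facility whose recent rate of harm signals is well above its own baseline. The field names, 30-day window, and doubling threshold are illustrative assumptions, not proposed specifications.

```python
# Illustrative sketch only: flag facilities whose combined harm-signal rate
# (patient reports + EHR-derived events) exceeds their historical baseline.

def harm_signal_rate(events: list[dict], facility_id: str, window_days: int,
                     admissions: int) -> float:
    """Harm signals per 100 admissions for one facility over a recent window."""
    recent = [e for e in events
              if e["facility_id"] == facility_id and e["days_ago"] <= window_days]
    return 100.0 * len(recent) / max(admissions, 1)

def flag_facilities(patient_reports: list[dict], ehr_events: list[dict],
                    admissions: dict[str, int],
                    baseline_rate: dict[str, float]) -> list[str]:
    """Return facilities whose combined 30-day rate is at least double their baseline."""
    combined = patient_reports + ehr_events   # both streams feed the same dashboard
    flagged = []
    for facility, count in admissions.items():
        rate = harm_signal_rate(combined, facility, window_days=30, admissions=count)
        if rate >= 2.0 * baseline_rate.get(facility, 1.0):
            flagged.append(facility)
    return flagged

if __name__ == "__main__":
    reports = [{"facility_id": "FAC-001", "days_ago": 3},
               {"facility_id": "FAC-001", "days_ago": 12}]
    ehr = [{"facility_id": "FAC-001", "days_ago": 7}]
    print(flag_facilities(reports, ehr, {"FAC-001": 50}, {"FAC-001": 2.0}))  # ['FAC-001']
```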
Recommendation 3. Mine Billing Data for Deviations from Standards of Care
Standards of care are guidelines that define the processes, procedures, and treatments that patients should receive in various medical and professional contexts. Standards ensure that individuals receive appropriate and effective care based on established practices. Most standards of care are developed and promulgated by medical societies. Not all clinicians and clinical settings adhere to standards of care, and some deviations are appropriate given the specifics of an individual case. Nonetheless, standards of care exist for a reason, and deviations should be noted when medical errors result in negative outcomes for patients, so that clinicians can learn from these outcomes and improve.
Some patient safety challenges are evident right in the billing data submitted to CMS and insurers. For example, deviations from standards of care can be detected by comparing the clinical diagnosis codes on a claim with the procedures and treatments billed, and then checking that combination against widely accepted standards of care. By analyzing CMS billing data in this way, the government could show variability in adherence to standards of care, drive their development, augmentation, and wider adoption, and in turn reduce medical error and improve outcomes for patients.
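A simplified sketch of how such a comparison might work appears below. The diagnosis and procedure codes are invented placeholders rather than real ICD-10 or CPT codes, and actual guideline logic would need to account for documented exceptions, contraindications, and timing.

```python
# Illustrative sketch only: compare billed procedures against guideline-expected
# procedures for each diagnosis to surface possible deviations from standards of care.

GUIDELINE_PROCEDURES = {
    # diagnosis code -> procedure codes a standard of care would expect to see billed
    "DX-CHEST-PAIN": {"PROC-ECG", "PROC-TROPONIN"},
}

def find_deviations(claims: list[dict]) -> list[dict]:
    """Flag claims where a diagnosis lacks the guideline-expected procedures."""
    deviations = []
    for claim in claims:
        expected = GUIDELINE_PROCEDURES.get(claim["diagnosis_code"], set())
        missing = expected - set(claim["procedure_codes"])
        if missing:
            deviations.append({"claim_id": claim["claim_id"],
                               "missing_procedures": sorted(missing)})
    return deviations

if __name__ == "__main__":
    sample = [{"claim_id": "C1", "diagnosis_code": "DX-CHEST-PAIN",
               "procedure_codes": ["PROC-ECG"]}]
    print(find_deviations(sample))  # [{'claim_id': 'C1', 'missing_procedures': ['PROC-TROPONIN']}]
```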
Giving standard setters real data to adapt and develop new standards of care is a powerful tool for improving patient outcomes.
Recommendation 4. Create a Patient Safety AI Testbed
HHS can also establish a Patient Safety AI Testbed to evaluate how AI tools used in diagnosis, monitoring, and care coordination perform in real-world settings. This testbed will ensure that AI improves safety, not just efficiency — and can be co-led by patients, clinicians, and independent safety experts. This is an expansion of the testbeds in the HHS AI Strategic Plan.
The Patient Safety Testbed could include:
- Funding for independent AI test environments to monitor real-world safety and performance over time.
- Public reliability benchmarks and “AI safety labeling” (see the sketch after this list).
- Required participation by AI vendors and provider systems.
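In its most minimal form, the benchmarking piece might resemble the sketch below, which scores a candidate AI tool’s binary harm-risk predictions against labeled test cases. The metrics and the labeling thresholds shown are illustrative assumptions, not proposed standards.

```python
# Illustrative sketch only: score an AI tool's harm-risk predictions against a
# labeled test set to produce reliability metrics for a public benchmark.
from typing import Callable

def benchmark(model: Callable[[dict], bool], cases: list[dict]) -> dict:
    """Compute sensitivity and specificity for binary harm-risk predictions."""
    tp = fp = tn = fn = 0
    for case in cases:
        predicted = model(case["inputs"])
        actual = case["label"]   # True if the case involved a confirmed harm event
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif not predicted and actual:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return {
        "sensitivity": round(sensitivity, 3),
        "specificity": round(specificity, 3),
        # Hypothetical thresholds for an "AI safety label"; not proposed standards.
        "meets_label_threshold": sensitivity >= 0.95 and specificity >= 0.80,
    }
```

Repeating such an evaluation over time, on fresh real-world cases, is what would let the testbed monitor performance drift rather than certify a tool once and never again.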
Conclusion
There are several key steps that the government can take to address the major loss of health, dollars, and lives due to medical errors, while simultaneously bolstering treatment guidelines, driving the development of new, transparent data, and holding the medical establishment accountable for improving care. We have presented four such proposals. None of them is particularly expensive when weighed against the tremendous savings they will drive throughout our healthcare system. We can only hope that the Administration’s commitment to patient safety is such that it will adopt them and usher in a new era in which caregivers, healthcare systems, and insurance payers work together to improve patient safety and care standards.