Sacred Cows: What Did We Stop Questioning in Digital Government Delivery, but Should Now?
Over the past month, we’ve been running digital service retrospectives with more than 100 people from across the United States to help us design more effective services that lead to better outcomes for people across the country. You can read more about our project here.
We know that the biggest, boldest ideas can come from anywhere and we have deliberately taken a community-driven approach to our retrospective work. Our sessions are open to anyone who’s worked in government digital services — alums and current staff of USDS, 18F, TTS, and state and local teams who have spent years inside government trying to make things work — and we’ve given everyone a chance to contribute. The result is a lot to unpack: a huge amount of insight, frustration, pride, and hard-won experiences people shared with us. What struck us most was the tension: sometimes uncomfortable disagreement among people who had dedicated years to the same mission. That tension, it turns out, is exactly what this exercise is designed to surface.
When I landed at the US Digital Service, I was immediately inundated with stories, myths, mysteries, hero/villain narratives, and beliefs about what USDS does, how it does it, and why it works that way. Sometimes, the opinions were so strong that they came off as fact. And when I joined another agency, “the way things work” was approached as a set of truths written into legislation, rather than a series of decisions that accrued over time. Organizations built on destiny can be nearly impossible to improve — you can’t iterate on a legend, and you can’t refactor a belief.
Don’t get me wrong: much of the government — and likely most mission-driven organizations — behaves like this. And even those of us who were lucky enough to start new teams, or even new agencies, have played a part in institutional myth-making as well.
But where it gets limiting is when beliefs become fact, habits become silos, storytelling becomes rules, and we lose our ability to be a learning organization — and with it, the ability to evolve, challenge, and push forward.
So, we decided now was a good time to challenge the system of beliefs that makes up government digital capacity, along with the policies and approaches that felt unchallengeable. Let’s see what rules we can rewrite and beliefs we can reset: a few sacred cows are long overdue to be put out to pasture.
What we’re calling ‘sacred cows’
A sacred cow is a long-standing belief, structure, or practice that people hesitate to question — even when it may no longer be serving the mission. Surfacing these isn’t about blame. Most of these beliefs made sense at some point. Some of them were correct at the time.
The problem isn’t that these beliefs existed – it’s that they likely have become perceived facts and traveled. New people joined USDS or 18F or a state digital team and absorbed the orthodoxy without questioning it. It got baked into how teams were structured, how trust was defined, what counted as success. A field that was supposed to be about iteration stopped iterating on itself.
And now, we want to create permission to examine them with fresh eyes, so we can design something better.
We asked every group in our retros a version of the same question: What is something in digital service that feels unquestionable — but might benefit from fresh examination?
Here are some of our favorites:
“Delivery and Tech are a specialty team’s job”
We have spent fifteen years sending in outside digital teams to fix what agency leadership (including many IT leaders) didn’t understand well enough to build right in the first place. At some point the question becomes: what if we just built the competency into the system instead of relying on a “save the day” model? In 2026, all organizations are software organizations. Every Senior Executive Service-level leader making decisions about policy, procurement, service delivery, enforcement, and hiring is making technology decisions. These fields must become agile, responsive to users, multidisciplinary, and strategic — and conscious of the digital environment they operate in and create.
A common pattern is deferring delivery and technology matters to specialists, and we look forward to seeing how this space changes and matures. We know we are not alone in wanting to see civil service and personnel reforms, and we are excited to learn about future reforms recommended by colleagues like The Tech Talent Project and TechViaduct. We should expect more ambition and better delivery consciousness from our agency leaders in all roles. An operating model that relies on external fixes rather than internal competence will always be behind.
“Policy designed without users or implementation in mind can still be good policy”
This myth came up in every session, multiple times. No one wants to accept this shortcutting anymore.
Policy development and implementation should never be far from the citizens it would impact — its end-users — and the implementers who deliver those results. And yet too often, policy gets designed by academics, economists, and MPAs in isolation from the people who will have to build the system that makes it possible and the people who will have to live with its gaps or failures. Implementation and impact become an afterthought.
In many cases, this wasn’t accidental. The way digital services were scoped and deployed reinforced the myth — technologists arrived after the policy was written, the budget was allocated, and the flexibility was gone. By the time they were in the room, the important decisions had already been made without them.
One example of where tech and policy worked well together was Direct File — from the earliest days, USDS played an integral role in the policy development phase, including building early prototypes and user journeys throughout the formal policy process. Not only did the Direct File team carry that work forward into the product itself, but the multidisciplinary, cross agency relationships built during those early days supported the product through to its launch.
Today, the technology that connects the government to Americans, the data that tracks results, and the design that saves time and stress are not optional components of policy — they’re how policy actually reaches people. Policy that skips this loop doesn’t just launch slowly — it launches wrong. Moving forward, 100% of policy needs to be designed with technologists, designers, and data scientists at the table, as the necessary translators to end users.
“We can’t affect urgent change without damaging trust”
Building trust between digital teams and agency partners came up in almost every session, and almost everyone agreed it mattered. Where it got interesting was the question of who we build it with, why, and whether it can hold us back or slow us down when time is short.
It was widely acknowledged that building trust is critical to the success of launching scaled, utilized, and beloved digital services. We also had some really interesting discussions and debates around building trust inside of agencies. Trust was important — but alignment broke down on whether trust with agency partners is a core foundation, necessary as a form of permission, or a relationship built in practice through successful partnership. Trust can be invoked to justify waiting — or to avoid taking a position altogether. But it can also be the reason work doesn’t survive once the team leaves.
Trust gained through the work is different from trust gained in order to do the work. Moving forward, we need to acknowledge this and ensure that the relationships we build will still carry forward through – and beyond – delivery.
“Compliance is a proxy for quality control”
The Paperwork Reduction Act (a law meant to streamline how much data the government collects from the public) gets invoked constantly to stop user research before it starts. And the Authority to Operate process (the federal security review required before launching a system) is often a costly, never ending paperwork exercise that more often than not confuses compliance with security. Both exist for legitimate purposes, and both can be executed in ways that actually serve those purposes.
But when they’re invoked as the default answer to slow or stop work, the cost is real: user research doesn’t happen, services launch without being tested against the people who need them, and security theater replaces actual security.
Attempting to minimize burden on people and launching secure services are both good practices and, moving forward, we should look for transparent oversight that achieves this, while also ensuring the pace of delivery is not held back. We’re keen to see future recommendations from partners in the state capacity ecosystem on PRA reform and TechViaduct on oversight recommendations. When compliance becomes the goal instead of the means, the people the law was designed to protect are the ones who pay for it.
“We can’t have good tools”
I kind of hate myself for even writing this because it is so obvious and frustrating. But, here I am, repeating what everyone said during our sessions because we are still saying it.
Digital government professionals could use modern software to build, design, and launch great experiences… but the friction is so endless that it often isn’t worth asking. This plays out constantly: teams lack access to modern resources and tools that allow them to work, like design software, coding environments, or even mission-support tools like Slack. The result is teams spending time on workarounds instead of the work, and talented people leaving for environments where the tools aren’t a fight to obtain.
Somehow, “we don’t have access to the tools we need” became an accepted condition of the work. It is time to reassess that. Procurement and approval processes should happen centrally to ensure that they are safe, secure, and ready to deploy in all agencies without significant delay.
“The perfect team size exists!”
Teams should be big!
Teams should be small!
This was another topic that came up in every session, and there is no consensus on the right answer. The lack of consensus is actually the point — team size isn’t a principle, it’s a variable. Team size should be based on the problem you’re trying to solve or the outcome you want to achieve. Agency digital transformation cannot be realized with a USDS team of 2-4 people; that’s a gesture. Direct File had a product team of 75 (blended in-house and vendor, but majority government employees) — that wasn’t bloat, it was what the problem required. But myths about team size calcify fast. Once “small and agile” or “go big” becomes part of an organization’s legend, rightsizing stops being a decision and starts being a heresy.
Instead of assuming there’s a right size, identify the problem and build the appropriate team to solve it. Too often, building the team became the goal rather than the means — optimizing for org chart over outcomes. The team is not the product, the service is.
Try this at home
The Sacred Cows exercise is an effective tool for sparking debate and challenging our own assumptions as digital government professionals. This list could have been a lot longer, and if we want to rewrite the rules for better government, the more experiences we hear the better.
Do these resonate? What have we missed? Here were our prompts, but feel free to expand:
What is something in digital service that feels unquestionable — but might benefit from fresh examination?
- Operating models
- Talent assumptions
- Procurement habits
- Funding structures
- Risk tolerance
- Measures of success
Let’s see what rules we can rewrite and beliefs we can reset: a few digital service sacred cows are long overdue to be put out to pasture.