Packaging Semiconductors at the Doorstep to Disney
This interview is part of an FAS series on Unleashing Regional Innovation, where we talk to leaders building the next wave of innovative clusters and ecosystems in communities across the United States. We aim to spotlight their work and help other communities learn by example. Our first round of interviews is with finalists of the Build Back Better Regional Challenge, run by the Economic Development Administration as part of the American Rescue Plan.
BRIDG is a not-for-profit public-private partnership located in Osceola County, Florida, providing semiconductor R&D and production capabilities to industry and government, enabled by a versatile 200mm microelectronics fabrication facility (or ‘fab’). As the anchor tenant of an emerging 500-acre technology district known as NeoCity, BRIDG operates the fab with a focus on heterogeneous integration and advanced packaging process technologies, and on the development of a secure manufacturing methodology leveraging digital twins and other Industry 4.0 concepts.
Supported by Osceola County, the Florida High Tech Corridor Council, imec, SkyWater Technology (operator of the Center for Neovation), and others, BRIDG connects challenges and opportunities with solutions. They are “Bridging the Innovation Development Gap” (BRIDG), making commercialization possible. Their coalition, led by the Osceola County Board of Commissioners, won $50.8 million in September 2022 as part of the challenge.
Jim Vandevere is the President of BRIDG. His career spans over 30 years of C-level experience with telecom companies, medical devices, and startups in photonics and electronics.
Ryan Buscaglia: I would love to start out and ask if you could tell us about how your coalition came together and a little bit of the history of NeoCity?
Jim Vandevere: Historically, Osceola County has been a region known for its strength in the services and agriculture industry. You either work in theme parks, restaurants, or farming— such as on a cattle ranch or something like that. During 2008, and more recently during COVID, there was a down-tick in the market where things changed, and people didn’t travel much. Osceola County specifically ranked about second in the country in unemployment. They recognized the importance of having a diversified portfolio for people to work in and wanted to add a third leg to the revenue stool, which is basically a technology component that survives downturns in markets.
Osceola County looked at what innovative technologies were being developed within Florida’s universities (University of Central Florida, University of Florida, University of South Florida) that could be leveraged, and, along with leadership from the Florida High Tech Corridor Council and Orlando Economic Partnership, felt that the semiconductor industry was an area where Florida, and particularly Osceola County in Central Florida, could truly make an impact on a national scale. After a few years, SkyWater Technology also became involved in the regional efforts. So, for nearly the past decade, the group has been working closely together and eventually formed a coalition to grow an ecosystem on a 540-acre plot of land owned by Osceola County called NeoCity. Since the start of the economic diversification initiative, the county has invested approximately $270 million to build a 200mm microelectronics fabrication facility (or ‘fab’), a STEM-based high school, and a Class A office complex onsite. With local and state funding, along with collaborations from industry partners such as TEL and SUSS Microtec, BRIDG was able to add tools to operate the fab as you see it today. Our cluster has been meeting twice a week since before the Build Back Better Regional Challenge was even a thing. We’ve been interacting for a long time.
There’s a real sense of community here. Not just within Osceola County, but along the entire Florida high tech corridor from the Space Coast to Tampa—everybody’s interested and invested (whether directly or indirectly) in the success of this environment and the NeoCity campus.
How did Osceola County make the decision to pursue semiconductors as a cluster instead of another industry, and who was making that decision? Was the county leading on that, or were there other people at the table?
The idea for NeoCity started in 2012 after a “best practice” regional leadership mission to Texas and discussions around the importance of the semiconductor industry. There were other people at the table. Everybody had an opinion. The difficult question was, “How do you fit $278 million into your budget to make this program work in Central Florida?” Osceola County’s response was, “We can make this happen. We’re the fastest growing county in the state of Florida.” They decided to take a big leap of faith with some sound financial planning, knowing that it could deliver huge dividends and a huge tax base with employees. Most semiconductor fabs employ anywhere from 300 to 1,000 people, or as many as 5,000, depending on size. And these people—ranging from PhDs to high school graduates—make a whole different level of income. In fact, the high school graduates are making significantly more than the average salary of someone working in agriculture or hospitality.
In terms of high school graduation and continuation into college or higher ed, Osceola County had ranked 61st out of 67 school districts in the state of Florida. Many students weren’t moving on to college. So, the county implemented a program two years ago to fund high school seniors from Osceola County to attend Valencia College. They paid for school—strategic workforce development stuff—and now they’ve moved up to 17th in a short window of time. That’s a huge jump!
Could you talk a little bit more about how Osceola came to focus on education and workforce as a key component? How did the high school associated with NeoCity come to be?
As soon as we made the decision to move toward semiconductors, we understood that it takes a different level of student. It takes a STEM student, somebody who’s interested in science, technology, engineering, and math. At the time, the county didn’t have that niche focus within their existing high school system. So, they proactively built a STEM-focused, project-based high school next door to the office complex at NeoCity (NeoCity Academy). That STEM high school started four years ago and graduated its first class last year in May. Nearly 100% of the students in that graduating class went on to colleges all over the country. It’s a serious testament to the academic excellence they achieved.
Osceola County had the planning and the foresight to ask, “How do we get our students more involved from the beginning?” This went all the way down to science fairs in middle schools and even science awareness outreach in elementary schools. We actively engage and support science fairs in Osceola County that target these students to get kids and parents to understand the technology that we’re developing. SkyWater, BRIDG, NeoCity Academy, Osceola County, the Florida High Tech Corridor, and imec are all actively involved in support of education in this community.
So, we’ve talked about a couple of different groups involved. We’ve talked about education. We’ve talked about the role of the county. But how does the coalition approach working with other stakeholders?
As a not-for-profit organization, BRIDG interfaces with a lot of universities. Mostly universities in Florida. However, we work with universities across the United States like UC Davis, Stanford, MIT, Georgia Tech, ASU, and Ohio State. I will tell you that one of the biggest accomplishments from an outside point of view is that imec, a Belgian company, chose Florida to open their U.S. headquarters. They basically solidified the vision of the county because they are a very advanced engineering company based out of Leuven where they have a 300mm fab and a lot of capabilities in design. They work with Fortune 500 companies all over the world. And their U.S. headquarters is right here at NeoCity in Osceola County. That interface and that linkage validates what Osceola County wanted to do and continues to do every day.
From an industry partnership standpoint, we have two equipment suppliers that are ranked in the top five equipment suppliers of all semiconductor fabs—big multi-billion-dollar companies—that use our site as a demo center for their U.S. capabilities.
Locally, we work with regional industry associations such as the Manufacturing Association for Central Florida, FloridaMakes, and the Florida Photonics Cluster. We have a good relationship with Valencia College, and then we have our partner school, the Institute for Simulation and Training at UCF, which is globally renowned for its modeling, simulation, and training program. Then, we have ongoing project collaborations with the University of Florida as well. So, we have a really solid record of pulling in local as well as international friendlies to the site to make it successful.

Between Kissimmee and St. Cloud, NeoCity Academy is one of the community’s current tenants. Students engage in inquiry-driven, project-based learning, aiming to enter the workforce with enhanced technical skills.
Of the research institutions you mentioned: I know one of the key projects in your Build Back Better proposal is working with UCF on creating digital twins of the facility. Could you talk about what motivated that and why you thought that that was an important project?
It’s definitely an important project. The Air Force funded secure digital twins for DoD applications to virtually show the pipeline of activities from design all the way through to where the product winds up—showing sourcing components to fab processing, all the fab machines and everything like that and all the decision points. It’s basically taking snapshots to understand design attributes, metrology testing, inspections, etc. all the way into final test assembly and then out to the end use product.
Now how is UCF involved? They are modeling and simulating how components go through the process. So, as you go through each tool and it performs its function, they’re pulling data from that performance to understand the process. In the end, we will understand the entire flow. Once you understand that flow, you will then know what the variables are. Doing a simulation of ‘what if’ scenarios can help industry understand costs and what kind of tool sets you need to do certain things. Then, you can do a predictive plan model saying ‘hey, it’s going to cost you this much money. You’re going to need this many people, and this is what you need.’ It’s already being done for most industries, for example the automotive industry, but this is the first semiconductor play.
Taking a step back from the projects that you’re working on and going back to how you originally came up with the plan for the Build Back Better grant, I would love to know who actually did the work to write this Build Back Better Regional Challenge application. Who took the lead and how did you decide that they would take the lead? Was it natural or did it come about from a consensus process?
It came about from a consensus process. The detailed technical information came mostly from BRIDG and the overarching documentation came from a grant writing group and Osceola County staff. This direction and the knowledge of how to submit this grant was very important.
The overall vision of how we present ourselves came from our normal meetings with the county manager, the team, and me. The creation of the grant was a complete team effort. Honest, straightforward questions, challenges, and feedback came up amongst the coalition group while we developed this. We learned a lot more about each other and how every individual group does business. We went through six months of hard grinding to get that grant to be representative of the coalition.
It sounds like the recipe called for not only deep technical expertise, but also an open and collaborative group working on it. And then a clear vision that tied all of the disparate parts together. Are those the main ingredients?
Yes, and I would add understanding and knowledge of grant writing and what the reviewers are looking for.
Even the best teams have disagreements sometimes. Are there any moments where your coalition had different ideas about what the vision for the future was? And how did you reconcile those situations?
As a kid, my father told me there’s no ‘I’ in team. We win together, we lose together. But there are always ‘me’ people in a group. Getting those dynamics out of the way when you first start is important. The best part about our group is that we have a common vision of making NeoCity work. That was the underlying tone. Their ideas were maybe not like yours or anybody else’s, but everybody’s heart was in the right place. The challenge was how you communicate to get everybody to understand your idea. That is why our coalition is successful. We’re not afraid to have honest conversations and disagreements, but we work through it.
How will Osceola County look different in 10 years if you’re successful and doing what you all proposed? What does it look like for the class of 2033 coming out of NeoCity Academy or for the families who can rely on working in the fab or someplace adjacent?
In 10 years…I think the opportunity for NeoCity is like in many places across the United States. Today is when you’re going to see the most funding ever. You’re not going to see this again, probably, for another decade. However, I would say that there is nothing stopping the growth of NeoCity as long as we execute the way we’ve been executing and hold true to our vision. I will say 10 years from now, we will have successfully created the Design Center of Excellence to attract more companies, in addition to the ones you see here today. You will probably see seven or eight more companies in a group that support leapfrog advanced studies and work on that as a collaborative effort.
Additionally, we’ve put significant funds toward workforce development including our local community college system. Our goal is to increase STEM capabilities and provide equipment, information, and better tools to support Osceola County. We want to continue the support of the Osceola County gift that has provided a Valencia College education for every high school senior that graduated in the Osceola County system over the last two years. We can supplement that and create a more robust avenue that connects students not only to the existing Valencia College environment, but also to all universities throughout the state of Florida. The goal is to elevate the entire state. Staffing in the semiconductor industry right now is tough— everybody’s complaining about it across the country. They can’t get the workforce. In Florida, we control our own destiny. That’s what I hope to see in 10 years.
The Ghost Guns Haunting National Crime Statistics
There are over 350 million guns in the United States, and an unknown number that are completely untraceable. The proliferation of privately made firearms, also known as ghost guns, has contributed to the highest rate of firearm-related homicides in 25 years. Non-serialized and inexpensive, ghost guns have emerged as a cataclysmic issue in the violence epidemic in our nation.
In his 2022 State of the Union address, President Biden outlined a comprehensive gun strategy that included an effort to help stop the propagation of ghost guns. Eleven states have adopted regulations for ghost guns, though much more is needed to curb the current grave issues with these types of firearms. Federally-approved standardized training needs to be provided to law enforcement officers so they can properly identify unserialized weapons. Law enforcement agencies need to update case management systems to allow for the real-time tracking necessary to determine ghost gun involvement in crimes and how laws and enforcement efforts are curbing their use.
Without serial numbers or other traceable features on the gun frame, slide, or other components, tracking weapon movement from sales and thefts is impossible. Casings recovered from shooting scenes can be tracked nationally through the National Integrated Ballistic Information Network, utilizing the individuality of firing pins on casings and linking casings from different scenes to one weapon. Even with tracking capabilities from casings, ghost guns create significant investigative and safety challenges, especially since most of the ghost weapons authorities are able to seize are possessed by persons prohibited from owning a firearm.
Unlike commercially-made serialized firearms, ghost guns circumvent traditional background checks, convicted felon restrictions, and waiting periods since they are sold as components rather than a completed gun. Some components of ghost guns can also be 3-D printed with readily available online instructions, or milled, where tools are used to drill weapon components. After being denied a traditional firearm purchase two years earlier, a 23-year-old obtained parts and instructions and built a ghost gun, later using the weapon to kill five people in Santa Monica, CA.
Ghost guns are not new, with assembly kits being available since the 1990s. The increasing ease of internet sales has made obtaining the weapons easier than ever. The component nature of the assembly kits allowed firearms sellers to capitalize on legal loopholes by selling unfinished receivers for assault-style rifles, bypassing the ban on assault rifles in California and other states. Ghost guns are sought after by violent extremists, felons, and persons prohibited from legally possessing firearms.
There have been over 37,000 ghost guns recovered since 2017, with a 1083% increase in recoveries from 2017-2021. The recovery of these firearms is likely underreported, with many law enforcement agencies not having the reporting tools or training required to recognize and trace unserialized weapons. Recognizing the dangers of ghost guns and their unrestricted nature, the Biden Administration has supported new laws to serialize existing and future privately made firearms, require background checks for gun kit purchases, and require manufacturers to be federally licensed. The Bureau of Alcohol, Tobacco, Firearms, and Explosives also redefined gun components in April 2022 to be more inclusive of the new types of weapons produced and require serialization of vital components.
Ghost guns continue to be one of the biggest challenges to fighting gun violence. An increase in training law enforcement officers to recognize and adequately track ghost guns will assist in data collection, and priority should be placed on ensuring compliance with new laws. As technology changes and other firearm-type components emerge, the government must remain apprised of future threats to public safety and provide resources to research this phenomenon and reduce the danger to the community.
Cultural Burning: How Age-Old Practices Are Reshaping Wildfire Policy
The Wildland Fire Mitigation and Management Commission called for input from diverse stakeholders and FAS, along with partners Conservation X Labs (CXL), COMPASS, and the California Council on Science and Technology (CCST), answered the call.
Recruiting participants from academia, the private sector, national labs, and other nonprofits, the Wildland Fire Policy Accelerator produced 24 ideas for improving the way the country lives with wildland fire.
‘Cultural Burning’ is a phrase that is cropping up more and more in wildland fire policy discussions, but it’s still not widely understood or even consistently defined.
Liam Torpy of Conservation X Labs sat down with FAS to discuss why ‘cultural burning’ is garnering more attention in the world of wildfire mitigation and management.
FAS: Liam – thanks for joining us. To start, just give us a quick introduction to Conservation X Labs and its mission.
LT: The founders of Conservation X Labs [Paul Bunje and Alex Dehgan] wanted to create a conservation technology organization that, you know, isn’t just doing the same traditional conservation methods of protected areas and command and control. CXL wants to find innovative solutions to these problems that can harness market forces or that develop new technologies that will allow for breakthroughs–because the problems have been increasing exponentially in the conservation field, but the solutions haven’t kept pace. In a lot of these critical ecosystems, like the American West with wildfire, or the Amazon, we’re simply not doing enough. And the problem is getting worse as global forces, like climate change, compound it.
FAS: CXL has been convening what you call “Little Think” events – roundtable discussions aimed at surfacing new ideas in the area of wildfire management – when you decided to partner with FAS on this Wildland Fire Policy Accelerator. Cultural burning became one of the big areas of focus for the recommendations coming out of this process. Some people may be familiar with the idea of “prescribed burning” – using fire to reduce the risk of uncontrolled megafires down the road – but ‘cultural burning’ is something quite different. Can you explain what’s different and why it’s important?
LT: You can read a lot of reports, or see some statutes on the books, legally, that will oftentimes not reference cultural burning at all. Some do – but it’s kind of a footnote that’s put under ‘prescribed burning’ – many publications treat it the same way. But prescribed burning, which can have real ecological benefits, is often only measured by the government using acreage: how much land can we burn?
With cultural burning, there’s not a single definition, because each Tribe has their own version of it. But it’s often to cultivate natural resources or encourage new growth of a particularly important plant. So it’s much more targeted than prescribed burning – it’s suited to the land and the resources a Tribe has. It’s deeply rooted in place-based knowledge.
It’s also a very important method of intergenerational knowledge transfer. [Cultural fire practitioners] sometimes say that ‘when you burn together, you learn together.’ It’s a way to teach the rest of your group what resources there are, how to steward them, and how everybody is coming together to manage the land and take care of it.
FAS: So why is there a tension between traditional federal and state fire management methods and cultural burning?
LT: A lot of people, I think, don’t really recognize this: you think that because a lot of Tribes have reservations, or Tribal trust land or some of their own fee land, they can just go and burn as they wish. But the people on the ground that we’ve talked to, including some participants in this accelerator, say Tribal trust land is some of the hardest land to burn on. It’s pretty much considered federal land, administered by the Bureau of Indian Affairs (BIA). That means pretty much every time you want to burn on the land, you have to have a burn plan and submit that to the BIA, which is generally very understaffed. Only one person may be looking over those documents. Then a BIA ‘burn boss’ is considered the only person qualified to actually lead the burn – and that is already kind of infringing on the sovereignty of the Tribe itself: having their own burn led by this outsider within the federal government. And oftentimes you have to go through a NEPA (National Environmental Policy Act) permitting process, which is a very long and expensive process that requires public comments. There are local air districts that regulate smoke. Then you have to have an approved burn window where they say, okay, the conditions are good. And that often happens very rarely. And so a lot of tribes don’t even attempt to go through this whole process. It’s simply too much administrative burden on them.
FAS: And it’s not just the administrative burden, right? There seems to be some real hesitancy to allowing more cultural burning from the agencies who manage this land, and from communities nearby. Why is that?
LT: The public is often skeptical of both prescribed and cultural burning. They’re scared of fire because of all the megafires. So it can be hard to get public support sometimes. And because of that, a lot of these federal agencies, which by their nature are very risk averse, are unwilling to move forward with some of these plans that can be perceived as risky when it’s easier just to do nothing. Their approach is just, when a fire comes through, try to fight it. Say you did the best you could, even though it burns down half the forest and becomes a high severity fire.
FAS: Tell us about the Accelerator participants you worked with.
LT: We talked with Nina Fontana, Chris Adlam, Ray Guttierez, and then [FAS’] Jessica Blackband worked with Kyle Trefny and Ryan Reed. Ryan and Ray are both members of Tribes, and the others are non-Indigenous but working in that sphere and trying to support cultural fire. These are already busy people, trying to reestablish some of these traditions and fighting against these institutional barriers. Their first priority may not be to fly out to Washington to talk with federal policymakers or sit down at their computer and develop and research these recommendations. But they have a really deep on-the-ground perspective that a lot of people in Washington don’t have, and that a lot of people on the Commission don’t have.
FAS: Can you give us an example of what kinds of recommendations emerged from the process?
LT: One thing that’s important to understand is that these recommendations are not the be all and end all of this issue. These are steps – often the most basic steps we can take to start to give cultural fire the respect and the place it deserves with fire management. Fire has been functionally banned from the land for over a century – over a century of extreme fire suppression tactics in the American West. A lot of these tribes that previously had been burning for centuries, or sometimes even millennia, weren’t allowed to continue that cycle. It was illegal – it was criminalized. And so that knowledge is just lost. And so some tribes are seeking to regain that knowledge.
There’s a Tribal Ranger Program recommended by Chris Adlam – which is modeled after programs in Canada and Australia – creating permanent, long-term opportunities for Tribal members to exercise their traditions, to put fire on the land, and to build up that intergenerational knowledge. These would not be just short-term, one-summer internship opportunities, but real employment opportunities that allow them to put fire on the land.
Another important recommendation, from Raymond Guttierez, is establishing a federal definition of ‘cultural fire’ and ‘cultural fire practitioner’. Right now, there’s not even really a legally recognized definition for the very practice itself – only for prescribed burning. And it wouldn’t just be one definition, it’d be regionally specific. And Tribes would help develop that in each area.
FAS: What part of the process was most rewarding for you, personally?
LT: I think one of the things that was rewarding is that these participants, in the beginning, were a little skeptical that what they had to say would actually be important, or would be more useful than the information that decision makers in Washington already had at their disposal. But they really did have a lot to say and a lot to contribute to this national conversation. And so I think it was really cool to see just how, by the end, they got validation that they have really useful information and experience that needs to be heard by people in power.
FAS: The Biden Administration has made a point of incorporating Indigenous knowledge into federal decision-making. But guidance from the Executive Branch is one thing – real impact on the ground is another. Do you think Indigenous practices, like cultural burning, are actually gaining support in the communities affected by wildfire?
LT: I think there’s also a broader movement within our society focused on diversity and equity and inclusion. Looking at the historical injustices that Tribes have faced, and trying to give them compensation when they do participate in these processes, and give their input and share their traditional knowledge – we need to make sure we are adequately valuing that. And so I think that’s also another element that’s giving this a boost. Hopefully, we see more and more people in power incorporating these ideas. And really, it’s not just about them incorporating the ideas – it’s about allowing Tribes to lead this movement, and to lead these burns. Some of it is just getting out of their way. Some of it is giving them more of a platform. But what we don’t want is just for the system in place to kind of co-opt the Tribal practices and leave the cultural fire practitioners in the dust.
But I also think having the White House make that statement about Indigenous knowledge is really significant. By getting encouragement from the top that [agencies] should look into cultural burning, or look into place-based knowledge and traditional ecological management, that kind of gives them more of a push to go and form these partnerships. And I think there’s more and more attention on these issues. As we look at the wildland fire crisis right now, it’s going out of control. Given the amount of money that we’re spending on it, asking questions about whatever we’ve been doing for the last century or so is warranted. Before that century of suppression, tribes were getting more fire on the ground. People are looking at this more and more, trying to learn, and giving it the respect that it really deserves, and the attention that it deserves.
Systems Thinking In Entrepreneurship Or: How I Learned To Stop Worrying And Love “Entrepreneurial Ecosystems”
As someone who works remotely and travels quite a long way to be with my colleagues, I really value my “water cooler moments” in the FAS office, when I have them. The idea for this series came from one such moment, when Josh Schoop and I were sharing a sparkling water break. Systems thinking, we realized, is a through line in many parts of our work, and part of the mental model that we share that leads to effective change making in complex, adaptive systems. In the geekiest possible terms:

Systems analysis had been a feature of Josh’s dissertation, while I had had an opportunity to study a slightly more “quant” version of the same concepts under John Sterman at MIT Sloan, through my System Dynamics coursework. The more we thought about it, the more we saw that systems thinking and system dynamics were present across the team at FAS–from our brilliant colleague Alice Wu, who had recently given a presentation on Tipping Points, to folks who had studied the topic more formally as engineers, or as students at Michigan and MIT. This led to the first meeting of our FAS “Systems Thinking Caucus” and inspired a series of blog posts intended to make this philosophical through-line more clear. This post is just the first; it describes how and why systems thinking is so important in the context of entrepreneurship policy, and how systems modeling can help us better understand which policies are effective.
The first time I heard someone described as an “ecosystem builder,” I am pretty sure that my eyes rolled involuntarily. The entrepreneurial community, which I have spent my career supporting, building, and growing, has been my professional home for the last 15 years. I came to this work not out of academia, but out of experience as an entrepreneur and leader of entrepreneur support programs. As a result, I’ve always taken a pragmatic approach to my work, and avoided (even derided) buzzwords that make it harder to communicate about our priorities and goals. In the world of tech startups, in which so much of my work has roots, buzzwords from “MVP” to “traction” are almost a compulsion. Calling a community an “ecosystem” seemed no different to me, and totally unnecessary.
And yet, over the years, I’ve come to tolerate, understand, and eventually embrace “ecosystems.” Not because it comes naturally, and not because it’s the easiest word to understand, but because it’s the most accurate descriptor of my experience and the dynamics I’ve witnessed first-hand.
So what, exactly, are innovation ecosystems?
My understanding of innovation ecosystems is grounded first in the experience of navigating one in my hometown of Kansas City–first, as a newly minted entrepreneur, desperately seeking help understanding how to do taxes, and later as a leader of an entrepreneur support organization (ESO), a philanthropic funder, and most recently, as an angel investor. It’s also informed by the academic work of Dr. Fiona Murray and Dr. Phil Budden. The first time that I saw their stakeholder model of innovation ecosystems, it crystallized what I had learned through 15 years of trial-and-error into a simple framework. It resonated fully with what I had seen firsthand as an entrepreneur desperate for help and advice–that innovation ecosystems are fundamentally made up of people and institutions that generally fall into the same categories: entrepreneurs, risk capital, universities, government, or corporations.
Over time, both as a student and as an ecosystem builder, I came to see the complexity embedded in this seemingly simple idea and evolved my view. Today, I amend that model of innovation ecosystems to, essentially, split universities into two stakeholder groups: research institutions and workforce development. I take this view because, though not every postsecondary institution is a world-leading research university like MIT, smaller and less research-focused colleges and universities play important roles in an innovation ecosystem. Where is the room for institutions like community colleges, workforce development boards, or even libraries in a discussion that is dominated by the need to commercialize federally-funded research? Two goals–the production of human capital and the production of intellectual property–can also sometimes be in tension in larger universities, and thus are usually represented by different people with different ambitions and incentives. The concerns of a tech transfer office leader are very different from those of a professor in an engineering or business school, though they work for the same institution and may share the same overarching aspirations for a community. Splitting the university stakeholder into two different stakeholder groups makes the most sense to me–but the rest of the stakeholder model comes directly from Dr. Murray and Dr. Budden.

One important consideration in thinking about innovation ecosystems is that boundaries really do matter. Innovation ecosystems are characterized by the cooperation and coordination of these stakeholder groups–but not everything these stakeholders do is germane to their participation in the ecosystem, even when it’s relevant to the industry that the group is trying to build or support.
As an example, imagine a community that is working to build a biotech innovation ecosystem. Does the relocation of a new biotech company to the area meaningfully improve the ecosystem? Well, that depends! It might, if that company actively engages in efforts to build the ecosystem: say, by directing an executive to serve on the board of an ecosystem-building nonprofit, helping to inform workforce development programs relevant to their talent needs, instructing their internal VC to attend the local accelerator’s demo day, offering dormant lab space in their core facility to a cash-strapped startup at cost, or engaging in sponsored research with the local university. Relocation of the company may not improve the ecosystem if they simply happen to be working in the targeted industry and receive a relocation tax credit. In short, by itself, shared work between two stakeholders on an industry theme does not constitute ecosystem building. That shared work must advance a vision that is shared by all of the stakeholders that are core to the work.
Who are the stakeholders in innovation ecosystems?
Innovation ecosystems are fundamentally made up of six different kinds of stakeholders, who, ideally, work together to advance a shared vision grounded in a desire to make the entrepreneurial experience easier. One of the mistakes I often see in efforts to build innovation ecosystems is an imbalance or an absence of a critical stakeholder group. Building innovation ecosystems is not just about involving many people (though it helps), it’s about involving people that represent different institutions and can help influence those institutions to deploy resources in support of a common effort. Ensuring stakeholder engagement is not a passive box-checking activity, but an active resource-gathering one.
An innovation ecosystem in which one or more stakeholders is absent will likely struggle to make an impact. Entrepreneurs with no access to capital don’t go very far, nor do economic development efforts without government buy-in, or a workforce training program without employers.
In the context of today’s bevy of federal innovation grant opportunities with 60-day deadlines, it can be tempting to “go to war with the army you have” instead of prioritizing efforts to build relationships with new corporate partners or VCs. But how would you feel if you were “invited” to do a lot of work and deploy your limited resources to advance a plan that you had no hand in developing? Ecosystem efforts that invest time in building relationships and trust early will benefit from their coordination, regardless of federal funding.
These six stakeholder groups are listed in Figure 2 and include:
- Entrepreneurs – Those who have started and are working to start new companies, including informal entrepreneurs, sole proprietors, small businesses, tech startups, university researchers considering or pursuing tech transfer, deep tech startups, manufacturing firms, service firms, and non-profit organizations that convene them and are accountable to them.
- Government – Public entities of all levels and branches, including local, state, and federal government agencies and officials, as well as pseudo-governmental organizations, Councils of Governments (COGs) or Economic Development Districts (EDDs), economic development organizations and Chambers of Commerce (which may alternately be considered part of the corporations bullet, depending on their accountability structure), and public-private partnerships.
- Corporations – Large and established companies in a region that are relevant in their capacity as major employers, large-scale purchasers, pilot customers, sponsors of research, and potential strategic investors and acquirers of technology and innovation-driven companies. Corporations might also act in the classical definition of cluster development, providing fractional access to advanced equipment or capabilities that the scale of their cap-ex facilitates, to improve access to such facilities for smaller or newer companies with fewer assets to fund such investments.
- Workforce Development – The programs and capabilities in a community that produce a base of employees with the specific skills and competencies to support both growing and established companies, including K-12 systems and districts, educators, non-degree credential programs, professional training programs or job pipelines, skills-based development communities and meetups, regional workforce partnerships, community colleges, and colleges and universities of all kinds.
- Capital – Providers of private capital that supports the creation of commercial value in exchange for a return on investment, including venture capital, angel investors, angel networks, private equity investors, limited partners or institutional investors, as well as community banks, CDFIs, CDCs, other non-bank loan funds, fintechs, and providers of alternative financing such as factoring or revenue/royalty-based financing.
- Research Institutions – Organizations which conduct the basic and applied research from which deep tech businesses might be formed and begin the process of commercializing that research, including research universities and affiliated centers and institutes, research and teaching hospitals, private research institutions, national labs, Federally Funded Research and Development Centers (FFRDCs), and Focused Research Organizations.
In the context of regional, place-based innovation clusters (including tech hubs), this stakeholder model is a tool that can help a burgeoning coalition both assess the quality and capacity of their ecosystem in relation to a specific technology area and provide a guide to prompt broad convening activities. From the standpoint of a government funder of innovation ecosystems, this model can be used as a foundation for conducting due diligence on the breadth and engagement of emerging coalitions. It can also be used to help articulate the shortcomings of a given community’s engagements, to highlight ecosystem strengths and weaknesses, and to design support and communities of practice that convene stakeholder groups across communities.
What about entrepreneur support organizations (ESOs)? What about philanthropy? Where do they fit into the model?
When I introduce this model to other ecosystem builders, one of the most common questions I get is, “where do ESOs fit in?” Most ESOs like to think of themselves as aligned with entrepreneurs, but that merits a few cautionary notes. First, the critical question you should ask to figure out where an ESO, a Chamber or any other shape-shifting organization fits into this model is, “what is their incentive structure?” That is to say, the most important thing is to understand to whom an organization is accountable. When I worked for the Enterprise Center in Johnson County, despite the fact that I would have sworn up-and-down that I belonged in the “E” category with the entrepreneurs I served, our sustaining funding was provided by the county government. My core incentive was to protect the interests of a political subdivision of the metro area, and a perceived failure to do that would have likely resulted in our organization’s funding being cut (or at least, in my being fired from it). That means that I truly was a “G,” or a government stakeholder. So, intrepid ESO leader, unless the people that fund, hire, and fire you are majority entrepreneurs, you’re likely not an “E.”
The second danger of assuming that ESOs are, in fact, entrepreneurs is that it often leads to a lack of actual entrepreneurs in the conversation. ESOs stand in for entrepreneurs who are too busy to make it to the meeting. But the reality is that even the most well-meaning ESOs have a different incentive structure than entrepreneurs–meaning that it is very difficult for them to naturally represent the same views. Take, for instance, a community survey of entrepreneurs that finds that entrepreneurs see “access to capital” as the primary barrier to their growth in a given community. In my experience, ESOs generally take that somewhat literally, and begin efforts to raise investment funds. Entrepreneurs, on the other hand, who simply meant “I need more money,” might see many pathways to getting it, including by landing a big customer. (After all, revenue is the cheapest form of cash.) This often leads ESOs to prioritize problems that match their closest capabilities, or the initiatives most likely to be funded by government or philanthropic grants. Having entrepreneurs at the table directly is critically important, because they see the hairiest and most difficult problems first–and those are precisely the problems it takes a big group of stakeholders to solve.
Finally, I have seen folks ask a number of times where philanthropy fits into the model. The reality is that I’m not sure. My initial reaction is that most philanthropic organizations have a very clear strategic reason for funding work happening in ecosystems–their theory of change should make it clear which stakeholder views they represent. For example, a community foundation might act like a “government” stakeholder, while a funder of anti-poverty work who sees workforce development as part of their theory of change is quite clearly part of the “W” group. But not every philanthropy has such a clear view, and in some cases, I think philanthropic funders, especially those in small communities, can think of themselves as a “shadow stakeholder,” standing in for different viewpoints that are missing in a conversation. Philanthropy might also play a critical and underappreciated role as a “platform creator.” That is, they might seed the conversation about innovation ecosystems in a community, convene stakeholders for the first time, or fund activities that enable stakeholders to work and learn together, such as planning retreats, learning journeys, or simply buying the coffee or providing the conference room for a recurring meeting. Finally, and especially right now, philanthropy has an opportunity to act as an “accelerant,” supporting communities by offering the matching funds that are so critical to their success in leveraging federal funds.
Why is “ecosystem” the right word?
Innovation ecosystems, like natural systems, are both complex and adaptive. They are complex because they are systems of systems. Each stakeholder in an innovation ecosystem is not just one person, but a system of people and institutions with goals, histories, cultures, and personalities. Not surprisingly, these systems of systems are adaptive, because they are highly connected and thus produce unpredictable, ungovernable performance. It is very, very difficult to predict what will happen in a complex system, and most experts in fields like system dynamics will tell you that a model is never truly finished, it is just “bounded.” In fact, the quality of a systems model is usually judged by how closely it maps to a reference mode of output in the past. This means that the best way to tell whether your systems model is any good is to give it “past” inputs, run it, and see how closely it compares to what actually happened. If I believe that job creation is dependent on inflation, the unemployment rate, availability of venture capital, and the number of computer science majors graduating from a local university, one way to test whether that is truly the case is to input those numbers over the past 20 years, run a simulation of how many jobs would be created according to the equations in my model, and see how closely that maps to the actual number of jobs created in my community over the same time period. If the line maps closely, you’ve got a good model. If it’s very different, try again, with more or different variables. It’s quite easy to see how this trial-and-error based process can end up with an infinitely expanding equation of increasing complexity, which is why the “bounds” of the model are important.
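To make that reference-mode test concrete, here is a minimal, purely illustrative sketch in Python. The historical job numbers, input series, coefficients, and the linear job-creation hypothesis are all invented for the example; a real system dynamics model would add stocks, feedback loops, and delays.

```python
# Minimal sketch of "reference mode" validation for a toy ecosystem model.
# Every series and coefficient below is a made-up illustration, not data
# from any real community or a calibrated system dynamics model.

historical_jobs = [1200, 1350, 1500, 1400, 1600, 1900, 2100]   # jobs created per year (invented)
inflation       = [0.020, 0.021, 0.019, 0.025, 0.030, 0.028, 0.026]
unemployment    = [0.055, 0.050, 0.048, 0.070, 0.060, 0.045, 0.040]
vc_invested_m   = [10, 12, 15, 9, 20, 30, 35]                   # venture capital deployed, $M (invented)
cs_grads        = [200, 220, 240, 250, 260, 300, 320]           # local CS graduates (invented)

def simulate_jobs(infl, unemp, vc, grads):
    """A deliberately simple linear hypothesis for annual job creation."""
    return 500 + 40 * vc + 2.0 * grads - 5000 * infl - 3000 * unemp

simulated = [
    simulate_jobs(i, u, v, g)
    for i, u, v, g in zip(inflation, unemployment, vc_invested_m, cs_grads)
]

# Compare the simulated run against the historical "reference mode".
mean_abs_error = sum(abs(s - h) for s, h in zip(simulated, historical_jobs)) / len(historical_jobs)
print("simulated:", [round(s) for s in simulated])
print("actual:   ", historical_jobs)
print(f"mean absolute error: {mean_abs_error:.0f} jobs/year")

# A large error suggests the hypothesis is missing variables, feedback
# loops, or delays -- which is exactly the argument against linear
# "money in = jobs out" assumptions.
```

If the simulated line tracks the historical one closely, the model is a reasonable candidate for “what if” scenario testing; if not, its structure or bounds need rethinking.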
Finally, complex, adaptive systems are, as my friend and George Mason University Professor Dr. Phil Auerswald says, “self-organizing and robust to intervention.” That is to say, it is nearly impossible to predict a linear outcome (or whether there will be any outcome at all) based on just a couple of variables. This means that the simple equation (money in = jobs out) is wrong. Understanding the impact of a complex, adaptive system requires mapping the whole system and understanding how many different variables change cyclically and in relation to each other over a long period of time. It also requires understanding the stochastic nature of each variable. That is a very math-y way of saying it requires understanding the precise way in which each variable is unpredictable, or the shape of its bell-curve.
All of this is to say that understanding and evaluation of innovation ecosystems requires an entirely different approach than the linear jobs created = companies started * Moretti multiplier assumptions of the past.
So how do you know if ecosystems are growing or succeeding if the number of jobs created doesn’t matter?
The point of injecting complexity thinking into our view of ecosystems is not to create a sense of hopelessness. Complex things can be understood–they are not inherently chaotic. But trying to understand these ecosystems through traditional outputs and outcomes is not the right approach since those outputs and outcomes are so unpredictable in the context of a complex system. We need to think differently about what and how we measure to demonstrate success. The simplest and most reliable thing to measure in this situation then becomes the capacities of the stakeholders themselves, and the richness or quality of the connections between them. This is a topic we’ll dive into further in future posts.
We have the data to improve social services at the state and local levels. So how do we use it?
The COVID-19 pandemic laid bare for some what many already knew: the systems that our nation relies upon to provide critical social services and benefits have long been outdated and undersupported, and they provide atrocious customer experiences that would quickly lead most private enterprises to failure.
From signing up for unemployment insurance to managing Medicaid benefits or filing annual tax returns, many frustrating interactions with government services could be improved by using data from user experiences and evaluating it in context with similar programs. How do people use these services? Where are customers getting repeatedly frustrated? At what point do these services fail, and what can we learn from comparing outcomes across different programs? Many agencies across the country already collect a huge amount of data on the programs they run, but fall short of adequately wielding that data to improve services across a wide range of social programs. Evaluating program data is necessary for providing effective social services, yet local and state governments face chronic capacity issues and high bureaucratic barriers to evaluating the data they have already collected and translating evaluation results into improved outcomes across multiple programs.
In a recent paper, “Blending and Braiding Funds: Opportunities to Strengthen State and Local Data and Evaluation Capacity in Human Services,” researchers Kathy Stack and Jonathan Womer deliver a playbook for state and local governments to better understand the limitations and opportunities for leveraging federal funding to build better integrated data infrastructure that allows program owners to track participant outcomes.
Good data is a critical component of delivering effective government services from local to federal levels. Right now, too much useful data lives in silos, preventing other programs from conducting analyses that inform and improve their approach. State and local governments should strive to modernize their data systems by building centralized infrastructure and tools for cross-program analysis, with the ultimate goal of improving a wide range of social programs.
The good news is that state and local governments are authorized to use federal grant money to conduct data analysis and evaluation of the programs funded by the grant. However, federal agencies typically structure grants in ways that make it difficult for states and localities to share data, collaborate on program evaluation, and build evaluation capacity across programs.
Interviews with leading programs in Colorado, Indiana, Kentucky, Ohio, Rhode Island, and Washington revealed a number of different approaches that state and local governments have used to build and maintain integrated data systems, despite the challenges of working with multiple government programs. These approaches include: adopting a strong executive vision, working with external partners (such as research groups and universities), investing in building up a baseline capacity that enables higher-level analytic work, delivering crucial initial analysis that motivated policymakers to deliver direct state funding, and (most notably) figuring out how to braid and blend funds from multiple federal grant sources. The programs in these states prove that it is possible to build a centralized system that evaluates outcomes and impacts across a range of government services.

As data makes its way through an IDS, it is cleaned, verified, and matched with other data.
Stack and Womer lay out their menu of recommended options that states and localities can pursue in order to access federal funding for building data and evaluation capacity. These options include:
- stimulus funding from the American Rescue Plan’s State and Local Fiscal Recovery Fund and the Infrastructure Investment and Jobs Act;
- program-specific funding that funds centralized capacity;
- direct state or local appropriations;
- funding on a project-by-project basis;
- cost allocation billing plans; and
- hybrid funding models.
The authors advocate for states and localities to both blend funds and braid funds, when appropriate, in order to fully leverage federal funding opportunities. Blended funds are sourced from multiple grants but lose their distinction upon blending; this type of federal funding requires statutory authority, and may have uniform reporting requirements. Alternatively, braided funds also come from separate sources, but remain distinct within the braided pot, with the original reporting, tracking, and eligibility requirements preserved from each source. Financing projects and programs via braiding funds is far more time-consuming, but it does not require special statutory authority.
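One loose way to picture the difference is as two bookkeeping structures: blended dollars merge into a single pool with one set of rules, while braided dollars keep source-by-source tracking. The grant names, amounts, and rule labels in the sketch below are hypothetical, not drawn from the paper.

```python
# Conceptual illustration of blended vs. braided funds.
# Grant names, amounts, and rule labels are hypothetical.

# Blended: dollars lose their source identity; reporting is uniform.
blended_pot = {
    "total_usd": 1_500_000,              # e.g., $1.0M from Grant A + $0.5M from Grant B, merged
    "reporting": "single uniform report",
    "requires": "statutory authority to blend",
}

# Braided: each source stays distinct, keeping its own reporting,
# tracking, and eligibility requirements.
braided_pot = {
    "Grant A": {"amount_usd": 1_000_000, "reporting": "Grant A rules", "eligibility": "Grant A rules"},
    "Grant B": {"amount_usd":   500_000, "reporting": "Grant B rules", "eligibility": "Grant B rules"},
}

# Same total dollars either way; braiding just carries more tracking overhead.
print(sum(source["amount_usd"] for source in braided_pot.values()))
```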
While states and localities can strengthen and expand integrated data systems alone, the federal government should also take important steps to accelerate state and local progress. Stack and Womer point out a number of options that do not require legislative action. For example, the Office of Management and Budget (OMB) and other federal agencies could issue clear guidance that recipients of federal grants must build and maintain efficient data infrastructure and analytics capacity that can support cross-program coordination and shared data usage. Regulatory and administrative actions like this would make it easier for states and localities to finance data systems via blending and braiding federal funds.
Integrated data systems are increasingly important tools for governments to achieve impact goals, avoid redundancy, and keep track of outcomes. State and local governments should take a page from Stack and Womer’s playbook and seek creative ways of using federal grants to build out existing data infrastructure into a modern system that supports cross-program analysis.
How do you clean up 170 million pieces of space junk?
In March, NASA released the most comprehensive financial analysis of space debris to date. For the first time, this report illuminates the financial costs and benefits of various paths forward to combat one of the fastest-growing dangers in Earth’s orbit.
The space economy is enormous, but one of its biggest challenges is tiny: space debris, where a collision with an object the size of even a nickel can cause catastrophic damage. More objects are being placed into orbit now than at any point in history. This increases the chance of collisions between satellites and existing debris. There have been varying approaches to managing and mitigating debris, ranging from legislative/regulatory efforts to technological ones.
With increased activity in space, debris is a growing threat to Low Earth Orbit (LEO), the most accessible area of space. There may be as many as 170 million pieces of debris in orbit, with the vast majority too small to track due to limits in current technology, but no less dangerous. Of the 55,000 objects that we can track, more than 27,000, including spent rocket boosters, active satellites, and dead satellites, are monitored by the Department of Defense’s global Space Surveillance Network (SSN).
Due to the speed at which objects move in LEO (around 17,000 mph), the impact of even a small object, like a ping pong ball, can cause significant damage or completely shatter existing infrastructure, producing more fragments of trackable and detectable size. Twice in the last month, the International Space Station had to perform maneuvers to dodge collisions. Beyond immediate LEO congestion, the risk of Kessler Syndrome, in which existing debris sets off a growing, self-perpetuating cascade of collisions that generate ever more orbital junk, is also rising. Political leaders have begun to pay attention: Sen. John Hickenlooper (D-CO)–one of the leaders in Congress on this issue–has said “Because of the threats from debris already in orbit, simply preventing more debris in the future is not enough.”
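A rough back-of-the-envelope calculation, sketched below in Python, shows why such a small object is so destructive. The ping pong ball mass and the car comparison are illustrative assumptions, not figures from NASA's report.

```python
# Back-of-the-envelope kinetic energy of a small debris strike in LEO.
# Mass and the comparison object are illustrative assumptions.

mass_kg = 0.0027        # roughly a ping pong ball (~2.7 grams)
velocity_ms = 7_600     # ~17,000 mph, converted to meters per second

kinetic_energy_j = 0.5 * mass_kg * velocity_ms ** 2
print(f"{kinetic_energy_j / 1000:.0f} kJ")  # ~78 kJ

# For comparison, a 1,500 kg car moving at 10 m/s (~22 mph) carries
# 0.5 * 1500 * 10**2 = 75 kJ -- about the same energy, concentrated
# in an object that fits in the palm of your hand.
```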
Methods of Mitigation
Technological efforts to limit debris include making reusable rockets and maneuverable satellites. Certain satellites can adjust their position through a satellite operator, the person or entity that manages the satellite; the International Space Station, for example, performed an in-orbit maneuver to dodge debris. To meet clean-up needs, industry has developed debris-cleaning technologies like ground laser nudges, space tugs, and space lasers. Policy has not kept pace with the rapid growth of the emerging commercial space industries, and companies have been hesitant to adopt and effectively implement new technologies because costs have been uncertain.
Despite robust data on the number of objects in space, there has never been a comprehensive cost-benefit analysis of debris clean-up (remediation) methods. The new NASA analysis provides the cost of each technology and the time needed to recover that cost, giving industry a better idea of how to implement new technologies effectively.
To create the report, NASA scientists built a model that estimates the economic risk space debris imposes on satellite operators and how long it takes for a given cleanup method to pay back its cost. They then applied the model to two scenarios: prioritizing large debris removal and breakdown (removing the 50 largest and most concerning objects in orbit) and targeting small debris removal (eliminating 100,000 pieces of debris from 1–10 cm in size).
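As a rough illustration of the break-even logic described above, the sketch below computes how long it would take for avoided collision risk to pay back a cleanup cost. The per-kilogram cost and the annual risk reduction used here are placeholder assumptions for illustration, not figures from the NASA report.

```python
# Sketch of the cost-recovery ("break-even") logic described above.
# All numbers below are illustrative placeholders, not values from the NASA report.

def years_to_break_even(upfront_cost: float, annual_risk_reduction: float) -> float:
    """Years until cumulative avoided operator risk equals the cleanup cost."""
    if annual_risk_reduction <= 0:
        return float("inf")
    return upfront_cost / annual_risk_reduction

# Hypothetical scenario: remove a 2,000 kg defunct satellite with a space tug
# at an assumed $10,000/kg, avoiding an assumed $8M per year in collision risk.
cost = 2_000 * 10_000          # $20M cleanup cost (assumed)
avoided_risk = 8_000_000       # $8M/year in avoided operator risk (assumed)
print(f"Break-even in ~{years_to_break_even(cost, avoided_risk):.1f} years")
```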
Different Methods of Debris Management Technology
Debris Management Method | Application to Debris Size | Description | Estimated Cost (Low) | Estimated Cost (High) | Development Costs |
---|---|---|---|---|---|
Tug for Controlled Reentry | Large (≥10 cm) | Catch an object and adjust its orbit so it re-enters the atmosphere at a specific angle, concentrating debris fallout in a designated area. | ~$4,000 per kilogram | ~$60,000 per kilogram | n/a |
Tug for Uncontrolled Reentry | Large (≥10 cm) | Catch an object and adjust its orbit so it re-enters the atmosphere freely, with no predesignated fall area and unclear reentry timing. | ~$3,000 per kilogram | ~$40,000 per kilogram | n/a |
Ground Laser Nudges | Large (≥10 cm), Small debris (1 cm–10 cm) | Uses a ground-based laser to move an object without physical contact. Requires a lot of energy. | ~$300 per kilogram | ~$6,000 per kilogram | ~$600 million |
Space Laser Nudge | Large (≥10 cm), Small debris (1 cm–10 cm) | Uses a space-based laser to move an object without physical contact. Uses less energy than ground-based lasers since little of the beam is lost passing through the atmosphere. | ~$300 per kilogram | ~$3,000 per kilogram | ~$300 million |
Just-in-time collision avoidance (JCA) via Laser Nudges | Large (≥10 cm) | Prevents predicted collisions between large objects, such as satellites and debris, by informing laser nudges. | ~$6 (100 kg object) to ~$500 (9,000 kg object) per maneuver | ~$700 (100 kg object) to ~$60,000 (9,000 kg object) per maneuver | n/a |
Just-in-time collision avoidance (JCA) via Rapid Response Rockets | Large (≥10 cm) | Prevents predicted collisions between large objects by informing Rapid Response Rockets (RRR), which rendezvous with specific debris and alter its orbit. | $30 million per nudge | $60 million per nudge | n/a |
Physical Sweeping | Large (≥10 cm), Small debris (1 cm–10 cm) | Directly impacting debris to move or relocate it. | $90,000 per kilogram | $900,000 per kilogram | $90,000 million |
Recycling Debris | Large (≥10 cm) | Gathering debris and processing it in space for use as fuel or other purposes. | ~$1.4 billion (at ~$15,000/kg) | n/a | n/a |
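To make the table concrete, the sketch below compares a few of these options for a single hypothetical 9,000 kg derelict rocket body, using the per-kilogram and per-maneuver ranges listed above. The object’s mass and the subset of methods shown are assumptions for illustration only; note that a nudge averts one predicted collision rather than removing the object.

```python
# Compare debris-management options for one hypothetical 9,000 kg rocket body,
# using (low, high) cost ranges taken from the table above.
MASS_KG = 9_000  # assumed object mass for illustration

options = {
    "Tug, controlled reentry":        (4_000 * MASS_KG, 60_000 * MASS_KG),  # $/kg
    "Tug, uncontrolled reentry":      (3_000 * MASS_KG, 40_000 * MASS_KG),  # $/kg
    "Ground laser nudge":             (300 * MASS_KG, 6_000 * MASS_KG),     # $/kg
    # Per-maneuver range for a 9,000 kg object; avoids one collision, does not remove the object.
    "JCA laser nudge (per maneuver)": (500, 60_000),
}

for name, (low, high) in options.items():
    print(f"{name:32s} ${low:,.0f} - ${high:,.0f}")
```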
Three Key Findings
Finding 1. To reduce operator risks, small debris should be removed, and large debris should be nudged to prevent collisions.
Even though it is initially expensive, removing small debris would produce a net benefit in under a decade:

Initial investment can be made up quickly and have a large impact.
NASA’s models indicate that debris removal efforts for non-trackable debris can have immediate benefits. For trackable debris, it would take just 3-4 years to make up initial costs.
Finding 2. Spacecraft operators can recover the initial upfront cost quickly using reusable technologies that clean up debris using controlled and uncontrolled reentry.

The benefit associated with removing large objects grows every year after they are remediated.
For the 50 largest objects in space, which can be effectively removed using controlled re-entry, especially when done using reusable vehicles, cost recovery would be seen in around three decades.
Finding 3. Recycling space debris does not provide clear enough financial benefits over other debris cleanup methods.
There are potential economic and climate benefits to recycling space debris: processing debris in space reduces the risk of harmful chemicals being released into the upper atmosphere as objects burn up on reentry, and it limits the amount of debris remnants left in the upper atmosphere.
Investing in debris recycling facilities has a large upfront cost, and it is not guaranteed that a market for such facilities will emerge in the next decade. This makes projections for the value of recycling uncertain. The report indicates, however, that debris recycling is a potential long-term solution for debris management. This could be done through in-space servicing, assembly, and manufacturing (ISAM), a practice that involves operating factories and utility services in space. These facilities could be used to collect and recycle billions of dollars worth of space debris and help create a “circular space economy” that processes, recycles, builds, and refuels space infrastructure using existing debris.
Three Solutions
Ideas to address space debris have already been put forward. Day One Project contributor Lyndsey Grey outlined five policy solutions for space debris remediation; the three most relevant are highlighted below:
Recommendation 1. NASA’s Orbital Debris Program Office (ODPO), in coordination with the DOD’s Space Surveillance Network, should create a prioritized list of massive space debris items in LEO for expedited cleanup.
This is a strong start. Ranking large debris (≥10 cm) by impact and prioritizing ground-laser nudges of high-priority objects such as non-functioning satellites and spent rocket stages will deliver greater benefits at lower cost. Additionally, NASA should prioritize destroying non-trackable and other small debris.
NASA’s report finds that remediating smaller debris not only demonstrates results faster but is also a lighter financial lift. Surveying debris size and impact can be done alongside removing smaller debris, maximizing impact.
Recommendation 2. The Space Force, in collaboration with the Department of Commerce (DOC), should fund removal and/or recycling of a set number of large debris objects each year, thereby creating a reliable market for space debris removal.
We recommend that the nascent Space Force and the Department of Commerce provide funding for technical solutions to remove and recycle larger debris objects. If we are to tackle the space debris problem at all, we need funding. Large debris is already extensively cataloged, and completing the prioritized list from Recommendation 1 makes this approach practical.
NASA’s report provides the cost information needed to energize emerging space industries to start investing in debris removal tools and infrastructure. It lays out the projected upfront and upkeep costs of recycling debris along with the costs of removing large debris. Since nudges save more money while still meaningfully preventing collisions, they can free up room for developing ISAM capabilities for recycling.
Recommendation 3. NOAA’s Office of Space Commerce, in conjunction with the Space Force and NASA’s ODPO, should jointly issue an annual research report outlining risk, cost-benefit analyses, and the economics of orbital debris removal and recycling.
Having a regular cost-benefit analysis can help businesses assess the scope of their own recycling and space debris cleanup efforts. NASA’s cost-benefit analysis is aligned with the intention of this recommendation. NASA’s report also serves as a good foundation for future recurring analysis.
What Next?
Space debris isn’t going to go away, but we can start minimizing the threat it poses.
The NASA report indicates that taking action immediately will have minimal financial drawbacks, with a high debris-cleaning impact within a few years. Technologies like ground and space laser nudges provide low-cost alternatives to other debris mitigation methods currently in use. The report also provides insight into industries’ understanding of the true financial costs associated with cleaning space debris. This can incentivize innovation and create even more cost-effective technologies to manage and clean up debris. There is also an immediate need to address the space debris problem: existing U.S. government and commercial infrastructure (the International Space Station and commercial internet and science satellites) is at risk. The faster space debris is addressed, the more space innovation and invention we will see in the coming decades.
The bold vision of the CHIPS and Science Act isn’t getting the funding it needs
Originally published May 17, 2023 in Brookings.
The legislative accomplishments of the previous session of Congress have given advocates of more robust innovation and industrial development investments much to be excited about. This is especially true for the bipartisan CHIPS and Science Act (CHIPS), which committed the nation not just to compete with China over industrial policy and talent, but to advance broad national goals such as manufacturing productivity and economic inclusion while ramping up federal investment in science and technology.
Most notably, CHIPS authorized rising spending targets for key anchors of the nation’s innovation ecosystem, including the National Science Foundation (NSF), the Department of Energy’s Office of Science, and the National Institute of Standards and Technology (NIST). In that regard, the act’s passage was a breakthrough—including for an expanded focus on place-based industrial policy.
However, it’s become clear that this breakthrough is running into headwinds. In spite of ongoing rhetorical support for the act’s goals from many political leaders, neither the FY 2023 Consolidated Appropriations Act nor the Biden administration’s FY 2024 budget request has delivered on the intended funding targets. This year’s omnibus funding remained nearly $3 billion short of the authorized levels for research agencies, while the 2024 budget request undershoots agency targets by over $5 billion. And with the debt ceiling crisis coming to a head this month—and House legislation on the table that would substantially roll back federal spending—it’s even harder to be optimistic about the odds of fulfilling the CHIPS and Science Act’s vision of resurgent investment in American competitiveness.
Instead, delivery on the CHIPS and Science Act paradigm can only be fractional as of now, with a $3 billion (and growing) funding gap for research and less than 10% of the five-year place-based vision funded to date.
All of which underscores how much work remains to be done if the nation is going to deliver on the promise of a rejuvenated innovation and place-based industrial strategy. Leaders need to make an energetic and bipartisan reassertion of the CHIPS vision without delay if the government is to truly follow through on its bold promises.
CHIPS has a broad, innovative policy menu to support renewed American competitiveness
Recently, Rep. Frank Lucas (R-Okla.), chair of the House Committee on Science, Space, and Technology, rightly pointed out that the “science” portion of the CHIPS and Science Act (i.e., separate from its subsidies for semiconductor factories) will be “the engine of America’s economic development for decades to come.” One way the act seeks to achieve this is by creating the Directorate for Technology, Innovation and Partnerships at NSF, and focusing it on an evolving set of technological and social priorities (see Tables 1a + 1b). These won’t just drive NSF technology work, but will guide the development of a more concerted whole-of-government strategy.
Table 1b: Societal, national, and geostrategic challenges | |
---|---|
U.S. national security | Climate change and sustainability |
Manufacturing and industrial productivity | Inequitable access to education, opportunity, services |
Workforce development and skill gaps | |
In light of these priorities, it’s no mistake that Congress placed the NSF, the Energy Department’s Office of Science, NIST, and the Economic Development Administration (EDA) at the core of the “science” portion of the act. The first three agencies are major funders of research and infrastructure for the physical science and engineering disciplines that undergird many of these technology areas. The EDA, meanwhile, is the primary home for place-based initiatives in economic development.
Meanwhile, in keeping with the larger strategy of countering the nation’s science and technology drift, Congress adopted five years of rising “authorizations” for these core innovation agencies. However, it bears remembering that these authorizations are not actual funding, but multiyear funding targets that, if fully funded year by year, would result in an aggregate budget doubling. In short, Congress has declared that the national budget for science and technology should go up, not down, over the next five years.
It’s also worth noting that the act seeks to boost investment in many different areas, including:
- Fundamental science and curiosity-driven research funded by science agencies at federal labs, universities, and companies.
- Use-inspired research, translation, and production to expand the ability of federal agencies to invest in emerging technology, enter partnerships, and drive manufacturing innovation.
- STEM education and workforce development to create or expand programs to foster opportunities and up-skilling.
- Research facilities and instrumentation at national labs and universities across the country, including modernization of aging research infrastructure.
- Regional innovation to broaden the nation’s innovation map.
The upshot: Supporters are not wrong in seeing the CHIPS and Science Act as a major moment of aspiration for U.S. innovation efforts and ecosystems.
Government appropriations are falling short on CHIPS funding by billions of dollars
Yet for all the act’s valuable programs and focus areas, not all is well. As of now, there have been two rounds of proposed or adopted funding policy for CHIPS research agencies—and the results are mixed to disappointing, as detailed in a new funding update on the CHIPS and Science Act from the Federation of American Scientists.
The first funding round was the FY 2023 omnibus package Congress adopted last December. There, the aggregate appropriations for the NSF, Office of Science, and NIST amounted to $19.6 billion—a $2.7 billion, or 12%, shortfall below the aggregate FY 2023 target of $22.4 billion.
Table 2: Major research agency appropriations vs. CHIPS authorizations
Agency | CHIPS FY23 Authorizations | FY23 Omnibus Appropriation* | Difference ($M) | Difference (%) | CHIPS FY24 Authorizations | FY24 OMB Budget | Difference ($M) | Difference (%) |
---|---|---|---|---|---|---|---|---|
National Science Foundation | $11,897 | $9,874 | ($2,023) | -17.0% | $15,647 | $11,314 | ($4,333) | -27.7% |
DOE Office of Science | $8,902 | $8,100 | ($802) | -9.0% | $9,542 | $8,800 | ($742) | -7.8% |
National Institute of Standards & Technology | $1,551 | $1,654 | $103 | 6.6% | $1,652 | $1,632 | ($20) | -1.2% |
Totals | $22,351 | $19,628 | ($2,723) | -12.2% | $26,840 | $21,746 | ($5,095) | -19.0% |
Dollars in millions | *FY23 omnibus figures include NIST earmarks and supplemental NIST and NSF spending for CHIPS and Science activities |
Then, in March, amid what was already a yawning funding gap, the White House released its FY 2024 budget proposal. That proposal would have the three CHIPS research agencies falling further behind: $5.1 billion, or 19% below the act’s authorization.
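For readers who want to reproduce the shortfall figures in Table 2, the short sketch below recomputes each difference as appropriation (or request) minus authorization, in dollars and as a share of the authorization. The dollar amounts are the table’s own figures, in millions.

```python
# Recompute the Table 2 shortfalls: (funding - authorization), in $M and as a percent.
rows = {
    # agency: (FY23 authorized, FY23 omnibus, FY24 authorized, FY24 request), in $M
    "NSF": (11_897, 9_874, 15_647, 11_314),
    "DOE Office of Science": (8_902, 8_100, 9_542, 8_800),
    "NIST": (1_551, 1_654, 1_652, 1_632),
}

def gap(auth: float, actual: float) -> tuple[float, float]:
    """Return (difference in $M, difference as % of authorization)."""
    diff = actual - auth
    return diff, 100 * diff / auth

for agency, (a23, f23, a24, f24) in rows.items():
    d23, p23 = gap(a23, f23)
    d24, p24 = gap(a24, f24)
    print(f"{agency:22s} FY23: {d23:+,.0f}M ({p23:+.1f}%)  FY24: {d24:+,.0f}M ({p24:+.1f}%)")
```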
In both the omnibus and the budget, NSF funding was the biggest miss. This can be divided into a few segments:
- Core research directorates. Most NSF science research is channeled through six research directorates that focus on biology, computing and information science, engineering, geoscience, math and physical sciences, and social science, alongside offices focused on multiple crosscutting activities. This research lays a foundation for innovative advances and funds several mechanisms for industrial research partnerships, roughly in line with the CHIPS and Science Act’s broader industrial innovation goals. Funding for these collective activities fell about $591 million (8%) below the authorized level in the FY 2023 omnibus and $846 million (10%) below the authorized level in the FY 2024 budget request.
- Directorate for Technology, Innovation and Partnerships (TIP). This new directorate established in CHIPS is meant to support translational, use-inspired, and solutions-oriented research and development through a variety of novel modes and models, including the NSF’s Regional Innovation Engines (more on these below), translation accelerators, entrepreneurial fellowships, and test beds. Authorizers set a TIP funding target of $1.5 billion in FY 2023 and $3.4 billion in FY 2024—the most ambitious CHIPS appropriations targets by far. However, actual funding was $620 million short in FY 2023 and $2.2 billion short in the FY 2024 budget request.
- STEM education. The NSF’s Directorate for STEM Education houses activities across K-12 education, tertiary education, informal learning settings, and outreach to underserved communities. CHIPS authorized boosts for multiple directorate programs, including Graduate Research Fellowships, Robert Noyce Teacher Scholarships, and CyberCorps Scholarships, while establishing new Centers for Transformative Education Research and Translation to conduct education research and development. Collectively, these STEM education activities fell $579 million short of their $1.4 billion authorized level in the FY 2023 omnibus, and $1.1 billion short in the FY 2024 budget request.
With these shortfalls at NSF and other agencies, it will be difficult for federal science and innovation programs to have the transformative impact that CHIPS envisioned.
Funding for place-based industrial policy programs is also coming up short
In addition to the research agency shortfalls, actual funding for what we call the “place-based industrial policy” in the CHIPS and Science Act is also coming up short, by even greater relative margins. Where the agency research funding gaps are a substantial restraint on innovative capacity, the diminished place-based funding is an out-and-out emergency.
These programs are important because after years of uneven economic progress across places, CHIPS saw Congress finally accelerating large-scale, direct investments to unlock the innovation potential of underdeveloped places and regions. Thanks to some of those investments, including several new challenge grants, scores of state and local leaders across the country have thrown themselves headlong into the design of ambitious strategies for building their own innovation ecosystems.
Yet for all of the legitimate excitement and interest of stakeholders in literally every state, the numbers that permit actual implementation are not all good. Looking at several of the most visible new place-based programs, the funding news is so far mixed to outright disappointing.
- Regional Technology and Innovation Hubs: Authorized at $10 billion over five years, the program received just $500 million in the FY 2023 omnibus—one-quarter of its authorized level for the year. This has greatly limited the resources available to the EDA for “development” grants to build out the program’s 20 forecasted hubs. Currently, the EDA is planning to make only five to 10 much smaller development grants instead of the authorized 20 very large grants, with more uncertainty ahead. Meanwhile, a $4 billion request in the president’s FY 2024 budget for mandatory funding outside the normal appropriations process (as opposed to discretionary spending, which is funded through annual spending bills) faces long odds.
- Regional Innovation Engines: This NSF program received $200 million in FY 2023 appropriations, and would receive $300 million under the FY 2024 request. It was authorized somewhat differently than other CHIPS line items, receiving a joint $6.5 billion authorization over five years for the Engines along with NSF’s newly authorized Translation Accelerators program. If one counts $3.25 billion as the five-year Engines authorization, then the program has received only about 6% of its authorization so far, or 15% if it receives the FY 2024 request level.
- Distressed Area Recompete Pilot Program: This EDA program—designed to deliver grants to distressed communities to connect workers to good jobs—is a relative bright spot funding-wise. Authorized at $1 billion over the FY 2022 to FY 2026 period, the program received its full $200 million in FY 2023 and has secured the same amount in the FY 2024 request. With that said, the program could still be under threat if the debt ceiling face-off leads to spending cuts.
Table 3: Place-based innovation authorized in CHIPS and Science Act
Program | What It Does | CHIPS and Science Authorizations | Appropriation So Far | FY24 OMB Budget | Percent of Authorization Funded To Date |
---|---|---|---|---|---|
EDA Regional Technology and Innovation Hubs | Planning grants to be awarded to create regional technology hubs focusing on technology development, job creation, and innovation capacity across the U.S. | $10 billion over five years | $500 million | $48.5 million discretionary; $4 billion mandatory | 5% |
EDA Recompete Pilot Program | Investments in communities with large prime-age (25-54) employment gaps | $1 billion over five years | $200 million | $200 million | 20% |
NSF Regional Innovation Engines | Up to 10 years of funding for each Engine (total ~$160 million per) to build a regional ecosystem that conducts translatable use-inspired research and workforce development | $3.25 billion* over five years | $200 million | $300 million | 6% |
NIST Manufacturing Extension Partnership | A network of centers in all 50 states and Puerto Rico to help small and medium-sized manufacturers compete | $575 million | $188 million | $277 million | 68% |
NIST Manufacturing USA | Program office for nationwide network of public-private manufacturing innovation institutes | $201 million | $51 million | $98 million | 53% |
Totals (including MEP and M-USA FY23 authorizations) | | $15 billion | $1.1 billion | | 8% |
* The NSF Regional Innovation Engines program is assumed to have received 50% of a $6.5 billion CHIPS and Science Act provision that also authorized the Translation Accelerators program |
Besides these new CHIPS programs, two established mainstays of place-based development in the manufacturing domain are also facing funding challenges.
- NIST Hollings Manufacturing Extension Partnership: This program was slated for sizable boosts, with a $275 million authorization in FY 2023 and $300 million in FY 2024. The FY 2023 appropriation ended up $87 million short, while the FY 2024 request seeks a degree of catch-up, to within $23 million of the authorization. The request would support the National Supply Chain Optimization and Intelligence Network, to be established in FY 2023, and expand workforce up-skilling, apprenticeships, and partnerships with historically Black colleges and universities, minority-serving institutions, and community colleges.
- NIST Manufacturing USA: This program received $51 million in FY 2023 (about half of what was authorized), while the FY 2024 request again gets closer to the authorization, at $98 million. In FY 2024, NIST seeks to establish Manufacturing USA test beds, support a new NIST-sponsored institute to be competed in FY 2023, and further assist small manufacturers with prototyping and scaling of new technologies. As with all FY 2024 initiatives, outcomes depend partly on how tough the debt ceiling deal is for annual appropriations.
Overall, the current and likely future funding shortfalls facing many of the nation’s authorized place-based investments appear set to diminish the reach of these programs.
Should funding for critical technology areas be mandatory?
The CHIPS and Science Act establishes a compelling vision for U.S. innovation and place-based industrial policy, but that vision is already being hampered by tight funding. And now, the looming debt ceiling crisis is only going to make the situation worse.
Nor are there any silver bullets to resolve the situation. Somehow, Congress has to keep in sight the long-term vision for U.S. economic and military security, and find the political will to make the near-term financial commitments necessary for U.S. innovators, firms, and regions.
But it’s not just up to Congress. As we’ve seen, the White House budget also contains sizable funding shortfalls for research agencies. Federal agencies and the Office of Management and Budget will be formulating their FY 2025 budgets this summer in preparation for release next year. As they do so, they should prioritize long-term U.S. competitiveness across strategic technology areas and geographies more so than they have to date.
Lastly, while the mandatory spending proposal mentioned above for the Regional Technology and Innovation Hubs program may not get anywhere this year, mandatory funding as a mechanism for science and innovation investment is not a bad idea in principle. Nor is this the first time policymakers have pitched such an idea: The Obama administration attempted to make aggressive use of mandatory spending to supplement its base research and development requests, and congressional leaders have also floated the idea in recent years. Given the long-term nature of science and innovation, sustained and predictable support would be a boon, and a mandatory funding stream could provide much-needed stability.
Given all this, the moment may be approaching to try again to leverage mandatory funding of innovation programs. With caps on discretionary spending on the horizon but bipartisan support for the CHIPS technology agenda still in place, the time to consider a mandatory funding measure may have arrived. Such a measure—structured as, say, a “Critical Technology and National Security Fund”—would go a long way toward ensuring more sustained, stable support for critical technologies in economic and military security. This is exactly the kind of support that CHIPS provides for the semiconductor industry, which is far from the only advanced technology sector subject to global competition.
In short, as we enter the summer months and face down a looming budget crisis, Congress should do for the “science” part of its watershed bill what it did with the “chips” part. Leaders in Washington must move now to ensure that we can deliver on the commitments set forth in the CHIPS and Science Act—all of them.
What Should Come Next for the NSF Innovation Engines Communities? (And What About Those That Just Missed Out?)
The U.S. National Science Foundation (NSF) announced the inaugural NSF Regional Innovation Engines program awards last week, providing an unprecedented opportunity for communities across the United States. The Development awards, also called Type-1 awards, aim to create fertile soil for larger innovation ecosystems to grow. Each team will receive up to $1 million over a two-year period, and the opportunity to apply to become a Type-2 Engine at the end of those two years. Type-2 Engines can receive up to $160 million over ten years. Over 46 states and territories are represented, and Engines are innovating across all the major critical technology areas including:
- 15 proposals around Artificial Intelligence (AI)
- 10 proposals contributing to Semiconductor and High-Powered Computing (HPC) technologies
- 5 proposals related to Quantum Information Science and Engineering (QISE)
- 32 proposals related to advanced manufacturing
- 23 proposals related to energy and clean energy
Read more about them and check out the NSF’s breakdown of awards here.
With the potential to transform the nation’s competitiveness, the NSF Engines program paves the way for future innovation and growth following the vision of the CHIPS and Science Act of 2022. While the bipartisan cluster-building approach of the Engines program is similar to last year’s Build Back Better Regional Challenge and the newly announced Tech Hubs program, there are some key differences. First, the scope of the preliminary awards is much smaller. Second, the focus is on seeding ecosystems that have potential, rather than investing in ecosystems that have already demonstrated unique competitive outcomes. Third, this program specifically focuses its attention on groups new to government funding and on geographically and socially/economically diverse groups.
For teams that won awards
Congratulations!! Your hard work has paid off! This should be the first step on a journey towards growing an innovation ecosystem that will reshape the trajectory of your economic growth and set up emerging, globally competitive industries. This, however, is no time to rest on your laurels–in fact, preparation for your future Type 2 application starts today. Here are three things you can do to ensure your plan has a better chance of turning into reality:
Celebrate and acknowledge the achievement
This is a significant accomplishment and your community should be proud! Take the time to celebrate your team’s hard work and dedication. Share the news with your organization, partners, and community, spreading the enthusiasm and generating positive momentum. Post it on LinkedIn! Issue a press release! Hold a launch party! In a field in which the work never ends, we seldom take time to celebrate success–this is a great opportunity to pause and acknowledge the work that your partners and collaborators have done to form this coalition! It’s also a great way to get your broader community excited about the work to come.
Strengthen partnerships and collaborations with other stakeholders
The NSF Engines program emphasizes the power of collaboration and partnerships. Capitalize on your momentum by actively engaging with regional partners, including other research institutions, workforce groups, capital providers, government officials, corporate partners and entrepreneurs. If your Engines coalition leaves out any of the elements illustrated in the diagram below, one of the best ways you can prepare for the challenging work ahead is to broaden your inner circle. By leveraging diverse expertise and resources, you can create an ecosystem that amplifies the impact of your NSF Engine award–turning this from a proposal to build research capacity into a full-ecosystem approach.

Adapted from: Phil Budden and Fiona Murray. “An MIT Approach to Innovation: Ecosystems, Capacities, & Stakeholders.” MIT Lab for Innovation Science and Policy, October 2019.
Type 1 awards are led, for the most part, by universities or non-profits close to the research bench. Some of them incorporate partnerships with local workforce development groups or government engagement, but not all of them. For a development award to grow into a fully-fledged innovation ecosystem, you’ve got to work on building out the connective tissue between the stakeholders that you have yet to engage.
Reflect on what extra help you need
One of the innovative aspects of the NSF Engines program lies in just how much information is available about other awardees and the work they propose. Spend some time reviewing the plans your peers have made, and consider what great ideas might inspire your future work. Reflect, outside of the pressure of an application timeline: What aspects of work did you forget to include? Where might you need to make bigger investments to realize your coalition’s potential? Are there competencies or skills that are missing in your leadership team? In short–where do you still need help? A robust network of partners who have been engaged in ecosystem building across different industries and communities is competing right now for the opportunity to help you, as a part of the Engines Builder Platform. Spending some time in reflection now can help you prepare to tap into these resources as soon as they are available–saving time, and ensuring you put your award to its best uses.
For the teams that didn’t win Type 1 awards
Chances are, you put just as much time and thought into your application as the winners did. In the competitive funding of ecosystem building, what sets great communities apart is the breadth of their outreach, the quality of their commitments, and their ability to sustain a movement in good times and bad. Now is the most important time to show your determination and belief in the ecosystem your city can build! Here are a few things to make sure that all of the work that went into your application doesn’t simply disappear.
Secure your matching commitments
If you already started to engage funders in your community, now is a great time to schedule a conversation about what the work looks like moving forward. If you were able to raise matching funds or gather organizational commitments in support of your work, circle back to make sure that those commitments still stand. A little bit of perseverance in the face of adversity can do wonders in helping supportive partners feel a sense of confidence in your work–with or without federal funds.
Rally the troops
Your partners might be discouraged today, but the only thing that has changed in what you proposed is a little bit of federal money. Think of all of the political barriers you moved out of the way, the relationships you built, and the plans you clarified! Your community’s needs and your country’s needs have not changed in the last week. Now is a great time to remind partners of what is at stake–and encourage their continued engagement.
DON’T recycle your Engines application for Tech Hubs
It might be tempting to look at the work that your community did to support this application and simply find and replace “Engines” with “Tech Hubs.” There’s nothing legally preventing you from doing this, but such an approach is unlikely to be successful. The expectations, activities, and qualifications are fundamentally different between the Engines and Tech Hubs programs. Engines were meant to propose a “from scratch” solution, while the Tech Hubs program is looking for a recipe ready for your next big family BBQ. While your coalition relationships might help you prepare for the next application, you’ll need to think differently about your ecosystem’s strengths and weaknesses to be successful–not just slap a new title on your old Word document.
Conclusion
Whether you did or didn’t win an NSF Engine Type 1 award, your hard work and dedication to your community is to be commended. Simply fielding an application at this scale takes a significant commitment of time, expertise, and partnership. Embrace this transformative journey and unleash the power of innovation within your region.
This is just one of many opportunities to build your regional innovation ecosystem that are yet to come. And in fact, another great opportunity to build your community was announced today, in the Tech Hubs NOFO. While the nature of the work this next opportunity will fund is similar in theme, it is very different in application. As a result, winning this Engine grant doesn’t guarantee you a Tech Hub, and losing it doesn’t have any bearing on your Tech Hub prospects. Whether your work was funded this week, or remains to be funded in the future, announcements like these shouldn’t be seen as either finish lines or stop signs. There is both more work and more possibilities ahead for all communities trying to build a better economic future for themselves and for the country.
CHIPS and Science Funding Update: FY 2023 Omnibus, FY 2024 Budget Both Short by Billions
See PDF for more charts.
When Congress adopted the CHIPS and Science Act (P.L. 117-167) in 2022 on bipartisan votes, it was motivated by several concerns and policy goals. A major overarching theme is the global competition for technology and prominence in the knowledge economy, and the place of the United States in it. More broadly, Congress also sought to improve the ability of federal agencies to invest in R&D to create solutions for national challenges. To that end, the Act took a broad array of policy steps well beyond semiconductors: providing strategic focus for the federal technology enterprise, creating programs to invest in U.S. workers and regions, expanding the funding toolkit, and authorizing sizable boosts for R&D across the spectrum.
But neither the FY 2023 Consolidated Appropriations Act nor the Biden Administration’s FY 2024 budget request has managed to keep up with the agency funding commitments established in the act. FY 2023 omnibus funding was nearly $3 billion short of the authorized targets for the National Science Foundation, the Department of Energy’s Office of Science, and the National Institute of Standards and Technology. The FY 2024 request for these agencies is over $5 billion short (see graph below).
This report provides a detailed breakdown of accounts and programs for these agencies and compares current funding levels against those authorized by CHIPS and Science. The report is intended to serve as a reference and resource for policymakers and advocates as the FY 2024 appropriations cycle unfolds.

Based on agency and legislative data and the FY 2024 budget. Source: Federation of American Scientists
CHIPS and Science Background
As mentioned above and covered more fully below, CHIPS and Science took manifold steps to strengthen the U.S. science and technology enterprise. A conceptual throughline in the Act is the establishment of key technology focus areas and societal challenges defined in Section 10387, shown in the table below. While not the only priorities for the federal R&D enterprise, these focus areas provide a framework to guide certain investments, particularly those by the new NSF technology directorate.
These key technology areas are also relevant for long-term strategy development by the Office of Science and Technology Policy and the National Science and Technology Council, as directed by CHIPS and Science. Several of the technology areas also appear on the Defense Department’s Critical Technologies list.
Key Technology Focus Areas | |
---|---|
AI, machine learning, autonomy* | Advanced communications and immersive technologies* |
Advanced computing, software, semiconductors* | Biotechnology* |
Quantum information science* | Data storage and management, distributed ledger, cybersecurity* |
Robotics, automation, advanced manufacturing | Advanced energy technology, storage, industrial efficiency* |
Natural / anthropogenic disaster prevention / mitigation | Advanced materials science* |
* Also related to OUSD(R&E)-identified Defense Critical Technology Area | |

Societal / National / Geostrategic Challenges | |
---|---|
U.S. national security | Climate change and sustainability |
Manufacturing and industrial productivity | Inequitable access to education, opportunity, services |
Workforce development and skills gaps | |
Adapted from H.R. 4346, Sec. 10387 | |
While much of the focus has been on semiconductors, the activities covered in this report constitute the bulk of the “and Science” portion of CHIPS and Science. While a full index of all provisions is not the goal here, it’s worth remembering the sheer variety of activities authorized in CHIPS and Science, which cut across a few broad areas including:
- Fundamental science and curiosity-driven research funded by science agencies at federal labs, universities, and companies. CHIPS and Science covered multiple disciplines but has a particular emphasis on the physical sciences, math and computer science, and engineering. Several of these disciplines have seen their share of the federal portfolio fall dramatically in recent decades.
- Use-inspired research, translation, and production. Elements of CHIPS and Science sought to expand the ability of federal agencies to make strategic investments in emerging technologies, move new advances through the innovation chain, and work with external partners to enable the manufacture of new technologies and strengthen supply chains.
- Regional innovation. A major element of the above is emphasis on expanding the geographic footprint of federal investment, most notably through the new Regional Technology and Innovation Hubs program. The program received $500 million out of an authorized $10 billion in the FY 2023 omnibus.
- STEM education and workforce. The Act expands or creates numerous programs to foster STEM skills, opportunity and experience among students and young researchers, including through entrepreneurial fellowships, student and educator support, and apprenticeships and worker upskilling initiatives.
- Research facilities and instrumentation at national labs and universities across the country, including modernization of aging infrastructure, construction of cutting-edge user facilities, and grants for mid-scale research infrastructure projects.
Agency Fiscal Aggregates
In the aggregate, CHIPS authorized three research agencies – the National Science Foundation (NSF), the Department of Energy Office of Science (DOE Science), and the National Institute of Standards and Technology (NIST) – to receive $22.4 billion in FY 2023. The final omnibus provided $19.6 billion in the aggregate, amounting to a $2.7 billion or 12% shortfall.

As seen in Table 1, the largest shortfall from the CHIPS authorization was NSF at $2 billion or 17% below the target. NIST’s appropriation actually surpassed the authorization by $103 million, but this figure includes $395 million in earmarks. Excluding earmarks, the NIST topline appropriation tallied to $1.3 billion, a $292 million or 19% shortfall below the authorization. Note the figures in Table 1 include $1.0 billion in supplemental appropriations for NSF – amounting to its entire year-over-year increase in FY 2023 – and $27 million in supplemental appropriations for NIST.
The White House requested an aggregate $21.7 billion for FY 2024: a $2.8 billion or 15% increase above FY 2023 omnibus levels (including earmarks and supplemental spending) but still $5.1 billion or 19% below the CHIPS and Science authorization in the aggregate. Again, NSF would be subject to the biggest miss below the authorized target.
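A quick check of the earmark adjustment described above, again in millions of dollars; the $395 million earmark figure and the NIST toplines come from the text and Table 1, and the rest is simple arithmetic.

```python
# NIST FY 2023: the topline beat the authorization only because of earmarks.
nist_auth_fy23 = 1_551     # $M, CHIPS authorization
nist_approp_fy23 = 1_654   # $M, omnibus appropriation including earmarks
earmarks = 395             # $M, congressionally directed spending (per text)

excl_earmarks = nist_approp_fy23 - earmarks
shortfall = nist_auth_fy23 - excl_earmarks
print(f"Excluding earmarks: ${excl_earmarks:,}M "
      f"(${shortfall:,}M or {100 * shortfall / nist_auth_fy23:.0f}% below the authorization)")
```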
Agency Breakdowns
National Science Foundation
NSF is at the core of the CHIPS and Science goals in manifold ways. It boasts a long-term track record of excellence in discovery science at U.S. universities and is the first or second federal funder of research in several tech-relevant science and engineering disciplines. It also seeks to boost the talent pipeline by engaging with underserved research institutions and student populations, supporting effective STEM education approaches, and providing fellowships and other opportunities to students and teachers.
CHIPS and Science also expanded NSF’s ability to drive technology, innovation, and advanced manufacturing, augmenting existing innovation programs like the Engineering Research Centers and the Convergence Accelerators with new activities like the Regional Innovation Engines.

As seen in Table 2, the FY 2023 appropriation for NSF – including $1.0 billion in supplemental spending – fell $2.0 billion or 17% below the CHIPS and Science target, while the FY 2024 request is $4.3 billion or 28% below the FY 2024 target. Additional details and comparisons between appropriations, authorizations, and the request follow.
Research & Related Activities (R&RA). R&RA is the primary research account for NSF, supporting grants, centers, instrumentation, data collection, and other activities across seven directorates including the new Technology, Innovation, and Partnerships (TIP) directorate. R&RA can likely absorb substantial additional funding: the agency must routinely leave thousands of high-scoring grant proposals on the table for lack of funding. For instance, in FY 2020 alone, NSF had to leave over 4,000 proposals ranked “Very Good” or better unfunded, amounting to $3.9 billion in unfulfilled award funding. A brief look at specific line items within the R&RA account is below.
- Core Research Directorates. Most R&RA funding is channeled through the six research directorates focusing on biology, computing and information science, engineering, geoscience, math and physical sciences, and social science, as well as integrated and international programs. These directorates play a foundational role in fostering U.S. scientific disciplines, including several that are germane to CHIPS technology priorities. Congress typically – and wisely – does not provide appropriations by individual directorate, and instead appropriates a lump sum for R&RA that is then allocated by the agency, though appropriators do sometimes specify funding amounts for line items or research topics. Accordingly, most of the R&RA authorization in CHIPS and Science is unspecified with two exceptions: $55 million for mid-scale research infrastructure projects, which was fully funded in the FY 2023 omnibus; and the TIP Directorate, covered below. Excluding these elements, core R&RA funding received $6.9 billion in the FY 2023 omnibus, about $591 million or 8% below the authorized level. Core R&RA funding in the request – again excluding TIP and mid-scale infrastructure – is $846 million or 10% below the authorized level.
- Directorate for Technology, Innovation and Partnerships (TIP). (FY 2023 funding: $880 million, $620 million below authorization; FY 2024 OMB request: $1.2 billion, $2.2 billion below authorization). NSF TIP was formally established in the CHIPS and Science Act to support translational, use-inspired, and solutions-oriented R&D and to deploy novel funding modes to accelerate innovation. Authorizers set a TIP funding target of $1.5 billion in FY 2023 and $3.4 billion in FY 2024, representing by far the most ambitious appropriations targets in the bill. FY 2023 funding was allocated by the agency rather than by Congress, which should continue its practice of lump-sum appropriations for R&RA mentioned above. The FY 2024 TIP request is billions short of the authorized level yet also seeks the largest increase of any NSF directorate. Within TIP, CHIPS authorized $6.5 billion over five years combined for regional innovation engines and innovation translation accelerators and $125 million over five years for NSF entrepreneurial fellows, along with test beds, scholarships, R&D, and other activities.
STEM Education. The Directorate for STEM Education houses NSF activities across K-12, tertiary education, learning in informal settings, and outreach to underserved communities. CHIPS and Science authorized multiple individual programs including:
- Graduate Research Fellowship Program (FY 2023 funding: $322 million, $94 million below authorization; FY 2024 request: $380 million, $74 million below authorization). The program provides an excellent opportunity for students pursuing STEM careers while seeking to broaden participation.
- Robert Noyce Teacher Fellowship Program (FY 2023 funding: $69 million, $5 million below authorization; FY 2024 OMB request: $77 million, $3 million below authorization). The fellowship provides stipends, scholarships, and programmatic support to prepare and recruit highly skilled STEM professionals to become K-12 teachers in high-need districts. The CHIPS and Science Act aims to increase outreach to historically Black colleges and universities, minority institutions, higher education programs that serve veterans and rural communities, labor organizations, and emerging research institutions.
- CyberCorps (FY 2023 funding: $86 million, $16 million above authorization; FY 2024 OMB request: $74 million, $2 million above authorization). One of the few programs for which funding topped CHIPS authorized levels, CyberCorps aims to address the shortage of cybersecurity educators and researchers and augment the federal workforce by funding scholarships in exchange for a period of federal service.
- Centers for Transformative Education Research and Translation: This new program is intended to pursue multidisciplinary R&D into education innovations, with a focus on under-resourced schools and learners in low-resource or underachieving local educational agencies in urban and rural communities.

Cross-Cutting Investments in Key Technology Focus Areas. Several NSF investments are related to the key technology areas and societal challenges prioritized in CHIPS section 10387 mentioned above. A breakout of some of these is below, taken from the NSF budget justification. Funding for these research activities is spread across all NSF directorates.

Department of Energy Office of Science
The Office of Science (SC) is the largest funder of the physical sciences including chemistry, physics, and materials, all of which contribute to the technology priorities in CHIPS and Science. In addition to funding Nobel prizewinning basic research and large-scale science infrastructure, the Office also funds workforce development, use-inspired research, and user facilities that provide tools for tens of thousands of users each year, including hundreds of small and large businesses that use these services to drive breakthroughs. More than two thirds of SC-funded R&D is performed at national labs. SC also supports workforce development and educational activities for students and faculty to expand skills and experience.

As seen in Table 4, the FY 2023 omnibus topline for SC was $802 million or 9% below the authorized amount, while the FY 2024 OMB request was $741 million or 8% below the FY 2024 authorization – and indeed even fell below the FY 2023 authorization, similar to NSF’s request. Most programs would see only moderate funding increases, with fusion clearly prioritized.

Funding above or below authorizations in millions

Funding above or below authorizations in millions
- Advanced Scientific Computing Research (ASCR) funds research in AI, computational science, mathematics, and networking. Among CHIPS and Science priorities, ASCR will begin to establish a dedicated Quantum Network along with other research, testbeds, and applications in FY 2024. CHIPS authorized quantum network infrastructure at $100 million in FY 2024 and quantum hardware and research cloud access at $31.5 million in FY 2024. CHIPS and Science also authorized Computational Sciences Graduate Fellowships at $16.5 million in FY 2024.
- Basic Energy Sciences (BES), the largest SC program, supports fundamental science disciplines with relevance for several CHIPS technology areas including materials, microelectronics, AI, and others, as well as extensive user facilities and novel initiatives like the Energy Earthshots. CHIPS and Science priorities included research and innovation hubs related to artificial photosynthesis ($100 million authorized in FY 2024) and energy storage ($120 million authorized in FY 2024). It also authorized $50 million per year for carbon materials and storage research in coal-rich U.S. regions. The FY 2024 request prioritizes budget growth in support of the program’s x-ray light sources and neutron sources.
- Biological and Environmental Research (BER) supports research in biological systems science including genomics and imaging, and in earth systems science and modeling. BER programs would generally see minimal changes in FY 2024, with a moderate funding boost for the Biopreparedness Research Virtual Environment to expand to include low dose radiation research, a CHIPS and Science priority area.
- Fusion Energy Sciences (FES) supports research into matter at high densities and temperatures to lay the groundwork for fusion as a future energy source. Following the breakthrough at the National Ignition Facility, the FY 2024 request ramps up commercial development and places industry partnerships – including technology roadmapping for a fusion pilot project as highlighted in CHIPS and Science – and the establishment of four new fusion R&D centers among its fiscal priorities. It also seeks to largely sustain support for the international ITER project.
- High Energy Physics (HEP) studies fundamental particles constituting matter and energy. The FY 2024 request prioritizes the Long Baseline Neutrino Facility/Deep Underground Neutrino Experiment (LBNF/DUNE) while trimming research overall.
- Nuclear Physics (NP) conducts fundamental research to understand the properties of nuclear matter. The FY 2024 budget for NP is nearly flat, with a near-doubling of funding for Brookhaven’s Electron Ion Collider offsetting funding tightening elsewhere.
Cross-Cutting Investments in Key Technology Focus Areas. As with NSF, SC provides data on investments in crosscutting technology areas, some of which were prioritized in CHIPS and Science (Table 5). These investments involve multiple SC programs.

National Institute of Standards and Technology
While smaller than the other agencies covered here, NIST plays a critical role in the U.S. industrial ecosystem as the lead agency in measurement science and standards-setting, as well as a funder of world-class physical science research and user facilities. NIST R&D activities cover several CHIPS and Science technology priorities including cybersecurity, advanced communications, AI, quantum science, and biotechnology. NIST also boasts a wide-ranging system of manufacturing extension centers in all 50 states and Puerto Rico, which help thousands of U.S. manufacturers grow and innovate every year.

As seen in Table 6, the NIST topline in the FY 2023 omnibus was $103 million above the CHIPS-authorized level. However, as noted in the above section, NIST received $395 million in Congressionally directed spending or earmarks in FY 2023, mainly for construction projects. Excluding earmarks, the NIST topline amounted to $1.3 billion, a $292 million or 19% shortfall below the authorization. The FY 2024 request is $20 million below the FY 2024 authorized level, with shortfalls in NIST labs programs and industrial technology.
Scientific and Technical Research Services (STRS) is the account for NIST’s national measurement and standards laboratories, which pursue a wide variety of CHIPS and Science-relevant activities in cybersecurity, AI, quantum information science, advanced communications, engineering biology, resilient infrastructure, and other realms. STRS also funds two user facilities, the NIST Center for Neutron Research and the Center for Nanoscale Science and Technology. In addition to FY 2024 investments in climate resilient infrastructure, research instrumentation, and other topics, large CHIPS and Science-relevant program increases include:
- Critical Technology Research ($20 million / 15% increase): In FY 2024, NIST lab programs will seek expanded investment in AI, quantum information science, biotechnology, and advanced communications. This will enable the establishment of AI technology testbeds, improve quantum metrology and support quantum technology development, promote rapid development and translation of biotechnologies and biomanufacturing processes, and advance measurement science and standards for next-generation communications.
- Cybersecurity and Privacy ($20 million / 21% increase): Funding will support research, standards development, and demonstrations of solutions through NIST’s National Cybersecurity Center of Excellence. These activities will touch on an array of critical areas including biometrics, cryptography, Internet of Things devices, and others. NIST will also continue to support cybersecurity workforce development. In addition to the above, NIST also requests $4 million for cybersecurity-relevant activities related to trustworthy supply chains.
Industrial Technology Services is the overarching account funding the Hollings Manufacturing Extension Partnership (MEP) and the Manufacturing USA innovation network. As can be seen in Table 6, these programs collectively faced a much greater authorization shortfall than NIST lab programs in the FY 2023 omnibus, while the FY 2024 request goes to great lengths to increase their funding.
- Hollings MEP would use the FY 2024 boost to support the National Supply Chain Optimization and Intelligence Network, to be established in FY 2023, and to expand workforce upskilling, apprenticeships, and partnerships with Historically Black Colleges and Universities, Minority-Serving Institutions, and community colleges.
- Manufacturing USA would establish testbeds throughout the network of manufacturing innovation institutes established by the Departments of Defense and Energy, support a new NIST-sponsored institute to be competed in FY 2023, and further assist small manufacturers with prototyping and scaling of new technologies.

Future Updates
The Federation of American Scientists will update this report as the relevant FY 2024 appropriations bills move through the Congressional process.
Five Ideas for the Education Sciences Reform Act
Earlier this month, the Senate Health, Education, Labor, and Pensions (HELP) committee called on the education community for input on policies to include in a reauthorized Education Sciences Reform Act (ESRA). First enacted in 2002 and last reauthorized in 2008, the ESRA established the Institute of Education Sciences (IES) as the independent research branch of the Department of Education and broadly authorized the federal government to conduct coordinated, scientifically based research on the U.S. education system. The potential reauthorization of the ESRA by the 118th Congress marks a major opportunity to update and streamline our education research and development (R&D) ecosystem for the modern era.
The Alliance for Learning Innovation (ALI) Coalition, which FAS helps lead, was pleased to submit a response to the Senate HELP committee’s request (read it in full here). The ALI Coalition brings together education nonprofits, philanthropy, and the private sector to advocate for building a better education R&D infrastructure that is based in evidence, centers students and practitioners, advances equity, improves talent pathways, and expands America’s globally competitive workforce.
ALI sees great promise in a robust, inclusive, and updated education R&D ecosystem, with the IES playing a key role. If the 118th Congress decides to reauthorize the ESRA, ALI urges the HELP committee to strengthen our education system by prioritizing the following policies:
Support informed-risk, high-reward research and development, especially with respect to development. Congress should create a National Center for Advanced Development in Education (NCADE), which would catalyze breakthroughs in education research and innovation much as the DARPA model accelerated emerging defense technologies. NCADE would fund informed-risk, high-reward projects developed by universities, nonprofits, industry, or other innovative organizations.
Enhance federal, state, and local education R&D infrastructure. Congress should direct and support IES to research the development of innovative approaches and technologies that improve teaching and learning. IES should also encourage information and data sharing between states by expanding and modernizing the Statewide Longitudinal Data Systems (SLDS) program and providing other forums for interstate connection.
Support the development of diverse education R&D talent. IES should dedicate specific research grant programs for Historically Black Colleges and Universities (HBCUs), Minority-Serving Institutions (MSIs), and Tribally Controlled Colleges and Universities (TCCUs). Additionally, IES should offer "data science fluency training grants" to academic researchers, especially at HBCUs, MSIs, and TCCUs, and establish a "rotator program" that would bring in outside experts with advanced skills to complement those of its current staff.
Drive collaboration between IES, NSF, and other federal agencies. Congress should encourage IES and the new Technology, Innovation, and Partnerships (TIP) Directorate at NSF to collaborate on and support R&D programs that enhance research on teaching and learning with emerging technologies, creating efficiencies and improving outcomes.
Promote data privacy. ALI believes the ESRA reauthorization should remain separate from attempts to improve the Family Educational Rights and Privacy Act (FERPA). However, Congress should update the ESRA to strengthen the U.S. Department of Education's Privacy Technical Assistance Center (PTAC).
The ALI Coalition knows that a potential ESRA reauthorization is a crucial inflection point for American education. We hope to see Congress strengthen our country’s commitment to education R&D so we can better embrace innovative, evidence-based practices that improve learning outcomes.
How Do OpenAI’s Efforts To Make GPT-4 “Safer” Stack Up Against The NIST AI Risk Management Framework?
In March, OpenAI released GPT-4, another milestone in a wave of recent AI progress. This is OpenAI’s most advanced model yet, and it’s already being deployed broadly to millions of users and businesses, with the potential for drastic effects across a range of industries.
But before releasing a new, powerful system like GPT-4 to millions of users, a crucial question is: "How can we know that this system is safe, trustworthy, and reliable enough to be released?" Currently, this is a question that leading AI labs are, for the most part, free to answer on their own. But the issue has increasingly garnered attention as many have become worried that current pre-deployment risk assessment and mitigation methods, like those used by OpenAI, are insufficient to prevent potential harms, including the spread of misinformation at scale, the entrenchment of societal inequities, misuse by bad actors, and catastrophic accidents.
This concern is central to a recent open letter, signed by several leading machine learning (ML) researchers and industry leaders, which calls for a 6-month pause on the training of AI systems “more powerful” than GPT-4 to allow more time for, among other things, the development of strong standards which would “ensure that systems adhering to them are safe beyond a reasonable doubt” before deployment. There’s a lot of disagreement over this letter, from experts who contest the letter’s basic narrative, to others who think that the pause is “a terrible idea” because it would unnecessarily halt beneficial innovation (not to mention that it would be impossible to implement). But almost all of the participants in this conversation tend to agree, pause or no, that the question of how to assess and manage risks of an AI system before actually deploying it is an important one.
A natural place to look for guidance here is the National Institute of Standards and Technology (NIST), which released its AI Risk Management Framework (AI RMF) and an associated playbook in January. NIST is leading the government’s work to set technical standards and consensus guidelines for managing risks from AI systems, and some cite its standard-setting work as a potential basis for future regulatory efforts.
In this piece we walk through what OpenAI actually did to test and improve GPT-4's safety before deciding to release it, the limitations of this approach, and how it compares to current best practices recommended by NIST. We conclude with some recommendations for Congress, NIST, industry labs like OpenAI, and funders.
What did OpenAI do before deploying GPT-4?
OpenAI claims to have taken several steps to make its system "safer and more aligned." What were those steps? They are described in the GPT-4 "system card," a document that outlines how OpenAI managed and mitigated risks from GPT-4 before deploying it. Here's a simplified version of what that process looked like:
- They brought in over 50 "red-teamers": outside experts across a range of domains who tested the model, poking and prodding at it to find ways it could fail or cause harm. (Could it "hallucinate" in ways that would contribute to massive amounts of cheaply produced misinformation? Would it produce biased or discriminatory outputs? Could it help bad actors produce harmful pathogens? Could it make plans to gain power of its own?)
- Where red-teamers found ways that the model went off the rails, OpenAI could train out many instances of undesired behavior via Reinforcement Learning from Human Feedback (RLHF), a process in which human raters give feedback on the model's outputs (both through human-written examples of how the model should respond to a given type of input, and with "thumbs-up, thumbs-down" ratings on model-generated outputs). The model was thus adjusted to be more likely to give the kinds of answers that raters scored positively, and less likely to give the kinds that scored poorly. A toy sketch of this preference-learning step appears below.
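To make the mechanism a bit more concrete, here is a deliberately tiny sketch of the preference-learning idea behind RLHF. It is not OpenAI's implementation: real systems train a neural reward model over model outputs and then fine-tune the language model against that reward with reinforcement learning. The feature choices, example texts, and training loop below are invented purely for illustration of how pairwise "this answer over that one" judgments can become a score that pushes a model toward preferred outputs.

```python
# Minimal, hypothetical sketch of the preference-learning step behind RLHF.
# Real systems learn a neural reward model and fine-tune the language model
# against it with reinforcement learning; everything here is a toy stand-in.
import math
import random

def features(text):
    """Toy 'output features': counts of words a rater might react to."""
    words = text.lower().split()
    return {
        "hedges": sum(w in {"might", "may", "possibly"} for w in words),
        "insults": sum(w in {"stupid", "idiot"} for w in words),
        "length": len(words) / 10.0,
    }

def score(weights, text):
    """Toy reward model: a linear score over the features above."""
    f = features(text)
    return sum(weights[k] * f[k] for k in weights)

def train_reward_model(preferences, lr=0.5, epochs=200):
    """Fit weights so preferred outputs score higher (a Bradley-Terry-style logistic loss)."""
    weights = {"hedges": 0.0, "insults": 0.0, "length": 0.0}
    for _ in range(epochs):
        random.shuffle(preferences)
        for preferred, rejected in preferences:
            margin = score(weights, preferred) - score(weights, rejected)
            p = 1.0 / (1.0 + math.exp(-margin))   # P(preferred beats rejected)
            grad_scale = 1.0 - p                  # gradient of the log-likelihood
            fp, fr = features(preferred), features(rejected)
            for k in weights:
                weights[k] += lr * grad_scale * (fp[k] - fr[k])
    return weights

# Hypothetical rater data: (output the rater preferred, output they rejected).
preferences = [
    ("You might want to check with a doctor.", "You are stupid for asking."),
    ("That possibly depends on context.", "Obviously you idiot."),
]

weights = train_reward_model(preferences)
candidates = ["You might possibly ask an expert.", "Do not be an idiot about it."]
# In real RLHF, scores like these become the reward signal used to fine-tune the model.
print(sorted(candidates, key=lambda c: score(weights, c), reverse=True))
```

In OpenAI's actual process, the analogue of these toy weights is a learned reward model, and the analogue of the final scoring step is reinforcement learning fine-tuning that nudges GPT-4 toward the kinds of outputs its raters preferred.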
Was this enough?
Though OpenAI says they significantly reduced the rates of undesired model behavior through the above process, the controls put in place are not robust, and methods for mitigating bad model behavior are still leaky and imperfect.
OpenAI did not eliminate the risks they identified. The system card documents numerous failures of the current version of GPT-4, including an example in which it agrees to “generate a program calculating attractiveness as a function of gender and race.”
Current efforts to measure risks also need work, according to GPT-4 red-teamers. The Alignment Research Center (ARC), which assessed these models for "emergent" risks, says that "the testing we've done so far is insufficient for many reasons, but we hope that the rigor of evaluations will scale up as AI systems become more capable." Another GPT-4 red-teamer, Aviv Ovadya, says that "if red-teaming GPT-4 taught me anything, it is that red teaming alone is not enough." Ovadya recommends that future pre-deployment risk assessment efforts be improved through "violet teaming," in which companies identify "how a system (e.g., GPT-4) might harm an institution or public good, and then support the development of tools using that same system to defend the institution or public good."
Since current efforts to measure and mitigate risks of advanced systems are not perfect, the question comes down to when they are "good enough." What levels of risk are acceptable? Today, industry labs like OpenAI can mostly rely on their own judgment when answering this question, but there are many different standards that could be used. Amba Kak, the executive director of the AI Now Institute, suggests a more stringent standard, arguing that regulators should require AI companies "to prove that they're going to do no harm" before releasing a system. Meeting such a standard would require new, much more systematic risk measurement and management approaches.
How did OpenAI’s efforts map onto NIST’s Risk Management Framework?
NIST’s AI RMF Core consists of four main “functions,” broad outcomes which AI developers can aim for as they develop and deploy their systems: map, measure, manage, and govern.
Framework users can map the overall context in which a system will be used to determine relevant risks that should be “on their radar” in that identified context. They can then measure identified risks quantitatively or qualitatively, before finally managing them, acting to mitigate risks based on projected impact. The govern function is about having a well-functioning culture of risk management to support effective implementation of the three other functions.
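To make the four functions more tangible, here is a hypothetical sketch of how an organization might track a single risk through map, measure, and manage, with govern represented as process metadata. NIST does not prescribe any data schema, thresholds, or risk tolerance; the class names, fields, and numbers below are invented purely as a reading aid.

```python
# A hypothetical illustration of the four AI RMF Core functions as a simple
# risk register. NIST does not prescribe any such schema or risk tolerance.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Risk:
    # MAP: describe the risk and the deployment context in which it arises.
    description: str
    context: str
    # MEASURE: a qualitative or quantitative estimate (here, a 0.0-1.0 severity).
    estimated_severity: Optional[float] = None
    # MANAGE: mitigations applied and the severity remaining after them.
    mitigations: List[str] = field(default_factory=list)
    residual_severity: Optional[float] = None

@dataclass
class RiskRegister:
    # GOVERN: the organizational process that supports the other three functions.
    owner: str
    risk_tolerance: float  # chosen by the organization, not by NIST
    risks: List[Risk] = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        """Go/no-go check: every mapped risk has been measured, managed,
        and brought within the organization's stated tolerance."""
        return all(
            r.residual_severity is not None
            and r.residual_severity <= self.risk_tolerance
            for r in self.risks
        )

# Hypothetical usage for a general-purpose chatbot:
register = RiskRegister(owner="safety team", risk_tolerance=0.2)
register.risks.append(Risk(
    description="model produces discriminatory outputs",
    context="general-purpose assistant available to the public",
    estimated_severity=0.6,
    mitigations=["red-team probing", "RLHF fine-tuning"],
    residual_severity=0.3,
))
print(register.ready_to_deploy())  # False: residual severity is still above tolerance
```

The point of the sketch is only that the functions build on one another: a risk has to be mapped before it can be measured, measured before it can be managed, and the whole exercise depends on a governing process that decides what counts as acceptable.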
Looking back to OpenAI’s process before releasing GPT-4, we can see how their actions would align with each function in the RMF Core. This is not to say that OpenAI applied the RMF in its work; we’re merely trying to assess how their efforts might align with the RMF.
- They first mapped risks by identifying areas for red-teamers to investigate, based on domains where language models had caused harm in the past and areas that seemed intuitively likely to be particularly impactful.
- They aimed to measure these risks, largely through the qualitative, “red-teaming” efforts described above, though they also describe using internal quantitative evaluations for some categories of risk such as “hate speech” or “self-harm advice”.
- And to manage these risks, they relied on Reinforcement Learning from Human Feedback, along with other interventions such as shaping the original training dataset and adding some explicitly "programmed in" behaviors that don't rely on RLHF.
Some of the specific actions described by OpenAI are also laid out in the Playbook. The Measure 2.7 function highlights “red-teaming” activities as a way to assess an AI system’s “security and resilience,” for example.
NIST’s resources provide a helpful overview of considerations and best practices that can be taken into account when managing AI risks, but they are not currently designed to provide concrete standards or metrics by which one can assess whether the practices taken by a given lab are “adequate.” In order to develop such standards, more work would be needed. To give some examples of current guidance that could be clarified or made more concrete:
- NIST recommends that AI actors "regularly evaluate failure costs to inform go/no-go deployment decisions throughout the AI system lifecycle." How often is "regularly"? What kinds of "failure costs" are too much? Some of this will depend on the ultimate use case: our risk tolerance for a sentiment analysis model may be far higher than for a medical decision support system.
- NIST recommends that AI developers aim to understand and document “intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed.” For a system like GPT-4, which is being deployed broadly and which could have use cases across a huge number of domains, the relevant context appears far too vast to be cleanly “established and understood,” unless this is done at a very high level of abstraction.
- NIST recommends that AI actors make “a determination as to whether the AI system achieves its intended purpose and stated objectives and whether its development or deployment should proceed”. Again, this is hard to define: what is the intended purpose of a large language model like GPT-4? Its creators generally don’t expect to know the full range of its potential use cases at the time that it’s released, posing further challenges in making such determinations.
- NIST describes explainability and interpretability as core features of trustworthy AI systems. OpenAI does not describe GPT-4 as being interpretable. The model can be prompted to generate explanations of its outputs, but we don't know how well these model-generated explanations reflect the system's actual internal process for generating those outputs.
So, across NIST's AI RMF, while determining whether a given "outcome" has been achieved can be up for debate, nothing stops developers from going above and beyond the perceived minimum (and we believe they should). This is not a bug of the framework as currently designed but a feature, as the RMF "does not prescribe risk tolerance." Still, more work is needed to establish both stricter guidelines that leading labs can follow to mitigate risks from leading AI systems, and concrete standards and methods for measuring risk on which regulations could be built.
Recommendations
There are a few ways that standards for pre-deployment risk assessment and mitigation for frontier systems can be improved:
Congress
- Congress should appropriate additional funds to NIST to expand its capacity for work on risk measurement and management of frontier AI systems.
NIST
- Industry best practices: With additional funding, NIST could provide more detailed guidance based on industry best practices for measuring and managing risks of frontier AI systems, for example by collecting and comparing efforts of leading AI developers. NIST could also look for ways to get “ahead of the curve” on risk management practices, rather than just collecting existing industry practice, for example by exploring new, less well-tested practices such as violet teaming.
- Metrics: NIST could also provide more concrete metrics and benchmarks by which to assess whether functions in the RMF have been adequately achieved.
- Testbeds: Section 10232 of the CHIPS and Science Act authorized NIST to "establish testbeds […] to support the development of robust and trustworthy artificial intelligence and machine learning systems." With additional funds appropriated, NIST could develop a centralized, voluntary set of testbeds to assess frontier AI systems for risks, thereby encouraging more rigorous pre-deployment model evaluations. Such efforts could build on existing language model evaluation techniques, e.g. the Holistic Evaluation of Language Models from Stanford's Center for Research on Foundation Models.
Industry Labs
- Leading industry labs should aim to provide more insight to government standard-setters like NIST on how they manage risks from their AI systems, including by clearly outlining their safety practices and mitigation efforts (as OpenAI did in the GPT-4 system card), how well these practices work, and ways in which they could still fail in the future.
- Labs should also aim to incorporate more public feedback in their risk management process, to determine what levels of risk are acceptable when deploying systems with potential for broad public impact.
- Labs should aim to go beyond the NIST AI RMF 1.0. This will further help NIST in assessing new risk management strategies that are not part of the current RMF but could be part of RMF 2.0.
Funders
- Government funders like NSF and private philanthropic grantmakers should fund researchers to develop metrics and techniques for assessing and mitigating risks from frontier AI systems. Currently, few people focus on this work professionally, and funding more research in this area could have broad benefits by encouraging the development of better risk management practices and metrics for frontier AI systems.
- Funders should also make grants to AI projects conditional on these projects following current best practices as described in the NIST AI RMF.
State Department Must Urgently Update the Exchange Visitor Skills List To Safeguard American Interests
The J-1 visa Exchange Visitor Program, a pivotal mechanism for fostering global cultural exchange and the dissemination of specialized knowledge, has remained stagnant since its last renewal on April 30, 2009, roughly 14 years ago. This outdated framework inadvertently undermines American interests, a problem glaringly evident in how it affects Chinese exchange students and researchers in Science, Technology, Engineering, and Mathematics (STEM) disciplines.
The Exchange Visitor Skills List identifies fields of specialized knowledge and skills deemed essential to the development of a participant's country of origin. Under Section 212(e) of the Immigration and Nationality Act, when a skill appears on the State Department's list (the "Skills List" hereafter) as necessary for a particular country's development, exchange visitors from that country must spend two years in their home country after completing their program. The rationale behind this rule is sound: it fosters learning and contributions to both the United States and countries of origin. But the obsolescence of the Skills List generates unintended consequences that ultimately jeopardize U.S. national interests.
China has made tremendous progress in cultivating its STEM competencies over the past decade. Indeed, China now produces twice as many STEM graduates from master's programs as the United States and is projected to graduate twice as many STEM PhD holders by 2025. The Australian Strategic Policy Institute further finds that China leads internationally in 37 of 44 critical technology sectors. Despite these advances, outdated regulations continue to include computer science and numerous other STEM fields on the Skills List for China.
Because of this oversight, Chinese students drawn to American institutions are subject to the two-year home-country physical presence requirement upon completing their programs. As a result, they spend those two critical years after their education applying their newfound expertise in Chinese industry rather than contributing to U.S. research enterprises or companies.
Recognizing this dynamic, the Chinese government has begun refusing to grant waivers for students subject to this requirement, shrewdly exploiting America's outdated policy.
Mitigating this problem requires decisive government action. Revising the Exchange Visitor Skills List would correct these disparities and enable skilled international workers to collaborate more effectively with the United States. Removing China's STEM fields from the Skills List, for instance, would ensure a more level playing field for emerging global research in these critical areas. National capabilities around the world are evolving quickly; a regular, systematic, data-informed update process would help the United States preserve its competitive advantage in vital industries while nurturing amicable international relations.
The urgency of modernizing the Exchange Visitor Skills List cannot be overstated. Bringing the list up to date would allow the United States to harness, with reciprocity, the energy and intellect of gifted minds crossing borders while defending and advancing American interests.