Since the mid-2010s, technology has been the topic du jour in Washington, DC policy circles. Trump’s crusade against Huawei and Biden’s war on China’s semiconductor industry reflect the new centrality of technology to US domestic politics and foreign policy alike. With potential applications across nearly every level of government and society, critical and emerging technologies have induced a sense of urgency. Efforts to capitalize on them are fueled both by the domestic benefits they promise and by the geopolitical imperative of remaining ahead of the innovation, adoption, and implementation curve.
These efforts intersect with a 2024 electoral rematch between current President Joseph Biden and former President Donald Trump. Given the expectation that one of these two candidates will be the next US President, it is worthwhile to assess the likely trajectory of US Executive Branch technology policymaking in the 2025 to 2029 period.
The two major strategic technologies most subject to policy change following the 2024 presidential election are artificial intelligence (AI) and quantum information sciences (QIS). The case for focusing on these—rather than, say, robotics or 5G/6G—rests on the following observations: AI and QIS are widely considered to be of significant geostrategic importance, with potentially transformative economic and defense applications. Moreover, these technology sectors are generating a disproportionate share of industry and commercial interest, coinciding with an intense realignment of the US government’s attitude toward Big Tech and industrial policy. Finally, because we have the unusual benefit of knowing how both Trump and Biden conduct policymaking from the Executive Branch and how each interacts with Congress, technology policy prospects for the 2025 to 2029 period can be articulated with reasonable confidence.
(1) The Trump and Biden administrations exhibit significant overlap in their priorities and agendas in AI and QIS policymaking, particularly in their uses of Executive Orders and the allocation of funding for research and development by federal agencies. The Biden administration has frequently expanded on initiatives developed during the Trump administration (though not without some substantial divergences). In AI, ensuring American leadership has been a shared priority, with each administration pursuing “trustworthy” AI systems within government, linking the work of the National AI Initiative to the public, widening the use of export controls to curb the development of advanced computing technologies by Chinese companies, and collaborating with industry leaders. A plausible case can be made that the Biden administration’s expansion of export controls and complementary outbound investment regulations are part of a tech containment strategy initially formulated within the Trump administration.
In QIS, while executive-level policy is comparatively less wide-ranging than in AI, one finds broad underlying agreement in priorities and agendas. The National Quantum Initiative Act and the National Quantum Initiative Advisory Committee, both established under the Trump administration, were built upon by the Biden administration, which placed the Committee directly under the White House’s authority and “recommitted” to the Trump-era National Quantum Initiative via the CHIPS and Science Act of 2022. Finally, both administrations have leveraged the research capacities of federal agencies, notably the Department of Energy and the National Science Foundation.
Policy divergences between the two administrations in these areas do exist, to be sure, and they go beyond mere rhetorical differences. Collaboration with international allies and partners, as well as the vast scope of President Biden’s October 2023 AI Executive Order, likely represent significant sources of political disagreement.
(2) While existing precedents for technology policymaking by these administrations offer the most reliable lens through which to project into the future, deteriorating political dynamics in the United States present notable uncertainties. This deterioration makes it harder to consistently formulate Executive Branch technology policy and produces fruitless Congressional-Executive negotiations born of domestic political constraints. As the delay in the passage of security assistance to Ukraine due to failed negotiations over US border security legislation highlights, independent and intractable policy debates are increasingly linked, potentially hampering both in the process.
The implication for technology policymaking in the 2025 to 2029 period is that, even where the next administration may have an underlying policy agreement with the previous two, the possibility that its development and implementation will be negatively affected by seemingly unrelated political disputes cannot be discounted. As we highlight, high-skilled immigration reform (or lack thereof) is one potential uncertainty in this respect, among others.
AI became a mammoth policy topic in 2023. The field’s momentum has been intensifying since the early 2010s, but the world’s introduction to OpenAI’s ChatGPT in November 2022 triggered a broad realization: This technology may finally be ready for primetime.
A focus on the surge of AI policy interest in 2023, however, risks overshadowing a longer trend: how the Trump and Biden administrations have grappled with incentivizing innovation in this technology while regulating it, and what that bodes for future Executive Branch developments. Indeed, as Hadrien Pouget and Matt O’Shaughnessy noted in May 2023, the two administrations’ actions have substantive “overlap” once surface-level rhetoric is put aside.
Early references to AI in the Trump era can be found in the administration’s December 2017 National Security Strategy, which emphasized leadership in technology and innovation across “data science, encryption, autonomous technologies, gene editing, new materials, nanotechnology, advanced computing technologies, and artificial intelligence.” The rhetorical orientation here is the effort to “maintain our competitive advantage.”
The pace of AI-related Executive policymaking later accelerated. In February 2019, President Trump signed an Executive Order (EO) focused on maintaining American leadership in the innovation and deployment of AI, with explicit emphasis on the protection of “American technology, economic and national security, civil liberties, privacy, and American values and enhancing international and industry collaboration with foreign partners and allies.” This EO established the American Artificial Intelligence Initiative.
The EO detailed a whole-of-society approach to technological breakthroughs (S1), the development of technical standards for AI development and use (S1 & S2), the need for workforce training (S7), the promotion of public trust (S1, S2, & S6), and international leadership in AI. Per the EO’s requirement, this was followed in November 2020 by a detailed memorandum from the Office of Management and Budget to federal agency and department heads.
Finally, a December 2020 EO signed by Trump was aimed directly at “trustworthy” AI, placing the burden of responsibility for its uses on federal government agencies.
President Biden’s administration has accelerated the pace of Executive actions on AI, frequently building from Trump-era efforts. An early example, in May 2021, is the launch of ai.gov through the White House Office of Science and Technology Policy (OSTP), a website designed to link the public with federal government activities on trustworthy AI (highlighting, for example, the work surrounding the National AI Initiative).
In October 2022, the OSTP published a Blueprint for an AI Bill of Rights: a document without legally binding force that nonetheless aims to set the agenda for the protection of Americans from harmful AI-generated outcomes. It is premised upon five principles: (1) Safe and Effective Systems; (2) Algorithmic Discrimination Protections; (3) Data Privacy; (4) Notice and Explanation; and (5) Human Alternatives, Consideration, and Fallback.
Two Biden administration efforts are especially comprehensive. The most recent is an October 2023 EO on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This EO is sweeping, representing a culmination of months of meetings held by White House officials with industry leaders, advocacy groups, and the like, some of which pre-date ChatGPT.
A sampling of notable components includes an instruction to the Department of Commerce to develop standards for the authentication and watermarking of AI-generated content (e.g., deepfakes) (S4.5) and instructions to the Secretaries of State and Homeland Security to modernize immigration pathways for top AI talent (S5.1).
Most dramatically, the EO subjects companies that are developing or planning to develop “dual-use foundation models” above a specified computational threshold (measured in floating-point operations) to testing and reporting requirements through the Commerce Department. Companies that breach this threshold must report to the federal government on current or planned training, development, and production of models; ownership and possession of model weights; and the results of red-team testing conducted under National Institute of Standards and Technology guidelines (S4.2). This mandate, which invokes the 1950 Defense Production Act, is designed to mitigate biological and nuclear risks. (Importantly, no existing model breaches the threshold, which may instead be intended as a signal for next-generation models.)
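For a sense of scale, the brief Python sketch below is a rough, illustrative calculation only. It takes from the EO only the interim reporting threshold of 10^26 operations set in Section 4.2(b); everything else—the widely used rule of thumb of roughly six operations per parameter per training token, and the hypothetical model sizes—is an assumption for illustration, not a description of any real system.

```python
# Illustrative sketch: would a hypothetical training run exceed the EO's interim
# reporting threshold of 1e26 operations (Sec. 4.2(b))? Training compute is
# approximated with the common "6 * parameters * training tokens" heuristic.

REPORTING_THRESHOLD_OPS = 1e26  # interim threshold for general dual-use foundation models


def estimated_training_ops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * num_parameters * num_training_tokens


def must_report(num_parameters: float, num_training_tokens: float) -> bool:
    """True if estimated training compute exceeds the EO's reporting threshold."""
    return estimated_training_ops(num_parameters, num_training_tokens) > REPORTING_THRESHOLD_OPS


if __name__ == "__main__":
    # Hypothetical next-generation model: ~1.8 trillion parameters, ~13 trillion tokens.
    # 6 * 1.8e12 * 13e12 is roughly 1.4e26 operations -- above the threshold.
    print(must_report(1.8e12, 13e12))  # True

    # Hypothetical current-generation model: 70 billion parameters, 2 trillion tokens.
    # 6 * 7e10 * 2e12 is roughly 8.4e23 operations -- well below it.
    print(must_report(7e10, 2e12))  # False
```

A compute-based trigger of this kind is simple to evaluate in advance, but, as discussed later, it is also the sort of specific mandate that future modeling paradigms could render short-lived.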
This EO also provides numerous directions to the heads of Executive Branch departments to coordinate and collaborate with international allies and partners on AI (S11), including the Secretaries of State, Commerce, and Homeland Security.
The White House announced in April 2024 that all of the 180-day federal agency actions specified in the EO had been completed.
The second took place one year earlier, in October 2022, when the Biden administration released a sweeping set of export controls to restrict Chinese companies from obtaining or manufacturing advanced computer chips for AI applications, ostensibly those in the military domain. It leveraged the “foreign direct product rule,” initially used by the Trump administration against Chinese technology corporation Huawei. Indeed, Soliman observed in 2022 that the enlistment of export controls as a tool in Washington’s foreign policy toolkit is part of a “tech containment strategy.”
To complement these export controls, Biden signed an EO in August 2023 mandating that US entities notify the Department of the Treasury of, and obtain approval for, outbound investment in the sensitive technology sectors of “countries of concern.” The sensitive technology areas identified include “semiconductors and microelectronics, quantum information technologies, and artificial intelligence.” Notably, the federal government will require notifications for outbound investments in AI related to the design, fabrication, and packaging of less-advanced circuits and in cases where AI-enhanced software can have military or intelligence applications.
Finally, the Biden administration is also engaged in ongoing AI-related efforts in security and foreign policy.
The October 2022 National Security Strategy anticipates that a group of emerging technologies, including AI, will “transform warfare,” and, in line with this expectation, attributes substantial importance to tools such as export controls and a collaborative foreign policy approach with allies and partners.
In September 2021, the launch of the security partnership between the United States, United Kingdom, and Australia—AUKUS—included an explicit commitment in the leaders’ inaugural joint statement to collaboration and interoperability in cyber capabilities, AI, quantum technologies, and undersea capabilities. In addition, the Quadrilateral Security Dialogue, or Quad—consisting of the United States, India, Australia, and Japan—is an ongoing and potentially wide-ranging forum for cooperation on AI that links innovation to democratic values.
Quantum technologies have also triggered sustained interest within the Executive Branch. Although quantum computing takes center stage, this is only one technology under the broader category of QIS. Subfields in QIS include quantum computing, quantum sensing, and quantum communications.
As Sam Howell illustrates, the technical nature of each QIS technology matters for policymaking, particularly in the case of quantum computing. Suffice it to say here that quantum computing theoretically promises enhanced computing speeds and processing power over classical computers by representing information via “qubits” rather than “bits.”
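To make the contrast concrete, the standard textbook formulation (added here for illustration, not drawn from Howell’s piece) is that a single qubit can occupy a weighted superposition of both classical values at once, and an n-qubit register is described by 2^n complex amplitudes rather than n binary values:

```latex
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1
\]
\[
  |\Psi_{n}\rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\,|x\rangle,
  \qquad \sum_{x} |c_{x}|^{2} = 1
\]
```

Measurement collapses this state to a single classical outcome, which is why the sheer size of the state space does not by itself guarantee speedups; algorithm design and error correction, discussed below, determine how much of the theoretical advantage is realized.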
An effort to develop quantum computing that began in the 1980s has now taken on national security urgency, as such technology could be used—among other things—to compromise the encryption methods that secure financial information and private communications, as well as to enhance drug discovery and logistics planning.
In December 2018, President Trump signed the National Quantum Initiative Act into law, establishing the National Quantum Initiative. It provides umbrella authority to government agencies to develop and operate programs relevant to the US pursuit of quantum science and technologies. The Initiative aims to propel American quantum science and technology research and development with a “whole-of-government” approach. The law also authorized $1.2 billion over five years to develop a coordinated framework for advancing QIS technologies between federal research laboratories, academia, and the private sector.
In August 2019, Trump expanded on the Initiative by establishing, via an EO, the National Quantum Initiative Advisory Committee (invoking his authority under the aforementioned law). Per the EO, the Committee advises the Secretary of Energy and the Subcommittee on Quantum Information Science of the National Science and Technology Council, with three research and support-oriented functions.
Finally, in August 2020, the White House OSTP, National Science Foundation (NSF), and Department of Energy (DoE) announced awards totaling $1 billion for QIS and AI research over five years. These awards were allocated to seven institutes devoted to AI under the NSF and five institutions dedicated to quantum science under the direction of the DoE.
The Biden administration has frequently expanded directly on Trump-era QIS efforts. In May 2022, President Biden signed two presidential directives: one to put the National Quantum Initiative’s Advisory Committee directly under the White House’s authority and a second to mitigate security risks posed by quantum computing to cryptographic systems.
The CHIPS and Science Act of August 2022 demonstrates what Gregory Arcuri and Hideki Tomoshige call a “recommitment” to the National Quantum Initiative Act and ongoing federal government efforts in QIS. They highlight how several offices and institutions benefit from the law’s quantum components, including the DoE, National Institute of Standards and Technology, and the NSF.
The scope of this expansion is ambitious. Notable achievements include the DoE’s QUEST program devoted to procuring quantum computing cloud capacity over five years for scientific researchers, a $500 million DoE effort to build out a large-scale quantum network infrastructure, and the NSF Technology and Innovation Directorate’s prioritization of quantum computing. The CHIPS Act also authorized the establishment of regional technology hubs, two of which are currently designated for quantum hardware supply chain strengthening and quantum computing, communications, and related areas.
Finally, the same August 2023 EO on outbound investments outright prohibits investments involving the production of quantum computers and their components, the development of quantum sensors, and the development of quantum networking and communications systems within countries of concern.
Certain tools of the Executive Branch are likely to continue being employed by either Trump or Biden concerning AI. Export controls will continue to be used with wide latitude to prevent Chinese companies from acquiring advanced semiconductor and computing technologies. Notably, Commerce Secretary Gina Raimondo said in December 2023 that export controls on China will be a continuing reality that industry leaders must confront, with considerations for controls on the “most sophisticated AI and all the products that flow from that,” in addition to biotechnology and quantum computing. Raimondo has since emphasized that the United States is routinely monitoring for additional export control opportunities. In May 2024, the United States revoked export licenses that allowed Intel and Qualcomm to sell semiconductors to Huawei.
Outbound investment restrictions on sensitive technology areas in China are, furthermore, unlikely to be contested by the incoming administration. It is also unlikely that guidance related to the modernization of federal government agencies pertaining to the uses of AI, as well as that related to the protection of consumers from harmful AI-generated outcomes, will be a source of serious consternation, as both administrations have, with some variations, built on these themes. Finally, Executive responsibilities that arose in the wake of the CHIPS Act, such as designating regional technology hubs, are likely to be fulfilled—though, undue partisanship in funding allocation by Congress is conceivable.
Of course, this relative alignment may not be permanent. According to Nancy Scola, while there is significant interest in both the Biden and Trump camps on AI and antitrust, it is not clear “who Trump is going to be listening to” in building a tech agenda, nor that he will remain consistent in his approach.
Three notable uncertainties stand out.
The first is Biden’s sweeping October 2023 EO on safe, secure, and trustworthy AI. Should Trump re-take the White House, it is unclear how the document will be received. Many parts of the EO—say, developing standards for the authentication of synthetic content—may be maintained. Others, like the invocation of the Defense Production Act to mandate advanced AI testing and reporting, may be repealed in subsequent EOs, particularly if they are viewed as a bridge too far for the federal government’s involvement in private sector innovation (Trump’s record with the law is decidedly mixed and politically contingent, with a mismatch between rhetoric and uses).
In addition, it is unclear how a future Trump administration would treat this EO’s directives to Executive Branch departments to collaborate with international allies and partners.
One plausible projection observes that these two administrations thus far have put forward visions that differ in rhetoric but not in fundamental substance. For example, Trump’s February 2019 EO mandates that the United States “must promote an international environment that supports American AI research and innovation and opens markets for American AI industries” (S1, e). The rhetoric is America First, but it shares significant substance with Biden’s emphasis on allied cooperation.
Another interpretation, however, is that rhetoric and substance will increasingly converge in a second Trump administration due to changes in its makeup. Either path is plausible in the 2025 to 2029 term, depending on staffing.
The second uncertainty is immigration gridlock. As Vivek Chilukuri and Howell argue, the fulfillment of the CHIPS Act’s potential depends on the ability of the United States to both attract and retain 300,000 more engineers by 2030 than American universities will produce in that timeframe. As the failed negotiations over US border policy show, even under a sympathetic presidential administration, immigration remains a politically intractable issue.
Finally, a future Trump or Biden administration may be caught off guard by unexpected breakthroughs in AI that fall outside of the existing generative AI, or even machine learning, zeitgeist, for which the October 2023 EO is designed. Much of both administrations’ AI policymaking will be broadly applicable to models across government and commercial applications. But specific mandates—such as those related to the EO’s nod to next-generation systems’ total Floating-Point Operations during training—may be short-lived.
Paradigms like neuro-symbolic AI—which combines traditional (symbolic) and current (machine learning) approaches to AI—are emerging just as the United States constructs a regulatory apparatus and enters a critical election cycle. This could lead to breakthroughs for which such regulatory efforts are simply not designed. Indeed, Martijn Rasser and Kevin Wolf observed in December 2022 that this approach is one element of uncertainty in assessing the utility of US semiconductor and computing equipment export controls (according to them, the usefulness of such controls outweighs this uncertainty).
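As a purely illustrative sketch (not a description of any actual research system), the toy Python snippet below shows the basic neuro-symbolic pattern: a statistical, “learned” component proposes answers, and a symbolic, rule-based component constrains or overrides them. Every function and rule here is a hypothetical stand-in.

```python
# Toy illustration of the neuro-symbolic pattern: a learned (statistical) component
# proposes answers, and a symbolic (rule-based) component constrains or corrects them.
# Entirely hypothetical; real systems pair neural networks with formal logic engines.

from typing import Dict


def learned_scores(query: str) -> Dict[str, float]:
    """Stand-in for a neural model: returns candidate claims with confidence scores."""
    return {"penguin can fly": 0.62, "penguin is a bird": 0.97}


SYMBOLIC_RULES = {
    # Hand-written knowledge that the statistical component cannot override.
    "penguin can fly": False,
    "penguin is a bird": True,
}


def neuro_symbolic_answer(query: str) -> Dict[str, bool]:
    """Accept a learned candidate only where no symbolic rule contradicts it."""
    answers = {}
    for claim, score in learned_scores(query).items():
        if claim in SYMBOLIC_RULES:
            answers[claim] = SYMBOLIC_RULES[claim]  # rule overrides the model
        else:
            answers[claim] = score >= 0.5           # otherwise trust the model
    return answers


if __name__ == "__main__":
    print(neuro_symbolic_answer("what do we know about penguins?"))
    # {'penguin can fly': False, 'penguin is a bird': True}
```

Because the symbolic layer relies on explicit rules rather than trained weights, such hybrids may depend less heavily on raw compute, which is presumably one reason Rasser and Wolf flag the approach as a source of uncertainty for compute-focused export controls.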
QIS technologies are less politically salient than AI. QIS policymaking, then, is comparatively less wide-ranging than AI’s.
Quantum sensing—where Howell identifies a five-year timeframe for the development of defense applications—is more pressing than quantum computing and thus is currently receiving category-wide export controls. Biden and Trump will likely remain aligned on a willingness to use export controls to slow down China’s progress in this domain.
Biden has, moreover, reaffirmed the work of the Trump-era National Quantum Initiative Advisory Committee by having it report directly to the White House. Neither Trump nor Biden will plausibly halt the “whole-of-government” approach to QIS research and development.
Finally, the bipartisan CHIPS Act has already expanded initiatives that began under the Trump administration, for example by broadening researcher access to quantum computing through the DoE’s QUEST program and building out quantum network infrastructure.
One potential curveball is a fundamental breakthrough in quantum computing. In a foreboding example, IBM recently announced the development of a 1,000-qubit quantum chip. Still, experts quickly noted that much of quantum computing’s promise remains theoretical, with better error-correction techniques needed to harness its power. Shortly thereafter, however, DARPA-funded research envisioned a more efficient way to correct errors in notoriously fragile qubit-based quantum computers.
Yet Meta’s Chief AI Scientist Yann LeCun and Meta senior fellow Mike Schroepfer have noted, respectively, that classical computers continue to offer more efficient problem-solving and that the commercialization of quantum computers is too far away to meaningfully conceptualize. Furthermore, Amazon Web Services’ Oskar Painter noted that robust, fault-tolerant techniques in quantum computing may be “at least a decade out.” Microsoft’s Matthias Troyer also argued that the practical, cost-effective applications of quantum computing may ultimately be more limited than commonly believed.
It is thus far from guaranteed that policymakers will face urgent breakthroughs in this area in the 2025 to 2029 period. Technologies including quantum sensors and communication may find comparatively more Executive interest instead.
AI and QIS technologies represent two sectors of geostrategic importance in which intensifying industry interest coincides with a realignment of US industrial policy and a critical presidential election.
Our assessment here indicates that the precedents set by the Trump and Biden administrations in AI and QIS policymaking frequently walk the same paths, with the latter often affirming and expanding initiatives established by the former.
Still, if Trump returns to the White House in 2025, uncertainties about the trajectory of policymaking in these areas are beginning to come into focus. Among them are the scope of Biden’s 2023 EO on safe, secure, and trustworthy AI, high-skilled immigration reform, and international collaboration with allies and partners.
There are, finally, technical uncertainties that could catch either potential administration off-guard, including breakthroughs outside of natural language processing—or machine learning altogether. Quantum computing, for its part, may lag behind advancements in quantum sensing and communications in the 2025 to 2029 period.