In a recent episode of “The Ezra Klein Show” published on March 4, 2025, in The New York Times, former White House special adviser for artificial intelligence Ben Buchanan made a startling claim: artificial general intelligence (AGI) is likely to arrive during Donald Trump’s second term, perhaps within the next two to three years[1]. This assertion challenges previous assumptions that AGI was five to fifteen years away and signals a potentially transformative moment in human history that governments, corporations, and individuals may be unprepared to navigate[1]. This analysis examines the claims, evidence, potential implications, and policy considerations surrounding the imminent arrival of AGI as presented in this influential conversation.
The Accelerating Timeline Toward Artificial General Intelligence
According to Buchanan, what has changed is not just the theoretical possibility of AGI but the timeline for its development. While experts previously estimated AGI might take five to fifteen years to develop, many now believe it could arrive within just two to three years[1]. This dramatic acceleration is based on insider observations of current AI capabilities and development trajectories within leading AI labs. As Buchanan puts it, “The trend line is even clearer now than it was then,” referencing his time in the White House, where he witnessed indicators pointing toward this acceleration[1].
Buchanan defines these “extraordinarily capable AI systems” as approaching the canonical definition of AGI: a system capable of performing almost any cognitive task a human can do[1]. While he dislikes the term AGI itself (repeatedly caveating his use of it), he acknowledges that systems matching this general description are approaching rapidly. This is not merely theoretical; concrete examples already demonstrate the trajectory. Klein mentions using Deep Research, an OpenAI product that can produce in minutes high-quality analytical research reports that would take human researchers days to compile[1].
The implications extend beyond specialized applications into fundamental economic transformation. As Klein observes, numerous firms report expectations that by the end of 2025 or early 2026, most code will be written not by human beings but by AI systems[1]. The rate of this transformation is particularly significant because, unlike previous technological revolutions such as electricity or railroads, which took decades to implement, AI capabilities are expanding at unprecedented speed[1].
National Security Dimensions: The Race With China
A central theme in the conversation is the perceived necessity for the United States to reach AGI before China does. Buchanan argues there are “profound economic, military and intelligence capabilities that would be downstream of getting to AGI or transformative AI,” making AI leadership a fundamental national security imperative[1]. He invokes John F. Kennedy’s 1962 Rice University speech about space exploration to illustrate the stakes: “Whether it will become a force for good or ill depends on man. And only if the United States occupies a position of pre-eminence can we help decide whether this new ocean will be a sea of peace or a new terrifying theater of war”[1].
The Biden administration implemented export controls on advanced semiconductor technology to China specifically to maintain America’s AI advantage. These controls have been controversial—semiconductor companies like Nvidia have argued they limit their market opportunities—but Buchanan defends them as essential for national security, even as he acknowledges they’ve driven China to develop its own computing supply chain[1][10].
The emergence of DeepSeek, a Chinese AI company that has produced competitive AI models reportedly with less computing power than American counterparts, raised concerns about the efficacy of these export controls[1][8]. Buchanan downplays this development, arguing that DeepSeek still represents algorithmic efficiency improvements rather than a fundamental breakthrough and that chip export restrictions remain effective and necessary[1].
The Biden Administration’s AI Policy Approach
Under President Biden, the administration developed a multifaceted approach to AI governance that sought to balance innovation with safety. As Buchanan explains, the administration established new institutions like the U.S. Artificial Intelligence Safety Institute to focus on safety concerns including cyber risks, biorisks, and accident risks[1]. The administration issued an executive order on AI and a national security memorandum addressing AI in defense applications[1].
Buchanan emphasizes that beyond a requirement for leading AI labs to share safety test results, most of the Biden administration’s approach was voluntary rather than regulatory[1]. The goal was to create a foundation for future governance while not impeding development. This foundation included export controls on advanced chips to China, creation of the AI Safety Institute, the executive order on AI, and efforts to accelerate domestic power development for AI infrastructure[1][11].
One crucial insight Buchanan offers is that AI represents a departure from previous revolutionary technologies because it is not funded by the Department of Defense. He notes, “This is the first revolutionary technology that is not funded by the Department of Defense, basically,” contrasting it with nuclear science, space technology, the early internet, microprocessors, and other historically significant innovations where government leadership shaped development[1].
The Trump Administration’s Shift in AI Policy

The transition to the Trump administration has brought significant changes in AI policy direction. In January 2025, President Trump signed an executive order revoking Biden’s AI executive order, characterizing it as imposing “onerous and unnecessary government control over the development of AI”[5]. The Trump order directs departments and agencies to “revise or rescind all policies, directives, regulations, orders, and other actions taken under the Biden AI order that are inconsistent with enhancing America’s leadership in AI”[5].
The philosophical shift was clearly articulated by Vice President J.D. Vance at an AI summit in Paris, where he stated: “When conferences like this convene to discuss a cutting-edge technology, oftentimes I think our response is to be too self-conscious, too risk averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite”[1]. This signals a more accelerationist approach to AI development.
The new administration brings influential tech figures into government, including Elon Musk and Marc Andreessen, who hold strong views about AI development. Andreessen has publicly criticized the Biden administration’s approach, claiming it wanted to limit AI to “two or three large companies” and eliminate startups in the space—a characterization Buchanan disputes[1].
The Safety Versus Acceleration Debate
A fundamental tension emerges in the conversation between approaches prioritizing AI safety and those advocating acceleration. Buchanan argues that these aren’t necessarily in conflict, citing historical examples where safety standards facilitated rather than hindered technological adoption. He points to railroad development in the late 1800s, where safety standards like block signaling, air brakes, and standardized tracks made the technology more widely adopted[1].
“If you look at the history of technology and technology adaptation, the evidence is pretty clear that the right amount of safety action unleashes opportunity and, in fact, unleashes speed,” Buchanan argues[1]. However, he acknowledges the risk of overregulation, noting the example of nuclear power, where safety concerns may have “strangled in the crib” a promising technology[1].
The accelerationist view, represented by figures in the Trump administration, emphasizes moving quickly and addressing problems as they arise rather than being overly cautious. Harry Glorikian, summarizing Klein and Buchanan’s conversation, notes the gravity of the situation: “Artificial General Intelligence (AGI)—the phrase might spark visions of science fiction, but… it’s fast becoming reality, and if that is the case the implications could be enormous”[2].
International Relations and AI Governance
The conversation reveals complex international dynamics around AI governance. While the U.S. competes fiercely with China on AI development, it has also engaged in dialogue on AI safety with Chinese counterparts. Buchanan mentions flying to Geneva to meet with Chinese representatives as part of an “AI dialogue”[1].
Relations with European allies also feature prominently. According to Buchanan, the Biden administration took a different approach than the European Union, which has developed the comprehensive EU AI Act. Vance’s speech in Paris signaled that the Trump administration would resist complex multilateral regulations that could slow American AI companies and suggested potential retaliation if European regulations penalized U.S. firms[1].
Before leaving office, the Biden administration organized gatherings of allied nations to discuss AI safety protocols. Officials from Canada, Kenya, Singapore, the United Kingdom, and the 27-member European Union were set to meet in San Francisco to address issues like detecting and preventing AI-generated deepfakes[11]. The future of such international cooperation under the Trump administration remains uncertain.
Labor Market Implications and Social Preparedness
One of the most pressing concerns about AGI is its potential impact on labor markets and on ordinary people’s security. Klein presses on how exposed everyday citizens may become, asking: “Are we about to enter a world where we are much more digitally vulnerable as normal people? And I’m not just talking about people whom the states might want to spy on. But you will get versions of these systems that all kinds of bad actors will have”[1].
Buchanan acknowledges the legitimacy of these concerns but admits the Biden administration had no comprehensive plan to address them: “We were thinking about this question. We knew it was not going to be a question we were going to confront in the president’s term. We knew it was a question that you would need Congress to do anything about”[1]. He also notes that any solution would need to preserve the “dignity that work brings”[1].
The conversation reveals a significant gap between the expected speed of AI transformation and society’s preparedness for its consequences. As Klein points out, there are no solid policy proposals on the shelf for addressing severe labor market disruptions, despite years of warning about the possibility[1]. This preparation gap extends to other domains, including the security of AI labs themselves, which may be vulnerable to hacking or industrial espionage[1].
Surveillance, Security, and Democratic Values
The potential for AI to enhance surveillance capabilities raises profound concerns for democratic societies. Buchanan acknowledges that while AI might enable more efficient analysis of already-collected data (like satellite imagery), there are legitimate concerns about surveillance states becoming more effective[1]. He notes that in autocracies like China, AI could make government control more pervasive, eliminating the traditional breathing room captured in the saying “Heaven is high, and the emperor is far away”[1].
For democratic societies, AI in law enforcement presents both opportunities and risks. Buchanan mentions the Department of Justice developing principles for AI use in criminal justice, acknowledging potential benefits of consistency but also “tremendous risk of bias and discrimination” and “a risk of a fundamental encroachment on rights from the widespread unchecked use of AI in the law enforcement system”[1].
Security dimensions extend to potential vulnerabilities in AI systems themselves. Buchanan discusses how more powerful AI could benefit both offensive and defensive cybersecurity operations[1]. While offensive actors might find vulnerabilities more easily, defensive systems could write more secure code and better detect intrusions. During a transitional period, however, legacy systems without AI-enhanced protections could be particularly vulnerable[1].
Critical Decisions Ahead for AGI Governance
As AGI approaches, several critical policy decisions loom. Buchanan identifies the open-source/open-weights question as particularly significant. This concerns whether the “weights” of AI models—roughly analogous to the strength of connections between neurons in the brain—should be openly published[1]. Open-weight systems facilitate innovation but also make it easier to remove safety protocols like refusals to assist with weapons development[1].
Another key decision involves the relationship between the public and private sectors. As AI becomes more capable, governments will need to determine whether voluntary safety arrangements should become mandatory[1]. This raises questions about the appropriate level of government oversight and the balance between innovation and regulation.
The use of AI in national defense presents another crucial decision point. Buchanan notes that the Biden administration established guidelines for military AI applications consistent with American values, but the Trump administration will need to decide whether to maintain these safeguards or prioritize speed of development[1].
Conclusion: Preparing for an Uncertain Future
The conversation between Klein and Buchanan reveals both the extraordinary potential and the profound uncertainty surrounding AGI development. As Buchanan puts it, “I think there should be an intellectual humility here. Before you take a policy action, you have to have some understanding of what it is you’re doing and why”[1]. This humility extends to recognizing the limits of current understanding while still preparing for transformative change.
Klein expresses frustration with this tension, noting the disconnect between the massive implications described and the measured policy responses proposed[1]. This disconnect underscores a broader societal challenge: how to prepare for a technology that could fundamentally transform human civilization when we cannot fully anticipate what that transformation will look like.
What emerges clearly from the conversation is that AGI is not merely a technological issue but one with profound implications for national security, economic structures, democratic values, and human flourishing. As Harry Glorikian comments on the conversation, “Think institutions, labor markets, regulatory systems, and even cultural values will be tested like never before if this becomes a reality”[2]. Whether society can rise to this unprecedented challenge remains an open question, but the urgency of beginning that preparation is undeniable.

References:
[1]: https://www.nytimes.com/2025/03/04/opinion/ezra-klein-podcast-ben-buchanan.html
[2]: https://www.linkedin.com/posts/harryglorikian_opinion-the-government-knows-agi-is-activity-7302732011666886656-VpoT
[5]: https://www.whitehouse.gov/fact-sheets/2025/01/fact-sheet-president-donald-j-trump-takes-action-to-enhance-americas-ai-leadership/
[8]: https://substack.com/home/post/p-158430942
[10]: https://forum.effectivealtruism.org/posts/Pt7MxstXxXHak4wkt/agi-timelines-in-governance-different-strategies-for
[11]: https://apnews.com/article/ai-safety-summit-san-francisco-trump-biden-executive-order-0e7475371877c7fefbbf178759fe7ab7