ARTICLE
24 December 2025

The U.S. Government: The Greatest AI Investor — And Its Slowest Adopter

AI moves in dog years while government modernization moves in geologic time. AI won't wait, so government can't afford to.
United States Technology
Edward Hanapole, Alvarez & Marsal


In 2019, GPT-2 had 1.5 billion parameters.1 By 2023, GPT-4 was rumored to have nearly a trillion — a 700x scale increase in four years.2 Yet, while AI capabilities double every few months, the government's technology refresh cycles unfold over years. Many federal systems — including secure, high-performing mainframes that still run critical workloads — were designed for reliability and control, not rapid evolution. The result is a pace mismatch: AI innovation is compounding exponentially, while government modernization moves linearly. The gap is widening — and only one side is accelerating.

Washington's AI Paper Trail

No one can accuse Washington of ignoring AI. In the last decade alone, federal agencies have published more than 150 AI-related directives, strategies, frameworks, and toolkits — executive orders (EOs), OMB memoranda, NIST guidelines, GAO reports. Among these, the White House has framed AI as having "...the potential to transform the global economy and alter the balance of power in the world,"3 rolling out orders to remove barriers to leadership (EO 14179), prepare Americans for the jobs of the future (EO 14278), and educate kids on AI (EO 14277).4 This cottage industry of initiatives, though, has created more confusion than empowerment as agencies layer AI bans across their workforces. That may be changing at the policy level, with a recent EO establishing a unified national AI policy and curbing regulatory fragmentation to reinforce U.S. competitiveness.5

The government historically created a patchwork of AI adoption. USAID briefly ran on GPT Enterprise, a secure model with firewalls, before the agency was dissolved.6 GSA awarded $1 AI contracts that immediately came under protest.7 Walk into a federal agency today and it's a coin flip — one office embraces AI, the next bans it outright.8 That's not governance — that's roulette.

But behind the blizzard of memos and speeches lies a unique story: the U.S. government was the original venture capitalist for AI, having invested over half a trillion dollars into it since the Eisenhower era. DARPA funded the AI godfathers McCarthy, Minsky, Simon, and Newell in the 1950s and '60s. In the 1980s, billions went into the Strategic Computing Initiative, laying the groundwork for self-driving cars and speech recognition. Fast forward and you see DARPA's $2 billion AI Next campaign, NSF's $200 million AI institutes, and the CHIPS & Science Act's billions in research funding.9 DARPA has spent more on AI than any single tech company in history. And yet — the government until recently has been one of the only institutions discouraging its employees from using the tools it funded.

The private sector has no such hang-ups. Over 85% of Fortune 500 firms are experimenting with generative AI.10 They're piloting copilots, automating workflows, cutting costs, and building new revenue streams. The real unlock? Companies have created sandboxes for employees to get their hands dirty and actively experiment with AI.

The Shadow AI Ecosystem

When you block technology, employees don't stop using it — they just stop telling you about it. MIT researchers describe a growing "shadow AI economy," where untrained staff adopt consumer AI tools outside official channels.11 And the risks of "shadow AI" aren't theoretical. Samsung employees pasted sensitive source code into ChatGPT, and within weeks the company banned the tool.12 Another case: DeepSeek, a Chinese AI model, has been documented quietly exfiltrating inputs from foreign users back to its home country.13 Innovation Resistance Theory explains this perfectly: the more leaders stall adoption, the more employees turn underground.14 The dynamic is as old as prohibition. Shut the bar, the speakeasy opens.

Move Fast (But Don't Break Things)

The popular advice from the market to government right now is to move slowly. Define use cases. Study the risks. Be patient. Wait for direction.

That's exactly wrong.

The fix: de-risk, then democratize. Stand up enterprise versions of AI tools — firewalled, no data leakage, intuitive interfaces. Segment secure environments for higher-risk business units, so high-security unit data isn't exposed to a lower-security neighbor. Launch controlled "sandboxes" where staff are encouraged to use models safely.

From there, build a readiness and governance assessment: pilot quickly, scale what works, kill what doesn't. Reduce the risk of shadow AI by giving employees a safe environment in which to use the tools, then work backward to your larger pilot developments. The Department of War's GenAI.mil rollout provides employees access to sanctioned, enterprise-grade AI while reducing reliance on shadow tools.15

Skeptics point to failure rates: MIT found 95% of corporate AI pilots flop.16 Even Sam Altman warns of an AI bubble.17 Fair. But failure isn't a bug — it's a feature of innovation. Universities aren't prophets, and Altman's bubble talk is as much about competitors as it is about markets. By that metric, the internet was a failure in the 1990s when dot-coms went bust. Spoiler: it wasn't.

The U.S. government may be the greatest AI investor in history. But after the U.S. bankrolled the future for the world — it has become the last to show up for it.

Footnotes

1. OpenAI, "Better Language Models" (2019). https://openai.com/index/better-language-models/

2. Wikipedia, "GPT-4." https://en.wikipedia.org/wiki/GPT-4

3. White House, "White House Unveils America's AI Action Plan" (2025). https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/

4. White House, "Presidential Actions." https://www.whitehouse.gov/presidential-actions/

5. White House, "Presidential Actions." https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/

6. FedScoop, "USAID Adopted GPT Enterprise." https://fedscoop.com/openai-chatgpt-enterprise-usaid/

7. Federal News Network, "GSA's $1 Awards Under Protest" (2025). https://federalnewsnetwork.com/contractsawards/2025/08/gsas-1-awards-for-ai-tools-come-under-protest/?readmore=1

8. FedScoop, "How Risky is ChatGPT? Depends Which Federal Agency You Ask." https://fedscoop.com/how-risky-is-chatgpt-depends-which-federal-agency-you-ask/

9. DARPA, NSF, CHIPS & Science Act announcements. https://www.nsf.gov/chips

10. Microsoft, "AI-Powered Success" (2025). https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/07/24/ai-powered-success-with-1000-stories-of-customer-transformation-and-innovation/

11. Fortune, "Shadow AI Economy (MIT Study)" (2025). https://fortune.com/2025/08/19/shadow-ai-economy-mit-study-genai-divide-llm-chatbots/

12. CNBC, "Samsung Bans AI Use After Leaks" (2023). https://www.cnbc.com/amp/2023/05/02/samsung-bans-use-of-ai-like-chatgpt-for-staff-after-misuse-of-chatbot.html

13. YouTube, "DeepSeek AI Risks." https://www.youtube.com/shorts/I_bGa-xIHkk

14. arXiv, "Innovation Resistance Theory and AI Adoption" (2024). https://arxiv.org/abs/2407.10883

15. War.gov, "The War Department Unleashes AI on New GenAI.mil Platform" (2025). https://www.war.gov/News/Releases/Release/Article/4354916/the-war-department-unleashes-ai-on-new-genaimil-platform/

16. Fortune, "MIT Report: 95% of AI Pilots Fail" (2025). https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

17. CNBC, "Sam Altman: AI Market is in a Bubble" (2025). https://www.cnbc.com/2025/08/18/openai-sam-altman-warns-ai-market-is-in-a-bubble.html

Originally published 18 December 2025

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

