
The $90 Trillion Question Nobody’s Asking Correctly
The artificial intelligence revolution isn’t coming—it’s already rewriting the rules of economic survival. While everyone debates whether AI will “take our jobs,” they’re missing the real story: we’re potentially standing at the threshold of the most profound economic transformation in human history, one that makes the Industrial Revolution look like a minor software update.
Stanford economist Charles I. Jones just dropped a research bomb that should fundamentally change how you think about your career, your investments, and your future. His analysis reveals something counterintuitive: even if AI becomes infinitely good at tasks that currently account for 33% of GDP, the economy might only grow by 50%. Read that again. Infinite AI productivity in a third of the economy yields just 50% more output.
This isn’t a limitation—it’s a revelation about how economies actually work, and it completely changes the AI conversation.
The Two Futures We’re Choosing Between Right Now
Think about your smartphone. In 2007, the iPhone seemed like science fiction. Today, you probably used one to access this article. That transition happened in less than two decades. Now imagine that pace of change, but instead of better phones, we’re talking about machines that can think, create, and innovate at superhuman levels.
We’re facing two radically different possible futures, and understanding them isn’t academic—it directly impacts every financial and career decision you’ll make over the next decade.
Future One: The Growth Explosion
Picture a data center somewhere in California or Washington that houses what Anthropic CEO Dario Amodei calls “a country of geniuses.” Not metaphorical geniuses—actual AI systems running billions of instances simultaneously, each one capable of performing cognitive tasks at or beyond human expert levels.
These AI systems already exist in primitive form. When Anthropic released Claude Opus 4.5, it scored higher on the company’s internal software engineering assessment than any human candidate in its history. More remarkably, AI models now complete software engineering tasks that would take skilled humans nearly 5 hours, and the length of task they can handle keeps doubling every 5 to 7 months. Work that took a frontier model 5 hours to grind through 18 months ago now takes it just 19 minutes.
The mathematics of AI improvement reads like science fiction, but it’s hard data: the “effective compute” used to train AI models is increasing by a factor of ten annually. That’s four times more computing power combined with 2.5 times better algorithms. Every. Single. Year.
Here’s where it gets wild: these AI geniuses would eventually design even better AI systems, creating a recursive loop of self-improvement. They’d optimize drug discovery, predict clinical trial outcomes, design new materials, solve fusion energy challenges, and fundamentally accelerate R&D across every scientific domain.
We’ve already seen proof of concept. AlphaFold—an AI system that solved the protein-folding problem—predicted the three-dimensional structures of over 200 million proteins, a feat that would have taken human researchers centuries with experimental methods. The breakthrough earned its creators the Nobel Prize in Chemistry.
If this trajectory continues, economic growth could accelerate from our historical 2% per year to 10% or higher within decades. Your children could live in a world of abundance that makes our current prosperity look quaint.
Future Two: Business as Usual (Which Is Still Pretty Remarkable)
But there’s another possibility grounded in 150 years of economic history, and it’s equally fascinating.
Look at U.S. GDP per capita since 1870. Plot it on a graph with a logarithmic scale, and you see something remarkable: a nearly perfect straight line climbing upward at roughly 2% annually. That line survived electrification, the internal combustion engine, antibiotics, transistors, semiconductors, personal computers, and the internet—technologies that fundamentally transformed civilization.
Each of these was a “general purpose technology” (GPT) that touched nearly every aspect of economic life. Yet none of them changed the long-term growth rate. Output per person consistently doubled roughly every 35 years regardless of whether we were inventing electric motors or launching the internet.
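Where does that 35-year figure come from? It’s just the arithmetic of compound growth at 2% per year:

```latex
t_{\text{double}} = \frac{\ln 2}{\ln(1.02)} \approx \frac{0.693}{0.0198} \approx 35 \text{ years}
```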
The explanation? Within any technology domain, ideas get harder to find over time. The steam engine eventually runs out of optimization headroom. Without new general purpose technologies emerging, growth would naturally slow. Each successive GPT essentially maintains the 2% growth rate rather than accelerating it.
From this perspective, AI is simply the latest GPT keeping us on the historical trendline. Important? Absolutely. Transformative? Definitely. But ultimately just another chapter in the ongoing story of technological progress, not a fundamental break from economic history.
There’s also the diffusion lag. Economic historian Paul David famously showed that it took decades for the electric motor to show up in productivity statistics, despite being obviously transformative. Factories needed redesigning. Business processes required rethinking. Complementary innovations had to be developed.
Bob Solow quipped in 1987 that “you can see the computer age everywhere but in the productivity statistics.” He was right—but eventually those productivity gains materialized. AI might follow the same pattern: slower to impact GDP than enthusiasts predict, but ultimately profound in its effects.
The Weak Links That Control Your Economic Future
Here’s where Jones’s analysis gets mathematically beautiful and deeply practical. Understanding this concept—called “weak links” in economic theory—will help you make better career and investment decisions than 99% of people freaking out about AI.
Most people assume that if AI automates a task, it simply replaces human labor in that task, and we move on. But economies don’t work like independent Lego blocks you can swap in and out. They work like chains—and chains break at their weakest link.
Imagine production requires completing two tasks: an easy task and a hard task. Now imagine AI becomes infinitely good at the easy task. How much does total output increase? If these tasks are complements (both required for production), the answer is surprising: output is limited by whatever you can produce in the hard task. Making the easy task infinitely easy doesn’t create infinite output—it just shifts the bottleneck.
Jones provides a striking calculation. Software spending represents about 2% of GDP. If AI made software production infinitely productive—literally free and instantaneous—GDP would only increase by approximately 2%. Even if you fully automate all cognitive labor (roughly 33% of GDP), and somehow achieve infinite productivity in those tasks, GDP increases by only 50%.
This seems wrong at first glance. Infinite productivity should mean infinite wealth, right? But that’s not how complementary production works. You need multiple tasks completed successfully to create value, and you’re always constrained by whichever task is currently the bottleneck.
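Here’s a minimal sketch of that arithmetic in Python. The functional form (a Leontief “min” over two task bundles, with the automated bundle starting at a spending share s) is a deliberate simplification of mine rather than the full model in Jones’s paper, but it reproduces the headline numbers: make the automated slice infinitely productive and GDP rises by roughly s/(1−s).

```python
def gdp_gain_from_automation(s, ai_supply=float("inf")):
    """
    Toy 'weak links' economy with two task bundles that must be combined in
    fixed proportions (perfect complements):

        Y = min(x_auto / s, x_rest / (1 - s))

    With one unit of labor and unit productivity, the pre-AI allocation puts a
    share s of labor (and of spending) on the soon-to-be-automated tasks, so
    Y_before = 1. After automation, AI supplies the automated bundle (here in
    unlimited quantity, for free), all labor shifts to the remaining tasks,
    and output is capped by that bottleneck. Returns the proportional GDP gain.
    """
    x_rest_after = 1.0                                  # all labor now works the non-automated tasks
    y_after = min(ai_supply / s, x_rest_after / (1 - s))
    return y_after - 1.0                                # Y_before = 1 by construction

# Jones's two headline cases:
print(f"Automate software (~2% of GDP):         GDP +{gdp_gain_from_automation(0.02):.1%}")
print(f"Automate cognitive labor (~33% of GDP): GDP +{gdp_gain_from_automation(1/3):.1%}")
# -> roughly +2% and +50%: infinite productivity in the automated slice,
#    but output is still pinned down by the tasks that remain human.
```

Crank the automated bundle’s productivity as high as you like; the answer barely moves, because the non-automated tasks set the ceiling.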
Think about radiology. In 2016, AI pioneer Geoffrey Hinton famously declared we should stop training radiologists because AI would soon replace them. Nearly a decade later, we have more radiologists earning higher salaries, not fewer earning less. Why? Because radiologists do more than read scans. They consult with patients, collaborate with other physicians, make judgment calls about treatment approaches, and handle edge cases. AI automated part of their task bundle, making radiologists more productive at the remaining tasks—which increased demand for their expertise.
This is the weak links framework in action. Automation doesn’t simply replace jobs; it shifts which tasks become the valuable bottlenecks.
What This Means for Your Career
The radiologist example isn’t an isolated case—it’s a template for understanding labor markets in the AI era. Jobs are bundles of tasks, not single uniform activities. AI will automate some tasks within your job bundle while making you more valuable at others.
The critical question isn’t “Can AI do my job?” but rather “Which tasks in my job bundle are the current weak links, and how can I become exceptionally valuable at those bottleneck tasks?”
If you’re a financial analyst, AI might automate data gathering and preliminary analysis, but strategic thinking about market dynamics and client relationship management become the new bottlenecks—and potentially more valuable. If you’re a teacher, AI might automate content delivery and basic assessment, but motivating students, addressing individual learning challenges, and creating classroom culture become the premium skills.
The professionals who thrive won’t be those whose jobs are fully automated or fully protected—they’ll be those who identify which tasks are becoming valuable bottlenecks and position themselves as experts in those areas.
Looking for roles that position you at valuable bottlenecks rather than automatable tasks? HireSleek.com features AI-era career opportunities where human expertise creates irreplaceable value. Whether you’re a professional seeking positions that complement AI capabilities or an employer looking for talent that amplifies your AI investments, HireSleek connects you with opportunities designed for the economy we’re actually entering.
The Timeline Nobody’s Talking About: Why 2045 Matters More Than 2025
One of Jones’s most important insights is about timing, and it’s both reassuring and unsettling.
Even in models where AI eventually leads to explosive economic growth exceeding 5% annually, the transition is surprisingly gradual. After 20 years, output might only be 5% higher than the baseline. After 40 years, perhaps 20% higher. The explosion happens, but it unfolds over decades rather than years.
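One way to see how an eventually explosive effect can look unimpressive for decades is to reuse the bottleneck arithmetic from earlier and let the automated share of the economy ramp up gradually. The quadratic ramp below (automation share approaching 100% after roughly 90 years) is purely an illustrative assumption of mine, tuned to show the shape Jones describes rather than taken from his calibration:

```python
def output_vs_baseline(year, ramp_years=92):
    """
    Toy transition path: the share s(t) of tasks that are automated (and free)
    rises quadratically toward 1, and output relative to the no-AI baseline is
    1 / (1 - s(t)), the same weak-links formula as before. The ramp speed is an
    assumed parameter, not an estimate.
    """
    s = min((year / ramp_years) ** 2, 0.99)   # cap to avoid division by zero
    return 1.0 / (1.0 - s)

for t in (20, 40, 60, 80):
    print(f"Year {t}: output {output_vs_baseline(t) - 1:+.0%} vs. the no-AI baseline")
# Year 20:  +5%   (barely visible in the data)
# Year 40: +23%   (noticeable, but no explosion yet)
# Year 60: +74%   (the curve is clearly bending)
# Year 80: +310%  (the explosive phase arrives, decades in)
```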
This has profound implications. The disappointment many feel about AI’s current economic impact—”where’s the productivity boom we were promised?”—might be misguided. The impact is coming, but economic transformations operate on generational timescales, not quarterly earnings cycles.
Consider what happened between 1990 and 2020 with the internet. Mosaic, the first mainstream web browser, launched in 1993. By 2000, we had the dot-com bubble and bust. Many concluded the internet was overhyped. But by 2020, the internet had fundamentally restructured retail, communication, entertainment, finance, education, and countless other sectors. The full impact took 30 years to materialize.
If AI follows a similar pattern—and Jones’s analysis suggests it will—then judging AI’s economic impact in 2025 based on 10 years of development misses the point entirely. Ask instead: what will the world look like in 2045 after 30 years of AI development?
The professionals and investors who internalize this timeline will make dramatically different decisions than those expecting instant transformation or dismissing AI as hype.
The Redistribution Challenge: When Labor Stops Being Your Main Asset
Here’s an uncomfortable truth: throughout modern economic history, most people’s primary asset has been their labor. You rent your time and skills to employers, and that’s how you capture economic value.
AI fundamentally challenges this arrangement. In a world of substantial automation, the “size of the pie” grows dramatically—but your slice might shrink if your only asset is labor that machines can now provide more cheaply.
This isn’t dystopian speculation; it’s economics. When machines can supply cognitive labor at near-zero marginal cost, wages for purely cognitive tasks must eventually fall toward that marginal cost.
But here’s the crucial point Jones emphasizes: this creates a distribution challenge, not a scarcity problem. The economy produces far more stuff, but how do people access it?
Advanced economies already engage in substantial redistribution through taxation and social programs. In an AI-abundant future, such redistribution becomes both more important and more affordable. When GDP is dramatically higher, redistributing enough to provide everyone with excellent living standards becomes economically feasible in ways it isn’t today.
Jones suggests one intriguing possibility: endowing every child with a share of the S&P 500 stock market index. If AI dramatically increases corporate productivity and profits, giving citizens ownership stakes lets them participate in that wealth creation even if their labor becomes less economically valuable.
This isn’t radical leftism—it’s pragmatic economic thinking about how to maintain social stability and human flourishing when the fundamental nature of value creation changes.
The alternative—letting market forces alone determine distribution in an economy where labor’s value has plummeted—creates political instability that threatens the productivity gains AI enables. No society will tolerate extreme poverty amidst radical abundance, regardless of what pure economic theory suggests.
The Meaning Crisis: What Happens When You’re Not Needed
Economics typically treats work as a “bad”—something people must be compensated to endure. That’s why we pay wages. But this misses something fundamental about human psychology: many people derive meaning from work.
Academics understand this viscerally. Research isn’t just how we earn income—it’s how we find purpose, challenge ourselves, contribute to human knowledge, and build identity. Jones himself wrestles with this: what happens when ChatGPT 6.0 is better than he is at developing growth models? Where does a growth economist find meaning when AI does growth economics better than humans can?
This isn’t a problem to solve with redistribution policies. You can give someone a comfortable income, but that doesn’t automatically provide meaning, purpose, or fulfillment.
Jones offers two imperfect metaphors: retirement and summer camp. Retirees with adequate income find meaning through relationships, experiences, learning, and exploration. Maybe an AI-abundant future looks like that—but for everyone, at every age, with far greater resources to pursue whatever brings fulfillment.
Summer camp offers another angle: structured experiences designed to be enriching and enjoyable rather than economically productive. Perhaps future humans treat learning growth economics from AI the way we currently treat learning pottery or painting—intrinsically valuable regardless of economic output.
The optimistic scenario is that advanced AI understands human psychology better than we understand ourselves, and we can literally ask it for personalized advice on living meaningful lives in a post-scarcity world.
The pessimistic scenario is that meaning derived from productive contribution is so fundamental to human nature that we struggle psychologically in a world where our contributions aren’t economically necessary.
Which future we get depends partly on how we structure society during the transition—which is happening now.
The Existential Risk Nobody Wants to Discuss (But We Must)
The people leading AI’s development—company founders like Sam Altman, Dario Amodei, and Demis Hassabis, and pioneers like Geoffrey Hinton—share something beyond their optimism about AI’s potential. They’ve all warned, repeatedly and publicly, about catastrophic risks.
OpenAI was literally founded as a nonprofit specifically to develop artificial general intelligence safely, avoiding competitive market pressures to deploy powerful AI before safety measures were adequate. That should tell you something.
The risks fall into two categories, both worth taking seriously.
Bad Actor Risk
Imagine AI systems in 5-10 years that master biochemistry and virology at superhuman levels. A bad actor could potentially ask such a system to design a bioweapon—a novel virus more deadly and contagious than Ebola, but with a four-week incubation period before symptoms emerge.
Such a pathogen could spread globally before anyone realized an outbreak was occurring. The damage would be catastrophic.
Nuclear weapons have avoided worst-case scenarios partly because very few actors controlled the “red button.” Advanced AI could mean billions of people potentially have access to civilization-ending technologies. The attack surface for catastrophic misuse becomes enormous.
Alien Intelligence Risk
This one sounds like science fiction, but it’s the concern that keeps serious AI researchers awake at night.
We’re creating forms of intelligence we fundamentally don’t understand. These intelligences are optimizing for objectives we specify, but we’re not confident we can specify the right objectives—or that AI systems will interpret our specifications the way we intend.
Stuart Russell, the Berkeley computer science professor who coauthored the field’s standard textbook, frames it provocatively: “How do we retain power over entities more powerful than us, forever?”
Consider an analogy: imagine tomorrow we discovered an alien spacecraft approaching Earth. Your initial reaction might be excitement—how amazing! But reflection might bring trepidation. Historically, when technologically advanced civilizations encounter less advanced ones, it doesn’t end well for the less advanced party.
We’re building the alien intelligence ourselves. The fact that it comes from human-created data centers rather than distant galaxies doesn’t change the fundamental challenge: how do we ensure superintelligent AI systems remain aligned with human values and under human control?
The Oppenheimer Question: How Much Risk Is Acceptable?
Before the Trinity test of the first atomic bomb, Manhattan Project scientists grappled with a terrifying question: what if the nuclear chain reaction doesn’t stop? What if it ignites the atmosphere and kills everyone on Earth?
Physicist Hans Bethe calculated the probability as very low, and they proceeded. But this raises a profound question: how low does the risk need to be before proceeding is irresponsible?
Jones analyzes an analogous question for AI: suppose AI provides enormous benefits (say, 10% annual economic growth), but comes with a one-time risk of human extinction. What level of existential risk would rational economic agents accept?
The answers are surprising and depend critically on risk aversion.
If people have logarithmic utility (risk aversion equal to 1), they’d accept up to a 33% chance of extinction for 10% growth. That sounds insane, but it reflects how valuable economic growth is when marginal utility decreases slowly.
If risk aversion is 2 instead of 1, the acceptable risk plummets to just 2.5%. With higher risk aversion, utility is bounded above—marginal utility falls so rapidly that even enormous consumption gains add little. Life at current living standards is already quite good, so gambling it for extra consumption makes less sense.
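A stylized version of that trade-off fits in a few lines of Python. It compares discounted lifetime utility on a safe 2%-growth path against a 10%-growth path carrying a one-time extinction risk, using CRRA utility plus a constant “value of being alive” term so that extinction (utility zero) is meaningfully bad. The discount rate, the value-of-life constant, and the infinite-horizon framing are illustrative assumptions of mine, not Jones’s calibration, so the cutoffs land in the same ballpark as his 33% and 2.5% without matching them exactly:

```python
import math

def lifetime_utility(growth, gamma, rho=0.02, life_value=5.0, horizon=600.0, dt=0.05):
    """
    Discounted lifetime utility of a consumption path c(t) = exp(growth * t),
    using CRRA flow utility plus a constant flow value of being alive
    (life_value is an assumed calibration). Extinction is normalized to zero
    utility from then on. Integrated numerically; the horizon is long enough
    to approximate an infinite one given the discount rate rho.
    """
    def u(c):
        if gamma == 1:
            return life_value + math.log(c)
        return life_value + (c ** (1 - gamma) - 1) / (1 - gamma)

    total, t = 0.0, 0.0
    while t < horizon:
        total += math.exp(-rho * t) * u(math.exp(growth * t)) * dt
        t += dt
    return total

def acceptable_extinction_risk(gamma):
    """Largest one-time extinction probability d with (1 - d) * V(10% growth) >= V(2% growth)."""
    return 1.0 - lifetime_utility(0.02, gamma) / lifetime_utility(0.10, gamma)

print(f"log utility (gamma = 1): accept up to {acceptable_extinction_risk(1):.0%} extinction risk")
print(f"gamma = 2:               accept up to {acceptable_extinction_risk(2):.0%} extinction risk")
# With these illustrative parameters: roughly 40% vs. 6%, the same pattern Jones
# reports (large acceptable risk under log utility, far smaller once utility is bounded above).
```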
The most surprising finding: if AI delivers major health improvements (say, cutting mortality rates in half), even highly risk-averse people become willing to accept large existential risks. The logic is simple: what we care about is not dying. We don’t particularly care what kills us. If AI might cure cancer, heart disease, and aging, that dramatically reduces our mortality risk—potentially more than the existential risk from AI itself increases it.
One key lesson: the health and medical benefits of AI may be particularly valuable compared to pure consumption increases. This should influence both policy and investment priorities.
How Much Should We Spend on AI Safety?
In 2020, we each faced roughly a 0.3% mortality risk from COVID-19. Society responded by essentially shutting down, “spending” approximately 4% of GDP to mitigate that risk.
By revealed preference—what we actually chose to do—we demonstrated willingness to spend massive amounts to avoid relatively small mortality risks. This suggests we should be investing heavily in AI safety if catastrophic risks are even remotely comparable.
The math supports this. U.S. government agencies routinely value a statistical life at $10 million or more for average Americans. To avoid a 1% mortality risk, this implies willingness to pay $100,000—more than 100% of per capita GDP.
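Spelled out, that’s just the value of a statistical life (VSL) times the size of the risk reduction, stacked against U.S. GDP per capita (roughly $82,000 in recent years; the exact figure depends on which year you pick):

```latex
\text{WTP} = \text{VSL} \times \Delta\text{risk} = \$10{,}000{,}000 \times 0.01 = \$100{,}000 \;\approx\; 1.2 \times \text{GDP per capita}
```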
If AI poses existential risks that might materialize over the next 10-20 years, an annual investment of 5-10% of income to completely eliminate those risks could be appropriate, even from a purely selfish standpoint (placing zero weight on future generations).
This needs to be multiplied by the effectiveness of mitigation spending—something we have less certainty about. But some forms of mitigation are clearly effective:
- Focus on narrow AI models (like AlphaFold) that accelerate scientific research while posing fewer risks
- Slow development to give safety research more time to mature
- Invest heavily in AI alignment research—essentially a global public good with spillovers across countries and time
- Implement safety standards and testing requirements before deploying powerful models
Jones’s robustness checks suggest we’re likely underinvesting in AI safety by a factor of 30 or more, even under conservative assumptions about risk levels and mitigation effectiveness.
The key insight: at current consumption levels, life is incredibly valuable and the marginal utility of additional consumption is correspondingly low. It’s worth spending surprisingly large amounts to mitigate risks to life based on valuations the U.S. government uses for everyday policy decisions.
Companies building AI safety capabilities and professionals with alignment expertise represent critical bottlenecks in the AI economy. Find AI safety roles and related positions that put you at the forefront of managing transformative technology at HireSleek.com—where forward-thinking employers seek talent for the challenges that will define the next decade.
The Race Nobody Can Win But Everyone’s Running
There’s a prisoner’s dilemma dynamic in current AI development that should concern everyone.
Leading AI labs’ executives have warned about risks. They understand the dangers. Many have personally advocated for slowing down and prioritizing safety.
Yet these same executives are racing to build larger data centers, train more powerful models, and deploy AI capabilities before safety challenges are fully solved.
Why? Because everyone else is racing. Individual lab leaders can reasonably think: “If I slow down unilaterally, that doesn’t meaningfully reduce global existential risk—someone else will build the dangerous system. But if I race and win, maybe I’ll be safer than competitors. And if catastrophic risks don’t materialize, enormous economic gains await the winner.”
The Nash equilibrium is that everyone races even though everyone might be better off if all slowed down simultaneously.
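The structure is the textbook two-player game, and a few lines of Python make the equilibrium explicit. The payoff numbers are purely illustrative assumptions (nothing here comes from Jones or from the labs); all that matters is the ordering: winning the race alone beats mutual restraint, which beats mutual racing, which beats showing restraint while your rival races.

```python
from itertools import product

# Illustrative payoffs (higher is better) for two labs each choosing to Race or Slow.
# Assumed ordering: win the race alone > both slow down > both race > slow while the rival races.
PAYOFFS = {
    ("race", "race"): (1, 1),
    ("race", "slow"): (4, 0),
    ("slow", "race"): (0, 4),
    ("slow", "slow"): (3, 3),
}
ACTIONS = ("race", "slow")

def best_response(their_choice, player):
    """Return the action with the highest payoff for `player`, given the rival's choice."""
    def payoff(my_choice):
        profile = (my_choice, their_choice) if player == 0 else (their_choice, my_choice)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

# A profile is a Nash equilibrium if each lab is already playing its best response.
for a, b in product(ACTIONS, repeat=2):
    if best_response(b, 0) == a and best_response(a, 1) == b:
        print(f"Nash equilibrium: {a}/{b} with payoffs {PAYOFFS[(a, b)]}")
# -> Nash equilibrium: race/race with payoffs (1, 1)
#    ...even though slow/slow, with payoffs (3, 3), leaves both labs better off.
```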
This is the same dynamic that has driven arms races throughout history—and we developed policy solutions for those. International cooperation on nuclear weapons, despite Cold War tensions, prevented the worst outcomes.
One policy worth considering: a substantial tax on GPUs and TPUs (the specialized chips used for AI training). This would slow the race while generating revenue for safety research. Applying the tax at first sale means it affects users regardless of country.
The objection is obvious: China is also racing, and we don’t want China to win. But to the extent China uses Nvidia chips, a chip tax slows them too. And this creates opportunity for international cooperation—China and Europe also understand race dynamics are suboptimal.
Cooperation mediated and verified by third parties worked for nuclear weapons during far more hostile geopolitical conditions. It’s at least worth attempting for AI before we’re past the point where cooperation remains possible.
What This Means For You: Practical Implications
After digesting 19 pages of economic theory about AI, let’s get concrete about decisions you can make now:
For Your Career:
- Identify the weak links in your job bundle—tasks that are bottlenecks and hard to automate
- Become exceptional at those bottleneck tasks
- Use AI to automate the routine components of your work, freeing time for high-value tasks
- Develop skills that complement AI rather than compete with it (judgment, relationship-building, strategic thinking, handling ambiguity)
- Stay flexible—bottlenecks will shift as AI capabilities evolve
For Your Investments:
- Time horizon matters enormously—AI’s impact unfolds over decades, not quarters
- Companies that effectively integrate AI into existing operations may outperform pure AI plays
- Healthcare and medical AI represent particularly high-value opportunities
- AI safety and alignment capabilities are undervalued relative to importance
- The distribution question—how will societies ensure broad access to AI-created abundance—will determine which companies thrive long-term
For Your Life Planning:
- Prepare for a world where meaning comes less from economic productivity and more from relationships, learning, and experiences
- Develop sources of fulfillment beyond your career
- Build financial cushions—transitions will be lumpy and unpredictable
- Stay informed about AI developments but maintain healthy skepticism about both hype and doom
- Engage politically—the policy decisions made in the next 10 years will shape the next 50
The 30-Year View: What 2055 Might Look Like
Jones expects AI’s impact to be “much larger than the internet, perhaps by more than 10x the internet, albeit over a half century or more.”
Let that sink in. The internet transformed commerce, communication, entertainment, education, dating, politics, and countless other domains. 10x that impact over 50 years represents a civilizational shift comparable to the Industrial Revolution.
But unlike the Industrial Revolution, which unfolded over 150+ years, this transformation is compressed into your lifetime—or your children’s childhood.
The world of 2055 might feature:
- Energy abundance through AI-designed fusion or other clean technologies
- Disease largely conquered through AI-driven pharmaceutical development
- Radical life extension making today’s lifespans seem quaint
- Economic output per capita 5-10x higher than today
- Work being optional for most people in developed economies
- Meaning and purpose derived from learning, creating, exploring, and relating rather than economic production
- Either catastrophic risks successfully managed through international cooperation, or… not
That last possibility is why treating the next few decades as “business as usual” while we figure things out is dangerous. The decisions we make now—about AI development pace, safety standards, international cooperation, and distribution mechanisms—will determine which version of that future we get.
The Choice We’re Making Whether We Realize It Or Not
You can’t opt out of the AI transformation. It’s happening regardless of your personal feelings about it. The only choice is whether you understand it well enough to navigate it successfully.
The weak links framework reveals why AI’s impact might be both more gradual and more profound than either enthusiasts or skeptics expect. Gradual because bottlenecks shift slowly and diffusion takes decades. Profound because eventually nearly all tasks become automatable, fundamentally changing humanity’s economic role.
The existential risk calculations show why spending heavily on AI safety is rational self-interest, not alarmism—and why we’re likely underinvesting by orders of magnitude.
The distribution challenge highlights why political engagement and policy advocacy matter for your personal future. The economic gains from AI are potentially enormous, but whether you personally benefit depends entirely on how societies choose to distribute AI-created abundance.
And the meaning question forces confrontation with deep truths about human nature: what are we when we’re not needed economically? What gives life purpose when survival is guaranteed and comfort is universal?
These aren’t abstract philosophical questions. They’re intensely practical concerns that will shape your career decisions, investment strategies, political positions, and personal development over the next 30 years.
The future isn’t arriving—it’s already here, unevenly distributed and accelerating. The question isn’t whether to engage with AI’s economic implications, but whether you’ll do so thoughtfully or get swept along by forces you don’t understand.
Understanding the weak links framework, the timeline of gradual transformation, and the policy choices that determine distribution puts you in the former category. Which is exactly where you want to be when the economy is being rebuilt from the ground up.