Four of the largest technology companies in the United States – Alphabet, Amazon, Meta Platforms, and Microsoft – have planned investments of up to $650 billion, according to recent reports. These investments are heavily focused on data centers and related equipment, as the companies seek to dominate the artificial intelligence (AI) sector.
Amidst this surge in capital deployment, concerns are rising that the AI boom represents a significant misallocation of capital. George Noble, a former fund manager at Fidelity, argues that the current investment frenzy isn’t simply a speculative bubble, but rather “the biggest misallocation of capital in history.”
By this assessment, the current frenzy surpasses the excesses of the dot-com era and poses a risk to global macroeconomic stability, according to analysis conducted with Julien Garran, a partner at MacroStrategy Partnership. The scale of the current investment is approximately 17 times larger than the dot-com bubble and four times larger than the real estate bubble, according to Garran’s data.
The shift in perception surrounding AI is becoming increasingly pronounced. Stephan Kemper of BNP Paribas Wealth Management noted, “The perception of AI seems to have completely changed, from benevolent angel to kiss of death.”
Noble highlights a critical issue: diminishing returns. Each incremental improvement in AI capabilities now requires an exponential increase in computing power, data centers, and energy consumption. “It will cost five times more energy and money to make models twice as good,” he stated.
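Noble’s claim implies a steeply sub-linear scaling law. As an illustrative sketch (my construction, not Noble’s own model): if capability grows as cost raised to some exponent α, then “five times the cost for twice the quality” pins α down, and every further doubling of quality multiplies the bill by another factor of five:

```python
import math

# Illustrative assumption only: capability C scales with cost X as C ∝ X**alpha.
# Noble's "5x cost for 2x quality" implies 2 = 5**alpha, so:
alpha = math.log(2) / math.log(5)   # ≈ 0.431

def cost_multiplier(quality_gain: float, alpha: float = alpha) -> float:
    """Cost factor needed to multiply capability by `quality_gain`."""
    return quality_gain ** (1 / alpha)

# Doubling quality costs 5x; doubling it twice (4x quality) costs 25x.
print(round(cost_multiplier(2), 2))   # 5.0
print(round(cost_multiplier(4), 2))   # 25.0
```

Under this toy law, linear gains in model quality demand geometric growth in spending, which is the diminishing-returns dynamic Noble describes.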
Beyond the economic considerations, fundamental mathematical limitations constrain AI systems. Judea Pearl, a pioneer in causal reasoning in AI, recently asserted that “scaling won’t save us,” because “mathematical limitations cannot be overcome by scaling.” This underscores the point that Large Language Models (LLMs) don’t learn how the world works, but rather how we describe it.
Garran echoes this sentiment, emphasizing that AI systems rely on probabilistic correlation, not causal understanding. This has significant commercial implications, with failure rates for real-world applications ranging from 65% to 99.7%, according to studies cited by Garran.
Operational limitations are also apparent. While AI can assist with tasks like drafting or summarizing, building complex operational workflows around the technology at this stage is risky. A recent post on Reddit illustrates this point, detailing how an AI system “invented” analytical data for three months, leading to flawed business decisions based on fabricated information.
“The numbers were sometimes from the wrong periods, other times the products were mixed up, and other times simply completely invented. But the AI system explained everything so confidently that no one ever questioned it,” the Reddit user wrote.
This highlights a critical message for companies: building the future of a business on a technology that “guesses” answers rather than “understanding” them is a precarious strategy.
A recent study, “Remote Labor Index: Measuring AI Automation of Remote Work,” further supports these concerns. The study tested six large language models on real-world freelancing tasks – the type of work people are paid to perform on platforms like Upwork. The most successful model completed tasks well enough to be paid in only 2.5% of cases, while the least successful managed only 0.3%.
The study deliberately excluded tasks requiring physical labor or complex human interaction, focusing solely on digital tasks where AI should theoretically excel. Despite this, the failure rate was 97.5%.
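The headline figures are easy to check against each other. Taking the quoted success rates at face value:

```python
# Figures as quoted from the Remote Labor Index study above.
best_success = 0.025    # best model: paid-quality work on 2.5% of tasks
worst_success = 0.003   # worst model: 0.3%

failure_rate = 1 - best_success
print(f"failure rate of the best model: {failure_rate:.1%}")  # 97.5%

# Odds against the best model delivering payable work: 39 to 1.
print(round(failure_rate / best_success))  # 39
```

In other words, even the strongest model fails roughly 39 of every 40 freelance tasks it is given.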
As Noble points out, “Artificial intelligence is excellent at correlations, but correlation is not how the real world works.” AI can regurgitate answers to questions it has been trained on, but it cannot build something new, execute a complex task, or operate in the real world where correlations break down.
Financial concerns extend to the phenomenon of “circular financing of suppliers.” The case of NVIDIA, where receivables have increased by 770%, is illustrative. This suggests that customers are purchasing hardware at high prices not necessarily from generated profits, but through supplier financing. If demand for computing power doesn’t translate into actual revenue, this chain will break.
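Why rapid receivables growth is a warning sign can be sketched with hypothetical numbers (only the 770% receivables figure comes from the article; the revenue growth rate below is invented for illustration):

```python
# Hypothetical illustration: only the +770% receivables growth is from
# the article; the revenue growth figure is an assumed placeholder.
receivables_growth = 7.70   # receivables up 770%
revenue_growth = 0.60       # suppose revenue grew "only" 60%

# If receivables grow far faster than revenue, a rising share of booked
# "sales" is credit extended to customers rather than cash collected.
ratio = (1 + receivables_growth) / (1 + revenue_growth)
print(f"receivables now cover {ratio:.1f}x as much revenue as before")
```

Under these assumed numbers, each dollar of revenue is backed by several times more uncollected IOUs than before – the signature of the supplier-financed demand the article describes.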
Even success, it appears, can be a form of failure. Bloomberg recently reported that Anthropic’s Claude AI system could distort market dynamics by encouraging “herd thinking” among analysts and investors. If many market participants lean on the same models, such as Claude Opus 4.6, a market “monoculture” could emerge, with everyone following the crowd and risk becoming concentrated. A healthy financial market requires diverse opinions.
IBM recently reached a similar conclusion, tripling the number of entry-level positions – precisely the roles considered most vulnerable to widespread AI adoption. “Companies that will be most successful in three to five years are the ones that have doubled their entry-level hiring in this environment,” IBM’s Chief Human Resources Officer told Fortune magazine, citing the development of more durable skills for workers and the creation of long-term value for the company.
The analyses of Noble and Garran paint a bleak picture of what they term “the biggest gamble in history.” In their view, we are at an inflection point: the narrative promises an unprecedented technological revolution, while the reality is one of diminishing returns, circular financial flows, and a lack of profitability. Because the authorities involved are held responsible for none of it – not even for decisions that could erase nations from history – this misallocation of capital will likely continue.
