There is a moment in every technology boom where what people say diverges completely from what they do. We are living in that moment right now with artificial intelligence.

I have been watching this unfold with genuine fascination, particularly since October 2025, when Bank of America’s Global Fund Manager Survey recorded something remarkable. Fifty-four percent of institutional investors stated explicitly that AI stocks are in a bubble. These aren’t retail traders chasing meme stocks on Reddit; these are pension fund managers, hedge fund strategists, and allocators responsible for trillions in assets. But here is what makes this interesting: those same investors, in that exact same survey, pushed their equity allocations to thirty-two percent overweight, the highest level in eight months.

Cash reserves dropped to levels that typically trigger contrarian sell signals. Even as “AI bubble risk” became the number one tail risk for the first time in survey history—overtaking inflation, geopolitics, and recession concerns—the buying continued. Institutional finance has arrived at a peculiar conclusion: this is probably a bubble, we should probably be concerned, but we are going to keep allocating capital to it anyway. The contradiction isn’t subtle; it is the defining characteristic of this moment.

The internal divisions within Wall Street’s most prestigious firms have become almost comical in their starkness. Goldman Sachs offers the clearest example. In June 2024, Jim Covello, its Head of Global Equity Research, published a warning arguing that AI is exceptionally expensive and that, to justify its cost, it must solve complex problems it isn’t actually designed to solve. He asked the critical question nobody wanted to answer: “What trillion-dollar problem will AI solve?” His own analysis showed that while AI could update historical data in company models faster, it did so at six times the cost of doing the work manually. He warned that overbuilding things the world doesn’t have use for typically ends badly.

Yet this skepticism coexists with unbridled optimism from the same firm’s economics team. Joseph Briggs has consistently maintained a thesis, first laid out in 2023 and reiterated throughout the boom, estimating eight trillion dollars in value unlocked by AI productivity gains. While Covello was dismantling the ROI argument, Briggs continued to project that AI would automate twenty-five percent of all work tasks and raise US productivity by nine percent. One side of the bank warns the technology is too expensive to work; the other models a future in which it transforms the entire global economy. Same institution, same access to the data, completely opposite conclusions.

This split-mindedness repeats everywhere. Morgan Stanley’s Lisa Shalett warned we are “closer to the seventh inning than the first” and predicted a potential correction, while her own firm’s thematic investing team estimated AI could generate nearly a trillion dollars in net annual benefits for S&P 500 companies. Ray Dalio evolved from measured optimism to declaring the cycle looks “very similar to 1999”, while Andreessen Horowitz raised another twenty billion dollars for AI startups. Even Sam Altman admitted investors might be “overexcited” while simultaneously outlining plans to spend trillions on data centers.

The numbers themselves tell a story of extremes that defy easy categorization. Hyperscalers have committed six hundred billion dollars in projected capital expenditure for 2026. Nvidia’s net margin reached fifty-three percent. These aren’t the profitless startups of the dot-com era; these are the most profitable companies in human history. Yet, the investment-to-revenue gap provides the strongest argument for bubble dynamics. Tech analyst Ed Zitron calculated that the major tech giants have invested approximately five hundred sixty billion dollars in AI infrastructure to generate roughly thirty-five billion in revenue—a sixteen-to-one ratio. Sequoia Capital identified a six hundred billion dollar annual revenue gap between infrastructure spending and actual monetization.
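If you want to sanity-check that arithmetic yourself, here is a back-of-the-envelope sketch in Python using the rounded figures cited above. The variable names and the “coverage” framing are my own illustration, not Zitron’s or Sequoia’s actual methodology.

```python
# Back-of-the-envelope check of the figures cited above (all in billions of USD).
# These are the rounded numbers quoted in this piece, not fresh estimates.
ai_infra_investment = 560   # Zitron's estimate of Big Tech's AI infrastructure spend to date
ai_revenue = 35             # his estimate of the AI revenue generated so far
projected_capex_2026 = 600  # hyperscalers' projected capital expenditure for 2026

# The sixteen-to-one ratio quoted above.
ratio = ai_infra_investment / ai_revenue
print(f"Investment-to-revenue ratio: {ratio:.0f} to 1")

# How far current AI revenue would have to grow just to cover one year of
# projected capex, before any return on the capital already sunk.
coverage_multiple = projected_capex_2026 / ai_revenue
print(f"Revenue growth needed to cover 2026 capex alone: {coverage_multiple:.1f}x")
```

None of this settles whether the spending eventually pays off; it just makes the scale of the gap concrete.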

What strikes me most about this moment isn’t the disagreement about valuation, or the market concentration where the top five stocks represent thirty percent of the S&P 500. It is the cognitive dissonance within the investors themselves. The same people expressing bubble concerns are the ones allocating record capital. This isn’t hypocrisy exactly. It’s something more interesting.

I think what we are witnessing is genuine uncertainty about something genuinely unprecedented. There are critical differences separating this boom from the dot-com era. First, profitability is real: unlike Cisco in 1999, Nvidia and Microsoft are generating tens of billions of dollars in free cash flow. Second, technology maturity: seventy-eight percent of enterprises reported using AI in 2024, compared with the slow dial-up adoption of the late nineties. And third, Federal Reserve policy: the Fed pricked the dot-com bubble by raising rates, whereas today it has been cutting them, creating conditions that support asset valuations.

But concerning parallels exist. The circular financing patterns—where Nvidia invests in startups that use those funds to buy Nvidia chips—echo the vendor financing schemes that collapsed in 2001. The infrastructure overbuilding feels eerily similar to the miles of “dark fiber” laid in the nineties that remained unused for years.

Howard Marks of Oaktree Capital, a legendary investor known for identifying bubbles, offered perhaps the wisest assessment. He noted that while he doesn’t detect the level of “psychological excess” typical of a mania, there is a distinction that matters: “I think there’s a near one hundred percent probability AI will change the world but there’s much less than one hundred percent probability that investing in any given AI company or sector today will be profitable”.

The internet did transform the global economy exactly as promised, yet the dot-com bubble still destroyed trillions in wealth. The fifty-four percent of institutional investors who call this a bubble while buying anyway aren’t being irrational. They are navigating a game where the fear of missing the next technological revolution exceeds the fear of a crash. They are operating with eyes wide open, acknowledging the contradiction, and proceeding regardless.

The question isn’t whether AI is a bubble. The question is whether being aware that something might be a bubble changes the outcome at all. Based on what I’m seeing, the answer appears to be no. The party continues. Everyone knows it will end. But nobody wants to leave early.

And maybe that’s the most honest thing anyone can say about this moment.