AI reality check – Is the bubble in trouble?

Analyst’s note. All images in this piece were generated with an AI image app.

Unless you have been living under a rock, or on some isolated mountain in the Andes, you know that artificial intelligence is the most talked-about technology of the moment. With the hyperbole accompanying it, it is being packaged as the solution to nearly everything across just about every ecosystem – from infrastructure to space and all points in between.

For example, at the recent Nvidia GTC 2025, Nokia and SoftBank, among others, went all out over AI-RAN. Supposedly, introducing AI capability into radio access nodes benefits the RAN, and leaders of these companies seem to think that weaving AI into the core will create “revolutionary” next-generation, AI-native wireless networks. This is just one example; there are many more out there.

A report titled ‘The GenAI Divide: State of AI in Business 2025’ was recently published by a top MIT research hub. “Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return,” opens the report’s executive summary.

The measure of return is defined as P&L impact, indicating that while AI tools may enhance individual productivity, this enhancement does not necessarily translate to improved bottom-line results. Moreover, many enterprise-grade AI systems are underperforming because of inflexible workflows, insufficient contextual learning, and a disconnect with daily operations.

Disappointing GPT-5

When ChatGPT was launched, it too attracted a lot of interest. Similarly with agentic AI.

But as the noise about lackluster performance grew louder, talk of the AI hype bubble bursting was further fueled. As well, the underwhelming launch of the latest major version of OpenAI’s LLM has prompted a wave of commentary about diminishing returns on AI investment in general. So far, the bubble is still standing, but it has taken some serious dings. There are more noticeable signs. MIT’s “State of AI in Business 2025” report contains some rather startling statistics: although 40% of companies claimed to have implemented AI tools, just 5% could successfully incorporate them into workflows at scale. Pilot purgatory is where most ventures end.

And Deutsche Bank warned that spending cannot continue to increase exponentially as it has. Its analysts caution that if expenditure were to slow down without fulfilling tech’s extravagant promises, it might expose an economy in trouble, characterized by inflation, falling household incomes, and unemployment, all concealed by an excessive belief in AI’s potential. In mid-October, Asian equities closed out a mixed week in which huge new AI investments played off against the US government shutdown and concerns about a tech bubble.

Even The New York Times is writing about this, in an opinion piece published October 11, 2025.

Similarly, investors are now shorting AI stocks on the belief that generative AI’s big enterprise moment is already sputtering out, and headlines are warning that the AI bubble may be ready to burst.

Such warnings are popping up more frequently. And this is not a recent phenomenon; it has been gaining traction for some time.

Sure, right now it feels like AI is everywhere. It has seen a meteoric rise to the top of the hype rollercoaster. However, these and other disappointments signal that it may now be descending toward the nadir of disillusionment.

Earlier warnings

Several years ago, concerns about AGI led more than 1,000 technology leaders and AI researchers to sign an open letter requesting a temporary halt on the development of new AI models. The proposed pause did not occur, nor has AGI been achieved. But this is a classic example of the boom/bust hype cycle that seems to be the norm for much of technology.

We all know AI has been touted as able to do just about anything. But, pushing aside the hype that it is the solution to every problem, the bottom-line objective of AI is to boost productivity and cut costs, whatever that looks like for any given organization. And beyond increasing efficiency and reducing costs, AI should also be able to deliver additional sources of income and open new verticals.

To be fair, the first part, boosting productivity and reducing costs, has met with some early success; limited, but success nonetheless. However, the second part, delivering new sources of income and verticals, is stuck. With few exceptions, AI has yet to deliver even basic, real-world vertical use cases.

Is AI getting away from us?

The chorus of doubt from various segments is growing louder. All along, some have said that AI is not fully understood and that we are not fully aware of how the AI disruption will eventually unfold. Now, cautionary voices from industry players and analysts are growing louder amid escalating apprehension about its rapid and extensive deployment.

One of the chief concerns is that we are losing control. If we do not apply some brakes, a Terminator-like scenario is possible, even through the veil of denial. That concern arises from the possibility that, if we do not rein it in, AI could take off in its own direction, creating its own algorithms and developing itself. The worry is that such self-derived code cannot be fully comprehended even by the brightest minds, and that opacity can lead to a loss of control. Already, some self-developed code is hard to unpack, making it difficult to understand why a system arrived at a particular decision.

In fact, Sam Altman has been warning about this for a while now, and for good reason. The overarching issue is the lack of transparency and explainability. What that really means is that AI models, especially deep learning models, can be so complex that they are difficult to understand, even for experts.

The current era of pervasive artificial intelligence (AI) technologies, particularly generative AI based on large language models (LLMs) like ChatGPT, Copilot, and Gemini, presents numerous claims to evaluate.

Along with that come claims that AI will change the world. That is certainly true, and will ratchet up once quantum computing goes mainstream. But that is decades out.

And Geoffrey Hinton, AI guru, computer scientist and Nobel laureate, has also been somewhat vocal about this. He has noted that there is a 10–20% chance AI will lead to human extinction within the next three decades. However, no one is really offering any scenarios of how that might come about.

Other prominent experts, public leaders, and scholars have cautioned that “mitigating the risk of extinction from AI” must be a global priority. Surveys of machine learning researchers put the median probability of extinction-level events occurring this century at approximately 5%. The primary concern is that AI will break out of its bounds if we do not keep it well contained.

But respectable voices vary. For example, some say AI is simply engaged in sophisticated pattern-matching that simulates intelligence through prediction. True, of course, and that prediction becomes more and more accurate as it gobbles up data. But it cannot think intelligently like humans, and never will, without some sort of human-machine mind meld (as portrayed in several Star Trek episodes and movies). This is a pretty solid argument. But others disagree.
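
To make the pattern-matching point concrete, here is a minimal, purely illustrative sketch of next-token prediction using a toy bigram model. The corpus and function names are invented for illustration; real LLMs use transformer networks with billions of parameters, but the core step of predicting the next token from observed statistics is conceptually similar.

```python
from collections import Counter, defaultdict

# Toy corpus invented for illustration; a real model trains on vastly more text.
corpus = "the network drops packets the network drops frames the network retries".split()

# Count bigrams: how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the training data.
    This is pure pattern-matching: statistics, not understanding."""
    followers = bigram_counts.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("network"))  # -> 'drops', the word seen most often after 'network'
```

More data simply sharpens those statistics, which is exactly the sense in which prediction gets more accurate as the model gobbles up data.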

Another position is that GenAI may eventually possess general-purpose capabilities that can cause errors at scale. This implies that the more capable AI becomes, the more it can erode human control over it.

However, the one worry that is very real is the militarization of AI, where all guardrails are purposely removed to make it as powerful as possible. This has been done over and over with technology and has led to near disasters in the past. This is a genuine threat, perhaps the only real threat.

With AI natively integrated into weapons systems, human control would be diminished. Think of the film “WarGames”, where this exact scenario nearly led to a global nuclear war.

Integrating AI into future weapons would reduce human control and lead to an arms race, perhaps even risk an AI-driven world war.

Yet others believe that a superintelligent AI (ASI) capable of wreaking global devastation won’t be coming anytime soon. Numerous specialists contend that today’s AI, while remarkable in its own right, is fundamentally constrained. For instance, large language models exhibit an unreliable capacity for logical, factual, and conceptual comprehension. In these essential aspects, AI is incapable of comprehension and, hence, cannot function as humans do.

However, even the naysayers have some doubts. A future technological breakthrough enabling AI to overcome its present limitations cannot be ruled out. Nor can it be assumed. Overemphasizing speculative threats of ASI risks distracting us from AI’s actual harms today, such as biased automated decision-making. It could also deflect from genuine existential dangers such as climate change – a danger to which energy-hungry AI may itself contribute.

AI disillusionment

AI is not the solution many of its proponents claim it to be. The principal impediment to growth is neither infrastructure, legislation, nor talent; it is learning. The report indicates, “The bulk of GenAI systems are deficient in retaining feedback, adapting to contextual variations, or improving their performance over time.” In short, although LLMs are advanced and efficient, unlike humans they are static models that lack the ability to learn from or adjust to their surroundings.

Developments in wireless technology and cellphones have facilitated the emergence of a slew of on-demand healthcare services, supported by tracking applications and search platforms. This represents a new model for delivering any number of services through remote interactions that are accessible at all times and from any location.

Let’s talk security

Nonetheless, technological innovation and implementation in many of these sectors have opened some Pandora’s boxes. As technology progresses, cybersecurity risk increases proportionally.

Cybersecurity involves safeguarding electronic information and digital assets from unlawful access, utilization, and disclosure by hackers, cybercriminals, and malevolent entities. While it has come a long way, comprehensive cybersecurity remains an elusive target, considering the daily database breaches and other attacks.

Cybersecurity risks remain a pervasive concern across all sectors, as cybercriminals consistently devise and use novel tactics. As corporations design and manage complex, interconnected systems, new weak touchpoints emerge that hackers can exploit. AI will certainly enhance the existing bucket of attacks by bad actors: man-in-the-middle, data and identity theft, social engineering, phishing emails and impersonation, denial of service, botnets, deepfake technology and others.

AI is an excellent tool for enhancing security. But the early excitement that it could get ahead of cybersecurity threats far more effectively than other methods has diminished, primarily because of its dual-use character. While cybersecurity teams use AI to detect and mitigate risks, adversaries also employ it to enhance and develop their own nefarious tactics: AI-generated phishing emails, automated malware, social engineering and identity theft. Such AI-enabled threats are harder to detect because they evolve faster than rule-based systems can respond.
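
A tiny, hypothetical sketch of that dynamic follows; the messages and keyword list are invented. A static rule-based filter catches the phishing phrasing it was written for, but a reworded version of the same lure slips straight through, and the rules stay frozen until a human updates them.

```python
# Hard-coded rules of a hypothetical legacy filter.
SUSPICIOUS_PHRASES = {"verify your account", "urgent", "click here"}

def rule_based_flag(message: str) -> bool:
    """Flag a message only if it contains one of the fixed phrases."""
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

original = "URGENT: click here to verify your account"
rephrased = "Kindly confirm your credentials via the secure portal today"

print(rule_based_flag(original))   # True  - matches the static rules
print(rule_based_flag(rephrased))  # False - the reworded lure evades them
```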

As well, there are peripheral factors. Overreliance on AI reduces human vigilance. AI is only as good as the data it learns from; the old “garbage in, garbage out” is extremely pertinent, which means vetting training data is critical. And the high cost of implementation and maintenance is self-explanatory.

AI can also introduce new attack surfaces: vulnerabilities exist in the APIs around it, models can be poisoned, and adversarial inputs can be crafted to fool them. Furthermore, there are ethical and legal concerns, mostly around privacy and security, a lack of standardization and regulation, and difficulty in testing and validation. This means systems are being deployed prematurely, without being fully validated.
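
To illustrate the poisoning concern at toy scale, here is a minimal sketch; the numbers and the nearest-centroid “classifier” are invented for illustration, not taken from any production system. A handful of mislabeled training samples planted by an attacker drags the benign profile toward malicious behaviour and flips the verdict on a borderline sample.

```python
def centroid(values):
    """Average of a list of one-dimensional feature values."""
    return sum(values) / len(values)

def classify(x, benign, malicious):
    """Label x by whichever class centroid it sits closer to."""
    if abs(x - centroid(malicious)) < abs(x - centroid(benign)):
        return "malicious"
    return "benign"

benign_train = [1.0, 1.2, 0.9, 1.1]       # e.g. normal request rates
malicious_train = [9.0, 9.5, 10.0, 8.8]   # e.g. flood-level request rates

sample = 7.0  # a borderline-suspicious rate
print(classify(sample, benign_train, malicious_train))  # -> malicious

# Attacker plants mislabeled "benign" samples near the malicious region,
# dragging the benign centroid upward.
poisoned_benign = benign_train + [8.0, 8.5, 9.0, 8.7, 8.3, 8.6]
print(classify(sample, poisoned_benign, malicious_train))  # -> benign
```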

Unpacking it all

What I have written so far is just the tip of the iceberg. Drilling down, we find many more shortcomings and reasons for the bubble and other concerns.

There is no doubt the current AI era is proving transformative to some extent. But just how revolutionary it is, or will be, is still unknown, so the cautions from Sam Altman and others should be given increasing weight.

The question now arising is: has AI hit the knee of the curve? What if we are at a plateau where AI evolves horizontally, or even stalls? There are signs that spending on AI is slowing down. There is also talk that the imminent arrival of human-like AGI, or even superintelligence, is a lot farther off than has been hyped. This has many questioning whether continuing to throw resources at the current LLM paradigm will ever result in AGI. Failure to rein in the hype will simply add to the bubble’s instability.

Other complicating factors include the lack of standardization, resources and trained people; failures of attempted deployments; the lack of AI policy at organizations; unpredictable ROI; and hardware and implementation costs. As well, many solutions are still not fully vetted. In the rush to get AI systems out, vendors have hopscotched over deep vetting of code and cybersecurity.

There is a lot going on in this space beyond what I have discussed here. The early hubris around AI resembles the same old song that came along with 5G, the IoT, 6G and so many other platforms and technologies.

The fact is that AI will be all that it can be eventually. We just do not know how to corral all of its mysteries. And if we do not rein it in, understand exactly how it is developing, and keep the leash on, all kinds of scenarios can play out, including the runaway case.
