AI Datacentres: The Acceleration Crisis Reshaping Telecoms Infrastructure


The artificial intelligence revolution is fundamentally altering the telecommunications landscape at a pace that dwarfs even the dot-com boom. The telecoms industry has weathered transformative periods before, from the explosive growth of internet usage in the 1990s to the mobile data surge of the 2000s, but the current expansion of AI-driven datacentres represents an acceleration crisis that challenges conventional infrastructure planning.

Demand for AI-ready datacentre capacity could rise at 33% per year between 2023 and 2030, a growth trajectory that makes previous technology booms appear measured by comparison. 

Unprecedented Velocity 

During the dot-com era, datacentre capacity grew steadily but predictably. According to Datareportal, the internet user base expanded from 361 million in 2000 to 1.8 billion by 2010—substantial growth, but spread across a decade. In contrast, AI datacentre capacity faces a compressed timeline. 

McKinsey analysis suggests that demand for AI-ready datacentre capacity will rise at 33% annually between 2023 and 2030, while the AI datacentre market is projected to grow at 28.3% CAGR through 2030, significantly outpacing traditional datacentres. This acceleration means 33% of global datacentre capacity will be dedicated to AI by 2025, reaching 70% by 2030. 
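
As a rough sense-check on what a 33% compound annual growth rate implies over that window, here is a minimal sketch in Python; the 2023 baseline is normalised to 1 purely for illustration and is not a sourced figure.

```python
# Back-of-envelope compounding of the 33% annual growth figure cited above.
# The 2023 baseline is normalised to 1.0 for illustration, not a sourced number.

def compound(base: float, rate: float, years: int) -> float:
    """Value of `base` after `years` of growth at `rate` per year."""
    return base * (1 + rate) ** years

baseline_2023 = 1.0      # normalised AI-ready capacity in 2023 (illustrative)
growth_rate = 0.33       # 33% per year, per the McKinsey figure above
years = 2030 - 2023      # seven-year window

multiple = compound(baseline_2023, growth_rate, years)
print(f"Capacity multiple by 2030: {multiple:.1f}x")  # ~7.4x the 2023 level
```

In other words, capacity demand growing at 33% a year for seven years ends up at roughly seven times its starting point, which is what makes the 2023-2030 window so unusual.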

To contextualise this velocity: the Datacentre Physical Infrastructure market grew 17% year-on-year in Q1 2025 alone. Goldman Sachs forecasts global power demand from datacentres will increase 50% by 2027 and by as much as 165% by the end of the decade.  

Such concentrated growth within a seven-year window has no precedent in telecoms infrastructure history. 

Meanwhile, the proportion of electricity consumed by datacentres has been growing fast in the sector's core geographies. Stanford estimates that in the US it rose from 1.9% of the country's electricity usage in 2018 to 2.4% in 2022, and forecasts between 6.7% and 12% by 2028.

That’s a huge additional electricity demand for grids to bear.

The scale becomes staggering when examined at facility level. Modern hyperscale datacentres demand 100 MW or more, equivalent to the annual charging demand of 350,000 to 400,000 electric vehicles. Goldman Sachs estimates Europe’s datacentre pipeline at 170 GW, representing one-third of the region’s entire power consumption.
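
To see how that EV comparison stacks up, a minimal back-of-envelope sketch follows; the per-vehicle consumption figure is an assumption chosen for illustration, not a number from the sources above.

```python
# Rough annual energy draw of a 100 MW hyperscale facility versus EV charging.
# The per-EV figure of 2,200 kWh/year is an assumed average, not a sourced value.

FACILITY_MW = 100
HOURS_PER_YEAR = 8_760
EV_KWH_PER_YEAR = 2_200   # assumed average annual charging demand per EV

facility_mwh = FACILITY_MW * HOURS_PER_YEAR              # ~876,000 MWh per year
ev_equivalents = facility_mwh * 1_000 / EV_KWH_PER_YEAR  # convert MWh to kWh

print(f"Annual facility consumption: {facility_mwh:,.0f} MWh")
print(f"Roughly equivalent EVs: {ev_equivalents:,.0f}")  # ~400,000 vehicles
```

Run continuously, a single 100 MW site draws on the order of 876 GWh a year, which is how one facility ends up in the same bracket as several hundred thousand electric cars.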

To those who scoff at this scale and speed of growth as unfeasible: there are signs that hyperscalers are willing to accept greater risk in order to deliver rapid datacentre rollouts.

Regional Variations and Geopolitical Implications 

That said, the geographic distribution of AI datacentre development reveals strategic imbalances that carry significant geopolitical implications.  

China’s aggressive datacentre expansion, coupled with the US focus on maintaining technological sovereignty, is creating regional infrastructure clusters that may fragment global connectivity patterns.  

While the US and EU seem to be focussing on huge expansions in energy capacity, China’s AI developers such as DeepSeek have been working on much less energy-intensive approaches to AI training and operation.

The European Union has some difficult problems to conquer. A laggard compared with the US and China, it is pursuing initiatives to develop sovereign AI capabilities. At the same time, as outlined above, it already faces huge demands for new datacentres and new generation capacity.

India’s emerging role as a data processing hub presents both opportunity and challenge for global telecoms operators.  

The subcontinent’s relatively low energy costs and growing technical workforce make it attractive, but again, it’s not yet a major hub for AI infrastructure investment. NTT Data and others are investing billions to triple India’s capacity, but with potentially 3 GW on the cards by 2030, it’s dwarfed even by the EU.

Network Infrastructure Under Pressure 

AI datacentres impose unique demands on telecoms infrastructure that extend far beyond traditional datacentre connectivity requirements. The massively parallel processing behind AI workloads generates unprecedented east-west traffic patterns within datacentres.

At the same time, AI training operations often span multiple facilities, requiring dedicated high-capacity links with guaranteed latency characteristics. This creates opportunities for operators to develop specialised AI interconnection services, but requires significant investment in dedicated fibre infrastructure.

AI inferencing is likely to create new network topology challenges. With GPUs costing thousands of dollars apiece, running AI from end-user devices at any scale will need edge nodes capable of handling that inferencing.

However, unlike “traditional” edge computing, AI edge nodes require real-time synchronisation with centralised training facilities, creating bidirectional traffic patterns that are quite different from what existing infrastructure is dimensioned for.   

Jumping on this bandwagon means architecting networks capable of supporting both high-bandwidth data ingestion and low-latency inference delivery across geographically distributed edge nodes.
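
As an illustration of why that bidirectional dimensioning matters, the sketch below estimates the sustained bandwidth needed to push periodic model updates from a central training site out to distributed edge nodes. Every input (model size, update cadence, node count) is an assumption chosen for the example, not a measured or sourced value.

```python
# Illustrative sizing of centre-to-edge model-update traffic.
# All inputs are assumptions for the example, not measured values.

MODEL_SIZE_GB = 20        # assumed size of the model artefact pushed to each edge node
UPDATES_PER_DAY = 4       # assumed refresh cadence from the central training facility
EDGE_NODES = 500          # assumed number of edge sites served

SECONDS_PER_DAY = 86_400
GBIT_PER_GBYTE = 8

daily_gb = MODEL_SIZE_GB * UPDATES_PER_DAY * EDGE_NODES
sustained_gbps = daily_gb * GBIT_PER_GBYTE / SECONDS_PER_DAY

print(f"Daily update volume: {daily_gb:,} GB")
print(f"Average sustained load: {sustained_gbps:.1f} Gbit/s")  # ~3.7 Gbit/s before bursts
```

Even these modest assumptions imply several gigabits per second of steady outbound load on top of the inbound data-ingestion and inference traffic, before any allowance for bursts or retransmission.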

The Power to Change  

What’s more, electricity already accounts for some 60% of service providers’ datacentre costs. As AI is far more energy-demanding, the economics may push providers towards a better understanding of energy management and new relationships with power suppliers.

Plus, unlike traditional datacentre operations with predictable load patterns, AI training cycles can create sudden demand spikes that stress both power infrastructure and cooling systems. These variations require more sophisticated power management systems and may necessitate energy storage capabilities to smooth demand peaks. 
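
As a minimal sketch of how such smoothing might be sized, the example below works out the storage needed to absorb a hypothetical training-driven spike; the baseload, peak and duration figures are illustrative assumptions, not measurements.

```python
# Illustrative sizing of battery storage to smooth a short AI-training power spike.
# Baseload, peak and duration are assumed values for the example only.

BASELOAD_MW = 80       # assumed steady facility draw
SPIKE_MW = 120         # assumed peak draw during a training burst
SPIKE_MINUTES = 30     # assumed duration of the burst

excess_mw = SPIKE_MW - BASELOAD_MW
storage_mwh = excess_mw * SPIKE_MINUTES / 60   # energy needed to shave the peak

print(f"Peak excess over baseload: {excess_mw} MW")
print(f"Storage needed to absorb it: {storage_mwh:.0f} MWh")  # 20 MWh in this example
```

The point is less the exact figure than the shape of the problem: short, deep spikes translate into storage or demand-response capacity that traditional datacentre planning rarely had to budget for.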

So what? 

The AI datacentre boom presents both opportunity and risk for telecoms operators.  

Traditional telecoms business models face disruption as AI workloads create new service categories.  

Some operators, for example, may evolve from pure connectivity providers to integrated infrastructure partners, offering bundled solutions that address power, cooling and networking requirements.  

Others may leverage their existing, relatively distributed network architectures and local exchanges to host mini-datacentres as a service. However, for providers that have been divesting their physical infrastructure, this would be a difficult pivot, to say the least.

The geographic concentration of AI capabilities also creates new competitive dynamics. Operators serving regions with limited AI infrastructure may find themselves relegated to being connectivity providers, acting as the pipes for AI hosted elsewhere.

The convergence of AI computing demands with existing telecoms infrastructure limitations is creating an interesting vector of potential transformation, either to support centralised datacentres or to enable AI-powered devices. Telcos that view it merely as increased demand for existing services risk marginalisation in an increasingly AI-centric telecommunications landscape. 

The AI datacentre revolution represents more than a capacity upgrade—it demands a fundamental rethinking of how telecoms infrastructure supports computing workloads. 
