The telecoms industry’s AI journey has evolved rapidly from the generative AI fascination of 2023 to today’s focus on agentic frameworks, but the market still lacks clarity on when to deploy which AI technologies for specific use cases.
Kailem Anderson, CMO of Ciena’s Blue Planet division, offered a candid assessment of where the industry stands and the fundamental challenges that persist beneath the AI hype.
The AI Maturation Curve
“Last year, everyone was wrapped around the axle on generative AI, right? And this year it’s agentic AI,” Anderson observed in an interview with TelcoForge. “I don’t think the market has clarity in terms of when to use what AI for what function.”
This confusion has created what Anderson describes as “buzzword bingo,” where operators are applying inappropriate AI technologies to problems that might be better solved with traditional approaches. His solution is pragmatic:
“All the modes of AI and ML are relevant. What is most important with any AI is making sure you’ve got access to the right data source and that you’ve got clean, stateful data as a foundation.”
Blue Planet’s approach centres on an AI studio development environment that avoids vendor lock-in on specific large language models (LLMs).
“Why should you be locked in on an LLM from a particular vendor?” Anderson asked. “We believe that it should be a development environment that sits over the top of the OSS where you can choose your LLM, you can deploy it to the data source of choice.”
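Blue Planet has not published the studio’s internals, but the lock-in argument comes down to a familiar pattern: code the OSS tooling against a thin, model-agnostic contract and treat each LLM as a swappable adapter. The sketch below is illustrative only; the class names, the stubbed hosted adapter, and the alarm-summarisation helper are assumptions, not Blue Planet or vendor APIs.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Minimal contract the OSS tooling codes against, independent of any model vendor."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class HostedProvider(LLMProvider):
    """Placeholder adapter for a vendor-hosted model reached over HTTP (wiring omitted)."""

    def __init__(self, endpoint: str, model: str):
        self.endpoint, self.model = endpoint, model

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("a real deployment would POST the prompt to self.endpoint")


class EchoProvider(LLMProvider):
    """Trivial stand-in so the sketch runs without any external service."""

    def complete(self, prompt: str) -> str:
        return f"[stub response to a {len(prompt)}-character prompt]"


def summarise_alarms(provider: LLMProvider, alarms: list[str]) -> str:
    """An OSS feature written against the contract only, so the model underneath stays swappable."""
    prompt = "Summarise these network alarms:\n" + "\n".join(alarms)
    return provider.complete(prompt)


if __name__ == "__main__":
    print(summarise_alarms(EchoProvider(), ["LOS on port 3", "BER threshold crossed on link 12"]))
```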
The strategy recognises that different AI approaches suit different problems. Traditional machine learning remains superior for predictive analytics, where systems can be trained on historical failure patterns, while generative AI excels at tasks such as building GUI displays. Meanwhile, agentic AI offers the promise of making heavyweight OSS applications “much, much lighter weight” through orchestrated agent frameworks.
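The “train on historical failure patterns” case is the most conventional of the three, and a short sketch shows the shape of it. Everything here is synthetic: the feature names, the failure labels, and the random-forest choice are stand-ins for whatever telemetry and models an operator would actually use.

```python
# Illustrative only: synthetic telemetry with invented feature names, not an operator dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-device features: temperature, optical power, error counts, time since maintenance.
X = np.column_stack([
    rng.normal(45, 8, n),      # temperature (°C)
    rng.normal(-5, 2, n),      # received optical power (dBm)
    rng.poisson(3, n),         # FEC-corrected errors per poll
    rng.exponential(200, n),   # days since last maintenance
])

# Synthetic ground truth: hot, low-power, error-prone devices fail more often.
risk = 0.02 * (X[:, 0] - 45) - 0.3 * (X[:, 1] + 5) + 0.2 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```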
The Persistent Data Challenge
Perhaps the most striking aspect of Anderson’s assessment is that fundamental data management issues continue to plague AI deployments. Despite decades of discussion about data quality, data lakes, and data visibility, operators still struggle with basics that one imagines should have been resolved long ago.
“It takes them six months to deploy an AI use case. You know, four months of that six months is getting the clean data every time,” Anderson revealed.
The problem isn’t just data quality—it’s data freshness and accessibility.
“They don’t know where the data store is. Then they find out where the data store is, and they say, ‘Okay, I want to move the data from over here into a centralised data lake.’ And as soon as they do that, the data becomes stale.”
This creates a vicious cycle where operators apply AI algorithms to outdated information, undermining the entire value proposition. Anderson’s solution challenges conventional wisdom about data lake architectures:
“Don’t create another data silo. As soon as you do that, you’re chasing the problem, and you’re having to constantly go cleanse the data and re-cleanse it and re-cleanse it into a centralised data lake strategy.”
Instead, he advocates for applying AI “much, much closer to the source” using real-time, stateful data stores that update continuously. This approach becomes critical when dealing with the scale challenges facing modern operators.
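Anderson did not describe Blue Planet’s implementation in detail, but the “stateful data close to the source” idea can be sketched as a collector that maintains a rolling baseline per device and forwards only deviations, so there is no second copy of raw telemetry going stale in a central lake. The class, thresholds, and smoothing factor below are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class DeviceState:
    """Rolling baseline kept where the telemetry is produced."""
    mean: float
    var: float
    samples: int


class NearSourceAnalyzer:
    """Keeps per-device state at the edge and forwards only anomalies, not raw telemetry."""

    def __init__(self, alpha: float = 0.05, threshold: float = 4.0, warmup: int = 10):
        self.alpha = alpha          # smoothing factor for the exponentially weighted baseline
        self.threshold = threshold  # how many baseline deviations count as anomalous
        self.warmup = warmup        # samples needed before scoring begins
        self.state: dict[str, DeviceState] = {}

    def ingest(self, device_id: str, value: float) -> bool:
        s = self.state.get(device_id)
        if s is None:
            # Seed the baseline from the first observation.
            self.state[device_id] = DeviceState(mean=value, var=1.0, samples=1)
            return False
        deviation = abs(value - s.mean) / (s.var ** 0.5 + 1e-9)
        anomalous = s.samples > self.warmup and deviation > self.threshold
        # Update the baseline in place: there is no second, stale copy to re-cleanse.
        s.mean = (1 - self.alpha) * s.mean + self.alpha * value
        s.var = (1 - self.alpha) * s.var + self.alpha * (value - s.mean) ** 2
        s.samples += 1
        return anomalous


if __name__ == "__main__":
    analyzer = NearSourceAnalyzer()
    for latency_ms in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 10, 95]:
        if analyzer.ingest("edge-router-17", float(latency_ms)):
            print("forward an alert upstream; the raw samples stay local")
```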
The Scale Imperative
The scale challenges facing modern networks make centralised data approaches increasingly untenable. Anderson illustrated this with a real-world example from Charter Communications, where Blue Planet supports 600,000 devices across its business services and core network.
“You try pulling data from 600,000 devices into a centralised data store. You will choke the network doing that when you’re polling the environment every few minutes,” Anderson explained. The solution required implementing “regional nodes across POPs that would collect the data from them, store them, and then basically create a data mesh… that could communicate with these nodes.”
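The Charter figures beyond the device count are not public, so the payload size, poll interval, and node count below are assumptions; even so, a back-of-envelope calculation shows why raw polls converging on one collector become a traffic problem while regional summaries do not.

```python
# Back-of-envelope only; the per-poll payload, interval, and node count are assumptions, not Charter figures.
devices = 600_000
poll_interval_s = 120          # "polling the environment every few minutes"
payload_bytes = 50_000         # assumed telemetry payload per device per poll

central_mbps = devices * payload_bytes * 8 / poll_interval_s / 1e6
print(f"raw telemetry converging on one collector: ~{central_mbps:,.0f} Mbps sustained")

regional_nodes = 30            # assumed POP-level collectors
per_node_mbps = central_mbps / regional_nodes
summary_bytes = 2_000          # assumed per-device summary forwarded into the mesh
mesh_mbps = devices * summary_bytes * 8 / poll_interval_s / 1e6
print(f"per regional node: ~{per_node_mbps:,.0f} Mbps; summaries across the mesh: ~{mesh_mbps:,.0f} Mbps")
```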
This distributed approach reflects a broader industry reality. “You can’t centralise all your data in a highly distributed world right now, whether that is for a service assurance use case, whether that is for a 5G RAN use case, whether it’s for a slicing use case that spans across transport, packet core, and RAN.”
The implications extend beyond technical architecture to business strategy.
“The companies that get good at data management and dealing with distributed data are going to be hugely valuable in the future,” Anderson predicted, “because you’re going to be processing data in a distributed way and then have to stitch it together through federation and other concepts.”
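Federation here means asking each regional store to answer locally and merging only the small partial results. A minimal sketch, assuming hypothetical per-POP stores and a “most degraded links” query, looks like this:

```python
import heapq

# Hypothetical per-region stores: each node keeps its own fresh measurements.
region_stores = {
    "pop-east": {"link-a": 0.92, "link-b": 0.31, "link-c": 0.77},
    "pop-west": {"link-d": 0.88, "link-e": 0.12},
    "pop-south": {"link-f": 0.95, "link-g": 0.40},
}


def local_top_degraded(store: dict[str, float], k: int) -> list[tuple[float, str]]:
    """Runs where the data lives; only k small records ever leave the region."""
    return heapq.nlargest(k, ((score, link) for link, score in store.items()))


def federated_top_degraded(k: int = 3) -> list[tuple[float, str]]:
    """Stitch the partial answers together instead of centralising the raw data."""
    partials = []
    for store in region_stores.values():
        partials.extend(local_top_degraded(store, k))
    return heapq.nlargest(k, partials)


print(federated_top_degraded())  # [(0.95, 'link-f'), (0.92, 'link-a'), (0.88, 'link-d')]
```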
Network Slicing: From Aspiration to Reality
Anderson’s perspective on network slicing reveals an industry finally moving from proof of concept to commercial deployment, though monetisation strategies remain unclear.
“Five years ago, I was talking to CTO teams only, who were aspirationally looking to implement that. About 18 months ago, line of business started to do proof of concepts. And now everyone’s talking about the actual rollout of slicing.”
The drivers are competitive rather than strategic.
“They still don’t really have clarity in terms of what their monetisation strategy is, but they’ve figured out that they have to get ahead of it because their competitors are starting to deploy slicing themselves.”
This competitive pressure is forcing operators to “figure out what use case is going to drive monetisation on the fly,” with only a few clear examples emerging. Anderson cited Deutsche Telekom and Telefonica Germany as having identified viable monetisation through video broadcast applications for slicing.
The Complexity Challenge
The move toward distributed architectures and multi-domain services creates unprecedented complexity. Network slicing exemplifies this challenge, as Anderson explains:
“Even within a telco environment, as soon as you’re building a solution that traverses multiple networks, there’s complexity. You’ve got stitching that you need to do between those networks.”
This complexity multiplies when considering edge computing requirements and carrier interconnection scenarios.
“Being able to orchestrate into carrier-type use cases where you’re orchestrating capacity potentially on someone else’s edge adds more complexity,” Anderson noted.
The industry’s response has been to embrace MEF standards for inter-carrier communications, but Anderson acknowledges the inherent challenges:
“A lot more complexity and traversing multiple internal networks, and I think that complexity is only going to increase as you start delivering more and more low-latency services on the edge of the network.”
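None of the domain controllers or endpoint names below are real; the sketch only illustrates why a slice that traverses RAN, transport, and packet core needs an explicit stitching check between each pair of segments, which is exactly where the complexity Anderson describes accumulates.

```python
from dataclasses import dataclass


@dataclass
class SliceSegment:
    domain: str
    ingress: str
    egress: str
    latency_budget_ms: float


# Hypothetical domain controllers; in practice each would be a separate API call.
def provision_segment(domain: str, latency_budget_ms: float) -> SliceSegment:
    endpoints = {
        "ran": ("gnb-cluster-1", "ran-edge-gw"),
        "transport": ("ran-edge-gw", "core-pe-router"),
        "packet-core": ("core-pe-router", "upf-site-2"),
    }
    ingress, egress = endpoints[domain]
    return SliceSegment(domain, ingress, egress, latency_budget_ms)


def build_end_to_end_slice(total_latency_ms: float) -> list[SliceSegment]:
    # Naive even split of the latency budget across domains; a real orchestrator would optimise this.
    domains = ["ran", "transport", "packet-core"]
    segments = [provision_segment(d, total_latency_ms / len(domains)) for d in domains]
    # Stitching: each segment's egress must be the next segment's ingress, or the slice breaks.
    for left, right in zip(segments, segments[1:]):
        assert left.egress == right.ingress, f"stitch failed between {left.domain} and {right.domain}"
    return segments


for seg in build_end_to_end_slice(total_latency_ms=20.0):
    print(seg)
```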
Looking Ahead
Anderson’s assessment suggests an industry at an inflection point, where the hype around AI is giving way to practical implementation challenges that require fundamental changes in how operators approach network architecture and data management.
The companies that succeed will be those that can navigate the complexity of distributed systems while maintaining the agility needed for rapid AI deployment.
“Education is paramount to making sure that the telcos aren’t making the wrong decisions,” Anderson concluded.
“There’s a bit of a journey to go on that, I believe.”