A guest post by Cerillion
Telcos are investing in AI faster than they are establishing the governance and assurance needed to trust it at scale.
AI is moving into core operational workflows, beginning to influence billing, service configuration and revenue recognition, with a growing ambition to automate decisions end-to-end. For Communication Service Providers (CSPs), the question is no longer whether AI can improve efficiency, but whether its decisions can be governed, explained and trusted as autonomy increases.
According to a McKinsey report from last year, nearly 8 in 10 companies are using generative AI, yet roughly the same percentage report no material contribution to earnings. The gap between adoption and impact is clear, but unsurprising: companies will not translate AI into value until governance, integration and operational trust are in place, and in most organisations they are not yet.
McKinsey’s discussion of an Agentic AI Mesh framework gives a partial explanation of this. It describes how multiple AI agents operate as a coordinated network across enterprise systems, rather than as isolated tools. As these agents become autonomous actors, risk emerges from coordination failures between models, workflows, policies and accountability frameworks, leading to opaque or conflicting decisions. In regulated industries like telecoms, this coordination challenge is business-critical.
The question leaders now face is not whether to adopt agentic AI, but how to govern it safely at scale.
When AI Acts, Security Becomes a Business Problem
AI is increasingly being introduced into operational workflows, from network optimisation and service assurance to early applications in billing automation and customer lifecycle management. Most of these deployments remain supervised or limited in scope, because the decisions they inform can be commercially and legally binding.
An AI model that misprices a service or misapplies a discount introduces revenue leakage, audit exposure and regulatory risk at machine speed. As multiple AI agents begin interacting across interconnected BSS/OSS platforms, even small errors can cascade rapidly, often without clear visibility into cause and effect.
The McKinsey research notes that while horizontal AI use cases (like copilots and chatbots) are scaling, 90% of high-value, vertical AI use cases remain in pilot. For CSPs, this is reflected in how little AI has been deployed in mission-critical systems – the BSS and OSS, where decisions have financial, regulatory and customer impact. Meanwhile, much of what is discussed as “AI” for networks is in practice closer to classical automation and machine learning than to generative AI.
Why Traditional Security Models Fall Short
Historically, telecom security focused on infrastructure resilience, network integrity and access control. While still essential, these measures cannot protect decision integrity or process accountability, which are now critical in BSS and OSS.
Agentic AI exposes CSPs to new risks: decisions made autonomously, continuously adapting models, and actions that impact revenue and compliance. A lack of embedded governance creates significant operational and regulatory liability for CSPs, as it impairs their ability to justify decision-making.
Embedding Trust into Automation
The McKinsey report cited above frames the security issue with AI agents as a coordination challenge, not a technical flaw in any individual agent. For CSPs, this coordination challenge is most acute in BSS/OSS, where AI decisions can directly affect revenue, customer outcomes and compliance. Trust must be engineered into automation itself.
Cerillion has been looking at just this issue recently. From its perspective, responsible AI adoption is an operational challenge, not a theoretical one. It requires:
- Explainable AI across billing, credit, and customer interactions;
- Policy-driven automation, ensuring AI operates within commercial and regulatory boundaries (see the sketch after this list);
- Continuous observability, so every AI action can be audited; and
- Clear accountability, even as autonomy increases.
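To make this concrete, here is a minimal sketch of how policy-driven automation and continuous observability might combine around a single billing decision. It is illustrative only: the names (ProposedAction, PolicyEngine, AuditLog), the discount-cap policy and the workflow are assumptions for the example, not Cerillion’s implementation.

```python
# A minimal sketch of policy-driven automation with continuous observability,
# assuming a hypothetical BSS workflow. All names here are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ProposedAction:
    """An action suggested by an AI agent, pending governance checks."""
    agent_id: str
    account_id: str
    action_type: str   # e.g. "apply_discount"
    parameters: dict
    rationale: str     # the agent's explanation, kept for auditability

@dataclass
class PolicyEngine:
    """Commercial and regulatory boundaries the AI must operate within."""
    max_discount_pct: float = 15.0
    allowed_actions: tuple = ("apply_discount", "adjust_tariff")

    def evaluate(self, action: ProposedAction) -> tuple[bool, str]:
        if action.action_type not in self.allowed_actions:
            return False, f"action '{action.action_type}' outside policy"
        if action.action_type == "apply_discount":
            pct = action.parameters.get("discount_pct", 0.0)
            if pct > self.max_discount_pct:
                return False, f"discount {pct}% exceeds cap {self.max_discount_pct}%"
        return True, "within policy"

@dataclass
class AuditLog:
    """Append-only record of every AI action, approved or blocked."""
    entries: list = field(default_factory=list)

    def record(self, action: ProposedAction, approved: bool, reason: str):
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "approved": approved,
            "reason": reason,
            **asdict(action),
        })

def execute(action: ProposedAction, policy: PolicyEngine, audit: AuditLog) -> bool:
    """Gate every agent decision through policy, and log it either way."""
    approved, reason = policy.evaluate(action)
    audit.record(action, approved, reason)
    # In a real system, approved actions would be handed off to billing here.
    return approved

if __name__ == "__main__":
    policy, audit = PolicyEngine(), AuditLog()
    # An in-policy discount is executed; an over-cap one is blocked, but
    # both leave an audit trail with the agent's rationale attached.
    execute(ProposedAction("agent-billing-01", "acct-42", "apply_discount",
                           {"discount_pct": 10.0}, "retention offer"), policy, audit)
    execute(ProposedAction("agent-billing-01", "acct-42", "apply_discount",
                           {"discount_pct": 40.0}, "churn risk flagged"), policy, audit)
    print(json.dumps(audit.entries, indent=2))
```

Every decision, approved or blocked, lands in the same append-only log with the agent’s rationale attached, which is what makes the action auditable and explainable after the fact.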
With these requirements in place, AI does not merely optimise processes; it can enforce contracts, apply tariffs and resolve disputes. Governance is the tool that turns an agent of chaos into an enforcer of rules. In that respect, the AI environment echoes the human one; after all, governance and accountability are what separate a street gang from a police force.
Composable AI: A Foundation for Control
As CSPs look to scale AI beyond isolated use cases, a multi-model future is emerging. Operators are beginning to combine vendor-specific tools, domain-specific models and internal engines, and monolithic platforms risk creating blind spots as that mix grows.
For Cerillion, composable AI integration is the solution. This is, fundamentally, a ‘Lego brick’ approach to AI: it limits the authority of individual agents and allows models to be swapped or evolved without destabilising critical processes.
Meanwhile, the overall architecture into which individual models and agents are integrated should set out the governance framework, so that governance remains consistent across diverse AI models and auditability is maintained even as autonomy grows.
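As an illustration of that architectural point, here is a minimal sketch of a composable integration, assuming a hypothetical DecisionModel interface; VendorCreditModel and InHouseCreditModel are invented stand-ins for interchangeable bricks, not real products.

```python
# A minimal sketch of composable AI integration: governance lives in the
# architecture, not in any one model. All names are hypothetical.
from typing import Protocol

class DecisionModel(Protocol):
    """The 'Lego brick' contract every model must satisfy to be plugged in."""
    name: str
    def propose(self, context: dict) -> dict: ...

class VendorCreditModel:
    name = "vendor-credit-v2"
    def propose(self, context: dict) -> dict:
        # Stand-in for a call to a vendor-hosted model.
        return {"action": "set_credit_limit", "value": 500}

class InHouseCreditModel:
    name = "inhouse-credit-v1"
    def propose(self, context: dict) -> dict:
        # Stand-in for an internally trained model.
        return {"action": "set_credit_limit", "value": 800}

class GovernedOrchestrator:
    """One governance layer applied uniformly, whichever model is plugged in."""
    def __init__(self, model: DecisionModel, credit_cap: int = 600):
        self.model = model
        self.credit_cap = credit_cap
        self.audit: list[dict] = []

    def swap_model(self, model: DecisionModel) -> None:
        # Models can be replaced without touching governance or audit logic.
        self.model = model

    def decide(self, context: dict) -> dict:
        proposal = self.model.propose(context)
        approved = proposal.get("value", 0) <= self.credit_cap
        self.audit.append({"model": self.model.name,
                           "proposal": proposal, "approved": approved})
        return proposal if approved else {"action": "escalate_to_human"}

if __name__ == "__main__":
    orchestrator = GovernedOrchestrator(VendorCreditModel())
    print(orchestrator.decide({"account_id": "acct-42"}))  # within cap: approved
    orchestrator.swap_model(InHouseCreditModel())
    print(orchestrator.decide({"account_id": "acct-42"}))  # over cap: escalated
    print(orchestrator.audit)
```

The design choice this illustrates: the controls (the credit cap and the audit trail) live in the orchestrator, so swapping one model for another changes the proposal logic but never the governance around it.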
A composable approach enables CSPs to scale innovation without surrendering control: they can adopt multiple models while maintaining security, auditability and operational oversight, rather than locking into a single provider or a brittle architecture.
Closing the Gap: Security as the Enabler of AI Scale
In an industry where ‘move fast and break things’ simply doesn’t work, governance is not a constraint; it is an enabler of sustainable AI adoption. AI maturity is measured not by the number of models deployed, but by the ability to govern autonomous decisions across complex, interconnected systems. Trust, transparency and accountability are now as important as efficiency and performance.
For CSPs, this requires a fundamental shift: from protecting systems to embedding governance directly into AI-driven operations.
The future belongs to organisations that can coordinate AI agents at scale, maintaining operational trust while unlocking the benefits of automation. In telecoms, automation may be inevitable, but trusted automation is a strategic choice.
