AI-driven spectrum management is getting real. The network API economy is maturing. But nobody is connecting the two. We’re missing a trick.
Joy Taylor, Chief Product Officer, TelcoForge
Spend a week at any major telecom conference and you will hear two sets of conversations that never seem to meet. The first is about AI in the radio access network: Transformer models replacing classical DSP, reinforcement learning agents managing interference across cell sites, GPU-based baseband processing that can be upgraded like software. The second is about network APIs: CAMARA meta-releases, GSMA Open Gateway certifications, the promise that enterprises will finally be able to programme the network like they programme the cloud. Both conversations are exciting and technically grounded. But they are happening in separate rooms, to separate audiences, with separate commercial logics. The gap between them is where a significant opportunity is being missed.
I have spent most of my time thinking about what enterprise customers actually need from telecom networks, and how to build products that deliver it. From that vantage point, the disconnect is stark. The AI-RAN community is producing genuinely impressive capabilities in spectrum efficiency and dynamic resource allocation, but it frames everything in terms of operator efficiency or cost savings. Meanwhile the API community is building the scaffolding for programmable networks, but with a focus on things such as identity verification, QoS profiles, and location services. Neither side is asking what seems to me the obvious question: when do AI-driven spectrum capabilities become something an enterprise customer can consume?
What is genuinely new in AI-RAN, and what is rebranding
A large portion of what now travels under the AI-RAN banner is work that the industry has been doing for over a decade under different names. Self-Organising Networks automated neighbour relations and handover optimisation in the LTE era. Rule-based SON evolved into ML-assisted SON. The O-RAN Alliance’s RAN Intelligent Controller introduced xApps and rApps that use machine learning for near-real-time and non-real-time RIC functions. Relabelling these activities as “AI” may be good marketing, but it does not represent a paradigm shift, and telecom professionals should be appropriately sceptical when vendors claim otherwise.
That said, there are developments in the past twelve months that go beyond relabelling. SoftBank’s work with Transformer-based signal processing is the most concrete. In August 2025, they demonstrated a Transformer architecture for uplink channel interpolation that delivered approximately 30% throughput improvement over conventional methods in a live, 3GPP-compliant over-the-air environment. The processing latency was 26% faster than the CNN-based approach they had previously tested.
While this ‘tech talk’ might send Sales or Marketing people into a coma, it matters because real-time 5G signal processing requires sub-millisecond execution. This is not a minor refinement of an existing optimisation algorithm; it is running the same kind of architecture that powers large language models inside a live network, which is a major qualitative step forward. SoftBank is right to call it a move from concept to practical implementation.
Separately, in October MITRE announced an AI-native spectrum agility prototype for 5G and 6G that enables real-time AI-driven interference avoidance and spectrum maximisation. The US Department of Defense ran its first large-scale dynamic spectrum sharing demo in late 2025, specifically testing how AI can enable commercial and military users to coexist in shared bands without the static allocation models that characterise current CBRS deployments.
These represent a shift toward AI systems that can make real-time decisions about spectrum allocation, interference management, and resource scheduling with a sophistication that traditional algorithmic approaches cannot match. The AI-RAN Alliance’s growth to over 100 members across 17 countries reflects genuine industry conviction that this direction is right, even if the timeline to widespread commercial deployment remains uncertain.
The determinism question
One tension within the AI-RAN community deserves more attention than it receives. There is a real and unresolved debate about how much autonomy AI systems should have in making spectrum decisions, and it maps directly onto the broader industry anxiety about explainability and control.
Traditional spectrum management is deterministic. An operator configures parameters, the SON applies rules, the outcomes are predictable and auditable. When a reinforcement learning agent starts making dynamic spectrum allocation decisions in real time, the operator gains efficiency but loses the ability to explain precisely why a given decision was made. For an operator managing its own network, that trade-off may be acceptable; SoftBank’s results suggest the performance gains justify the reduced transparency. But the moment you start exposing these capabilities to third parties, the explainability problem becomes a commercial and regulatory one. An enterprise customer running a private 5G network for warehouse automation needs to understand why their AGV lost connectivity in Zone 3 at 14:07 on a Tuesday. “The AI decided to reallocate spectrum” is not an answer that survives an SLA dispute.
The AI-RAN Alliance is aware of this. Their emphasis on testing methodologies and benchmarking, and their Data-for-AI initiative focused on training data quality and representativeness, reflect an effort to build the foundation that deterministic outcomes require. But the fundamental tension remains: the performance gains from AI-driven spectrum management come precisely from the system’s ability to make decisions that a rule-based system would not, and those decisions are inherently harder to predict and explain.
Meanwhile, in the other room: the network API economy arrives
The GSMA Open Gateway initiative and the CAMARA open-source project reached an important point in 2025. The Fall 2025 CAMARA meta-release brought the total to 60 APIs, with conformance certification now jointly managed by GSMA and TM Forum. Aduna launched as an API aggregation platform with Ericsson and a dozen operators behind it. Revenue forecasts range from Juniper’s $8 billion to STL Partners’ $31 billion by 2030, with identity APIs (SIM Swap, Number Verification) driving the near-term revenue case.
For enterprise developers, the promise is compelling: standardised, programmable access to network capabilities without needing to understand the underlying telecom infrastructure. As the GSMA’s Henry Calvert put it, the goal is “a simple, programmable network” where enterprises “are in control” and “can provision and consume services to deliver better experiences to their customers.” The analogy to cloud APIs is deliberate and apt. AWS did not succeed by explaining EC2’s hypervisor architecture to application developers. It succeeded by abstracting compute into an API call.
But look at the actual CAMARA API catalogue: authentication and fraud prevention, location services, communication quality, mobile payments, computing services. These are useful, and some are genuinely novel for enterprise developers. But nothing in the catalogue touches spectrum management, dynamic resource allocation, or interference intelligence. Some of the most technically ambitious work happening in telecom AI has no representation in the enterprise-facing API ecosystem.
The missing middle
This matters more than it might seem at first glance because of the growth in enterprise private networks.
CBRS-based private 5G deployments in the US are growing steadily across warehousing, logistics, manufacturing, ports, healthcare, and mining. Equivalent frameworks exist in South Korea, Japan, Germany, and the UK. Enterprise customers are taking the significant step of operating their own cellular infrastructure. They care deeply about coverage reliability, interference management, and spectrum efficiency because of the business outcomes they deliver, but they are mostly consuming spectrum through static allocation mechanisms. In the US, CBRS Spectrum Access System requests are still calculated using static service demand models, and processing can take 24 hours or more. As CBRS adoption grows and spectrum becomes more congested, this static model will increasingly struggle.
Now consider what AI-RAN could offer these customers if the capability were properly abstracted and exposed. Not “Transformer-based SRS prediction”; nobody running a warehouse needs to know about sounding reference signals. Rather, something like a “spectrum intelligence API” that allows an enterprise to declare its connectivity requirements (coverage area, latency budget, throughput floor, time window) and have the AI-driven spectrum management system figure out the optimal allocation, interference mitigation, and resource scheduling to meet those requirements. If conditions change – for example, a new user activates nearby, or Navy radar exercises trigger incumbent protection – the system adapts in real time and the enterprise receives notification through a subscription-based event model. The enterprise gets what it actually wants: predictable, programmable connectivity. The operator (or whatever intermediary manages the shared spectrum) retains control of the underlying spectrum decisions.
This would be a fundamentally different product from anything in the current CAMARA catalogue. Quality on Demand lets you request a QoS profile for a session; a spectrum intelligence API would let you declare an outcome and delegate the physical-layer optimisation to an AI system that understands the local RF environment in ways no static allocation model can. It’s the same difference a company sees between reserving a fixed amount of cloud compute and using auto-scaling: one is a manual resource request, the other is an intent-based service that adapts to demand.
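To make the contrast concrete, here is a minimal sketch of what an intent declaration for such a spectrum intelligence API might look like. Everything here is an assumption for illustration: the field names, the endpoint, and the validation logic are hypothetical and not part of any CAMARA specification.

```python
# Hypothetical intent payload for a "spectrum intelligence" API.
# All field names are illustrative assumptions; no such API exists today.
spectrum_intent = {
    "coverageArea": {"siteId": "warehouse-3", "zones": ["Z1", "Z2", "Z3"]},
    "latencyBudgetMs": 20,          # worst-case latency the AGVs can tolerate
    "throughputFloorMbps": 50,      # minimum aggregate throughput required
    "timeWindow": {"start": "2026-01-05T06:00Z", "end": "2026-01-05T22:00Z"},
    # Event subscription: the enterprise is notified when the system
    # re-plans spectrum (e.g. incumbent protection kicks in), mirroring
    # the subscription-based notification pattern used by CAMARA APIs.
    "sink": "https://enterprise.example.com/spectrum-events",
}

def validate_intent(intent: dict) -> list[str]:
    """Return a list of problems with the declared intent (empty if OK)."""
    problems = []
    required = ("coverageArea", "latencyBudgetMs", "throughputFloorMbps",
                "timeWindow", "sink")
    for field_name in required:
        if field_name not in intent:
            problems.append(f"missing field: {field_name}")
    if intent.get("latencyBudgetMs", 1) <= 0:
        problems.append("latencyBudgetMs must be positive")
    return problems

print(validate_intent(spectrum_intent))  # -> []
```

The point of the shape is that the enterprise declares outcomes (latency budget, throughput floor, time window) rather than resources; everything below that line, including channel selection and interference mitigation, stays on the operator's side of the abstraction.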
Why this has not happened yet
There are several very real obstacles.
Regulatory frameworks are the most obvious. Spectrum governance was designed for a world of static allocations and clear licence tiers. The FCC’s CBRS three-tier model (Incumbent, PAL, GAA) is already the most dynamic commercial spectrum sharing regime in the world, and Dean Bubley is probably right that the US is the only country with proper dynamic access including a sensing function and database management. Extending this to real-time, AI-driven allocation decisions made via API introduces questions that regulators have not yet grappled with. Who is liable when an AI-driven spectrum decision causes interference? How do you audit decisions made by a neural network in 338 microseconds? The NTIA’s Advanced Dynamic Spectrum Sharing demonstration in late 2025 is explicitly trying to answer some of these questions, but the policy framework lags the technology by years.
The commercial model is equally unresolved. Today, CBRS spectrum is either licensed (PAL, purchased at auction, ten-year leases) or unlicensed (GAA, free, no guarantees). An AI-driven spectrum intelligence service would sit awkwardly between these models, offering something better than GAA’s best-effort access but more flexible than PAL’s fixed allocation. Pricing such a service requires quantifying the value of AI-optimised spectrum access versus static access, and the industry does not yet have the benchmarks or the data to do this confidently. The risk is that operators price it as a premium and enterprises reject the cost, or that aggregators commoditise it before operators capture the margin.
Then there is the data problem. AI-driven spectrum management requires detailed, real-time data about the RF environment: interference patterns, usage loads, propagation conditions, device density, mobility patterns. Some of this data sits with the operator, some with the Spectrum Access System (SAS) provider such as Google or Federated Wireless, some with the enterprise’s own network infrastructure. Creating the unified data layer that an AI system needs to make good decisions across operator-enterprise boundaries raises governance questions that mirror the broader challenges of industrial data sharing.
Finally, the determinism problem I described earlier becomes pressing when you add an enterprise SLA into the equation. Operators might tolerate some opacity in AI-driven network decisions because they control the entire system. Enterprise customers buying a service cannot. Any spectrum intelligence API would need to come with performance guarantees, fallback mechanisms, and audit trails that the current generation of AI-RAN research has not been designed to provide. This is a product engineering problem as much as it is a research problem, and it is the kind of problem that product organisations, rather than research labs or standards bodies, are best positioned to solve.
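As a sketch of what that product engineering might involve, consider the kind of decision record a spectrum intelligence service would need to emit so that “the AI decided” can survive an SLA dispute. Every field name here is an illustrative assumption, not part of any existing standard or AI-RAN Alliance output.

```python
# Hypothetical structure for an auditable spectrum decision record.
# All fields are illustrative assumptions for the sake of the argument.
from dataclasses import dataclass

@dataclass
class SpectrumDecisionRecord:
    decision_id: str
    timestamp: str            # when the reallocation was executed
    trigger: str              # e.g. "incumbent_protection", "interference_spike"
    affected_zones: list[str] # enterprise coverage zones impacted
    model_version: str        # which AI model version made the decision
    inputs_snapshot_ref: str  # pointer to archived RF measurements used
    fallback_available: bool  # whether a deterministic fallback existed
    sla_impact_ms: int        # measured connectivity gap, if any

# The warehouse example from earlier: why did the AGV lose
# connectivity in Zone 3 at 14:07 on a Tuesday?
record = SpectrumDecisionRecord(
    decision_id="d-20260105-1407-003",
    timestamp="2026-01-05T14:07:02Z",
    trigger="incumbent_protection",
    affected_zones=["Zone 3"],
    model_version="rl-allocator-v2.4",
    inputs_snapshot_ref="rf-telemetry/2026-01-05/1407",
    fallback_available=True,
    sla_impact_ms=850,
)
print(record.trigger)  # -> incumbent_protection
```

The design choice worth noting is the snapshot reference: auditing a decision made by a neural network in microseconds is only possible if the inputs it saw are archived alongside the decision itself.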
A product roadmap, not a research agenda
I am not suggesting that someone should build a spectrum intelligence API tomorrow. The fundamentals are not in place, and launching prematurely would damage trust in an API ecosystem that is still earning enterprise credibility. What I am suggesting is that the telecom industry needs to start treating this as a product question rather than leaving it as a research curiosity or a standards problem.
Concretely, that means a few things. The CAMARA community should begin scoping what a spectrum-aware API family might look like, even at a conceptual level. This does not require solving the full AI-driven allocation problem — it could start with read-only APIs that give enterprise customers visibility into spectrum conditions affecting their private network, which is already a step beyond what SAS providers offer today.
The AI-RAN Alliance, for its part, should expand its lens beyond operator infrastructure and explicitly consider how its working group outputs (particularly in AI-for-RAN and the data initiative) might feed into enterprise-consumable services.
And SAS providers like Google and Federated Wireless, who sit at the intersection of spectrum management and cloud-native API design, should recognise that they are uniquely positioned to bridge this gap if they choose to invest in it.
The economic logic will follow the technical capability. As more enterprises operate private cellular networks, and as shared spectrum regimes expand globally, the demand for intelligent, programmable spectrum management will grow whether the supply side is ready or not. The operators and platform providers who build the bridge between AI-RAN capability and enterprise API consumption will own a product category that does not yet exist. Those who leave the two conversations in separate rooms will find that someone else builds the bridge for them.
What I would build
If I were advising an operator or a SAS provider today, I would start with the most constrained version of this problem: a spectrum condition monitoring API that provides real-time and predictive data about spectrum availability, interference risk, and capacity headroom at the enterprise’s deployment location. No AI-driven allocation decisions, no autonomous spectrum management, just visibility. It could be positioned as a subscription service that gives the enterprise independent assurance of its SLAs, and it would establish the trust and the data feedback loop that more ambitious AI-driven spectrum services will eventually require.
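A minimal sketch of what a reading from such a monitoring API might contain, and how an enterprise could consume it for SLA assurance, follows. The field names, thresholds, and values are all assumptions for illustration; no such API exists.

```python
# Hypothetical reading from a read-only spectrum condition monitoring
# API. All field names and values are illustrative assumptions.
sample_reading = {
    "location": "port-terminal-2",
    "band": "CBRS-GAA",
    "observedAt": "2026-01-05T14:07:00Z",
    "interferenceRisk": 0.18,            # 0..1, predicted probability of harmful interference
    "capacityHeadroomMbps": 140,         # spare capacity beyond current load
    "incumbentActivityForecast": "low",  # e.g. likelihood of incumbent protection events
}

def sla_at_risk(reading: dict,
                required_headroom_mbps: float = 50,
                max_interference_risk: float = 0.3) -> bool:
    """Flag a reading that threatens the enterprise's connectivity SLA."""
    return (reading["capacityHeadroomMbps"] < required_headroom_mbps
            or reading["interferenceRisk"] > max_interference_risk)

print(sla_at_risk(sample_reading))  # -> False
```

Even this read-only shape delivers value: the enterprise can verify independently that conditions support its SLA, and the provider accumulates the labelled demand data that intent-based allocation will later need.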
From there, you build incrementally toward intent-based spectrum management: the enterprise declares what it needs, the AI system delivers it, and the commercial model captures a share of the efficiency gain. Getting there will take years and will require coordination across regulators, standards bodies, operators, and enterprise customers. But the direction of travel is credible given what’s already taking place, and the first credible product in this space will have a significant advantage.
The AI-RAN research community is producing work that genuinely matters. The network API economy is creating the distribution mechanism for programmable telecom services. Someone needs to connect these two developments and turn them into something an enterprise customer can buy. That is a product problem, and it is one worth solving.
