Guest Post: Regulating Telecoms in the AI Era: Data Rights, Privacy, and Fair Access


By Harikrishna Kundariya, Director, eSparkBiz

With AI taking over sector after sector globally, telecoms won’t be left behind. In the last few years, AI has – depending on who you talk to – either completely transformed the telecom industry or completely transformed its marketing terminology.

The market for AI in telecom is expected to reach $58 billion by 2032, up from nearly $3 billion in 2024.

With the successful deployment of AI, companies can drive business growth and renewal. That said, telecom companies cannot deploy AI without focusing on responsible AI: the practice of deploying AI in telecom transparently and in compliance with ethical and regulatory requirements. Let’s delve deeper into the topic to see the best practices for telecom AI deployment.

AI is Reshaping The Telecom Sector in Different Ways

Most telecom companies already deploy machine learning and prediction algorithms to analyse real-time data, manage traffic, and predict failures. The main network-optimisation applications are traffic routing, resource allocation, and predictive maintenance, though the same techniques can also optimise data flows from different types of IoT devices. All of this helps reduce operational costs and, whether slapped with an AI tag or not, is fairly familiar territory.
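To make this concrete, here is a minimal sketch of the kind of statistical check that underpins predictive maintenance: flagging link-utilisation samples that deviate sharply from their recent history so the link can be inspected before it fails. The function names, window size, and threshold are illustrative assumptions, not any operator’s production logic.

```python
# Minimal predictive-maintenance sketch: flag telemetry samples that
# deviate sharply from a trailing window. Names and thresholds are
# illustrative assumptions only.
from statistics import mean, stdev

def flag_anomalies(samples: list[float], window: int = 24, z_max: float = 3.0) -> list[int]:
    """Return indices where a sample deviates sharply from its trailing window."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_max:
            flagged.append(i)  # candidate for proactive inspection
    return flagged

# Example: a link idling around 60% utilisation with one sudden spike.
utilisation = [60.0 + (i % 3) for i in range(48)] + [97.0]
print(flag_anomalies(utilisation))  # -> [48]
```

Production systems layer far richer models on top, but the shape is the same: learn what “normal” looks like, then surface departures from it early enough to act.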

However, in the area of cybersecurity, we’re in much more uncertain territory.

With the deployment of AI, you can secure telecom data in real time. Different kinds of AI/ML can help with fraud prevention and real-time threat detection, safeguarding sensitive data against cyber-attacks. A whole new set of services is becoming available from traditional security vendors, new providers, and systems integrators.

These adopt Privacy Enhancing Technologies (PETs), using advanced encryption, anonymisation, and privacy policies to safeguard user data. Ideally, PETs protect personal data during processing, transmission, and storage. In addition to encryption, PETs offer features such as synthetic data generation, confidential computing, and differential privacy. As a telecom service provider, you can maintain control over data and reduce privacy risk in this AI-driven, data-centric world.
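As a rough illustration, the sketch below shows two of the simpler PET steps a telco might apply before customer records feed an analytics pipeline or a training set: pseudonymising the subscriber identifier with a keyed hash and coarsening location data. The key handling and field names are assumptions for illustration; real deployments add proper key management, encryption in transit and at rest, and formal guarantees on top.

```python
# Minimal sketch of two common PET steps applied before telecom data is
# used for analytics or model training. Illustrative only; the secret key
# here would live in an HSM or key-management service in practice.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-securely"  # assumption: a managed secret

def pseudonymise(msisdn: str) -> str:
    """Replace a phone number with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, msisdn.encode(), hashlib.sha256).hexdigest()[:16]

def coarsen_location(lat: float, lon: float, places: int = 2) -> tuple[float, float]:
    """Round coordinates to roughly 1 km so individual movements are less traceable."""
    return round(lat, places), round(lon, places)

record = {"msisdn": "+447700900123", "lat": 51.50132, "lon": -0.14189}
safe = {"subscriber": pseudonymise(record["msisdn"]),
        "location": coarsen_location(record["lat"], record["lon"])}
print(safe)
```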

However, there are also new avenues opening up through which data can be made insecure. Whether that’s leaking personally identifiable customer information or exposing executives’ conversations with ChatGPT about business decisions, there are some very real risks which need addressing.

Regulatory balance in emerging AI and data privacy

Regulation in AI is still very much an emerging field. However, we can start to see it taking shape globally.

In July the UK’s Ofcom, often a leading indicator, released its strategic approach to AI, which was more or less to suggest that it didn’t have one: the principles that govern its activities are technology-neutral, regulating outcomes rather than the means used to achieve them.

However, the European Union’s AI Act has provided some more specific principles around how to minimise the risks of using AI. More specifically, as outlined in this analysis, the degree of regulation is proportional to the level of risk inherent in the application being examined. The likelihood is, however, that AI applications involving networks (as critical national infrastructure) and customers’ personal data will fall into the high-risk categories.

What’s more, telcos are in a unique position in that they carry data for almost everybody else. While encryption limits what can be known about that data, telco datasets still have the potential to become a source of training data for both internal and external AIs. In this case there is not necessarily any one application that can be regulated for, so there are areas of uncertainty that need tightening up.

AI Compliance and Sandboxes

Alongside regulation and the application of wider legal principles comes the question of compliance. AI compliance audits are becoming important not only in the telecom sector but in almost all sectors that have adopted AI, to ensure that deployed AI models adhere to legal and ethical standards.

Essentially, an AI compliance audit can help your telecom company to operate within relevant industry standards, minimising reputational risks. While these can be performed at any time, it’s wise to consider one proactively before any proposed AI application scales up in a live environment.

Telecom companies can test AI systems in controlled environments using regulatory sandboxes, which help not only to ensure compliance but also to test how the AI works in practice before market entry.

For example, you can monitor the real-world implications of the AI application for security and privacy – and, importantly, identify any bias in the algorithm.

Algorithmic Fairness

Bias in algorithms is a well-understood phenomenon now, thanks in part to studies from a few years ago which highlighted failures in facial recognition systems when applied to people of different racial backgrounds; Amnesty International highlights a few examples.

In itself, this is not necessarily disastrous. There is a growing – though far from widespread – awareness that AI can make mistakes in a way which deterministic systems cannot. However, it is in the interests of most companies leveraging AI to downplay these mistakes, and in other situations an AI application might be an “invisible” part of a process. As a result, human decision-making that depends on these algorithms can be led badly astray.  

What’s more, in fraud detection, AI tools can flag customer segments based on biased perceptions of risk, leading to unfair scrutiny. And the surface area keeps growing: according to recent statistics, 52% of telecommunications organisations are likely to use chatbots to improve their efficiency.

Recently, Vodafone deployed an AI-based solution to detect spam messages and calls; to date, the company has flagged millions of spam messages. In addition to spam detection, the system can detect fraudulent links, unauthorised promotions, and theft attempts. Its predictive AI learns from incoming data patterns, enhancing its detection capabilities over time.

While nobody has questioned the fairness of Vodafone’s algorithm so far, monitoring a system like this for bias as it learns – whether it becomes too harsh in some instances or too lenient in others – needs to be an ongoing process. This will change the nature of compliance from a one-and-done checklist into a systemic, ongoing activity, just one of many issues that will emerge over time.
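What might that ongoing monitoring look like? One common approach, sketched below with hypothetical segment names and numbers, is to track false-positive rates per customer segment and raise an alert when the gap between segments drifts past a tolerance.

```python
# Minimal sketch of ongoing fairness monitoring for a fraud/spam classifier:
# compare false-positive rates across customer segments and alert when the
# gap exceeds a tolerance. Segments and figures are hypothetical.
from collections import defaultdict

def false_positive_rates(events):
    """events: iterable of (segment, was_flagged, was_actually_fraud)."""
    fp = defaultdict(int)   # legitimate traffic wrongly flagged
    neg = defaultdict(int)  # all legitimate traffic seen
    for segment, flagged, fraud in events:
        if not fraud:
            neg[segment] += 1
            if flagged:
                fp[segment] += 1
    return {s: fp[s] / neg[s] for s in neg if neg[s]}

def check_parity(rates: dict, tolerance: float = 0.02) -> bool:
    """True if the widest gap between segment false-positive rates stays within tolerance."""
    return max(rates.values()) - min(rates.values()) <= tolerance

events = ([("segment_a", True, False)] * 3 + [("segment_a", False, False)] * 97
          + [("segment_b", True, False)] * 9 + [("segment_b", False, False)] * 91)
rates = false_positive_rates(events)
print(rates, "parity ok:", check_parity(rates))  # 3% vs 9% -> alert
```

Run continuously as the model retrains, a check like this turns “monitor for bias” from an aspiration into a dashboard metric with an alert attached.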

About the Author

Harikrishna Kundariya is a marketer, developer, IoT, Cloud & AWS specialist, co-founder, and Director of eSparkBiz, a software development company. His 15+ years of experience enables him to provide digital solutions to new start-ups based on IoT and SaaS applications.

