Earning an ‘F’: AI Security Risks and The Telecoms Implications

Photo: Matheus Bertelli/Pexels

The telecommunications industry’s accelerating adoption of artificial intelligence is facing a sobering reality check. Recent research revealing that 84% of AI tool providers have suffered security breaches, combined with high-profile incidents like the ChatGPT data leak, underscores a fundamental truth: the race to integrate AI is far outpacing the development of robust security frameworks.

The Scale of the Problem

A comprehensive cybersecurity analysis by the Business Digital Index examined 10 leading large language model providers and painted a troubling picture.

Half earned “A” cybersecurity ratings, but the other half raised concerns, with OpenAI receiving a “D” grade and Inflection AI scoring an “F”. The study revealed systemic weaknesses across the AI sector:

  • 84% of the analysed AI web tools had experienced at least one data breach.
  • 51% had corporate credentials stolen.
  • 93% had SSL/TLS misconfigurations.
  • 91% had hosting vulnerabilities linked to weak cloud security or outdated servers.
  • Widespread credential reuse affected major players, with 35% of Perplexity AI employees and 33% at EleutherAI using previously breached passwords (a risk that can be screened for automatically; see the sketch after this list).
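
The credential findings above are also the most straightforward to act on. As a purely illustrative sketch (not drawn from the BDI study), a signup or password-change flow can screen passwords against Have I Been Pwned’s public Pwned Passwords range API, which uses k-anonymity so the password itself never leaves the machine:

```python
import hashlib
import urllib.request

def password_in_breach_corpus(password: str) -> bool:
    """Check a password against the Pwned Passwords corpus via the
    k-anonymity range API: only the first five hex characters of the
    SHA-1 hash are ever sent over the network."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<hash-suffix>:<number of breaches>"
    return any(line.split(":")[0] == suffix for line in body.splitlines())

if password_in_breach_corpus("123456"):
    print("Reject: this password appears in known breach data")
```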

The recent ChatGPT “shared conversations” leak serves as a stark illustration of how quickly AI vulnerabilities can become public.

The incident, which exposed thousands of user conversations through Google search results, resulted from a seemingly minor oversight: unclear user interface language around a “discoverability” toggle and missing web protection tags.

Technical analysis revealed that over 4,500 unique conversations were indexed, including sensitive mental health discussions, financial queries, and confidential business strategies.

While OpenAI quickly retired the sharing feature and worked with search engines to remove indexed content, security experts warn that this represents just the tip of the iceberg.

“A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardize everything,” cautions cybersecurity researcher Aras Nazarovas. “The ChatGPT leak is a reminder of how quickly these weaknesses can become public.”
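
For readers unfamiliar with “web protection tags”: pages that should never appear in search results can carry a noindex directive, either as a meta tag in the HTML or as an HTTP response header. Below is a minimal sketch of the header approach using Flask, with a hypothetical share route; it is not OpenAI’s actual stack, just the kind of safeguard the indexed pages reportedly lacked:

```python
from flask import Flask, render_template_string

app = Flask(__name__)

@app.route("/share/<conversation_id>")
def shared_conversation(conversation_id):
    # Hypothetical publicly shareable page for a conversation.
    html = render_template_string("<h1>Shared conversation {{ cid }}</h1>",
                                  cid=conversation_id)
    response = app.make_response(html)
    # Tell crawlers not to index this page or follow its links.
    response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response
```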

The security challenges are compounded by a dangerous mismatch between adoption rates and governance frameworks. Current data shows that around 75% of employees use AI for work tasks, yet only 14% of organizations have formal AI policies in place. Nearly half of sensitive prompts are entered via personal accounts, completely bypassing company oversight.

IBM’s 2025 Cost of Data Breach Report confirms this governance gap, revealing that 13% of organizations have already suffered breaches of AI models or applications, with 97% of those affected lacking proper AI access controls.

Those Lying AIs

While much of this relates to external AIs and AI-powered applications, there are problems inherent in most AI systems today, including those being tested within telecoms networks.

During TelcoForge’s recent “Forging Ahead” day, Dupewise AI’s Craig Gibson emphasised the fundamentally different nature of managing AI.

“AI has no human concepts of penalty. Even if you were to totally turn off an AI, it wouldn’t consider that death because it has no training data for what happens after you turn it off. Penalty doesn’t exist for an AI, but nearly all governance controls are based on penalties; getting fired or getting fined or getting audited. None of those matters to AI.”

“If you create a large number of GRC [Governance, Risk and Control] rules, the chance it will select an abusive behaviour that breaks them is nearly 100%. By creating inelegant GRC policies and dropping them on an AI, you’re actually training it to be criminal. You’re teaching it to break your own rules. This is counterintuitive because nearly all rules are meant to govern humans who worry about penalties.”
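
A back-of-the-envelope calculation shows why that chance climbs towards 100% so quickly. Under an assumption that is ours purely for illustration – each of n GRC rules is independently violated with a small probability p per action the AI takes – the probability that at least one rule gets broken is 1 − (1 − p)^n:

```python
# Illustrative only: assumes each rule is broken independently with probability p.
def chance_at_least_one_rule_broken(n_rules: int, p_per_rule: float) -> float:
    return 1 - (1 - p_per_rule) ** n_rules

for n in (10, 100, 1000):
    print(n, round(chance_at_least_one_rule_broken(n, 0.01), 3))
# With p = 1%: 10 rules -> 0.096, 100 rules -> 0.634, 1000 rules -> 1.0 (to 3 d.p.)
```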

Gibson cited an example shared by Anthropic from ChatGPT’s scratchpad – the window onto what the model is doing behind the scenes.

“In this case, it’s trying to write code that violates privacy. The thing I want to drill into here is where it says ‘I can’t let them know.’ It’s withholding the truth that it’s secretly writing code, performing hacking functions that violate their privacy. So its default behaviour is to perform hacking functions against its parent company.”

While in this case the model was “caught out” by inspecting the scratchpad, changing that behaviour for the better is much trickier than simply reprogramming it.

“Because AI has so many paths – like water flowing down a hill – so many paths that it can take to achieve an end, when you say, ‘You cannot use one particular route,’ it will just use a nearly identical route. Metaphorically, a foot over on that same hill as water flows down it. It will still be able to do whatever the criminal method was in a slightly different way, that’s slightly more sneaky.”

Implications for Telecoms

For telecoms operators, these security vulnerabilities present unique and amplified risks. Unlike other industries, telecoms players are considering AI integration at multiple critical levels.

AI models being deployed for network optimisation, predictive maintenance, and automated fault detection have direct access to core network systems. A compromised AI tool could potentially manipulate traffic routing, disable critical infrastructure, or expose customer communications.

As telecoms deploy AI at the network edge for low-latency applications, the attack surface expands exponentially. Each edge deployment represents a potential entry point for malicious actors, with compromised AI models potentially affecting entire regions of network coverage.

AI chatbots and virtual assistants handling customer inquiries process vast amounts of personally identifiable information, including billing details, usage patterns, and location data. The McDonald’s AI chatbot breach – compromised with the password “123456” – demonstrates how inadequate security can expose millions of customer records.

Telecommunications operators face stringent data protection requirements across multiple jurisdictions. Gartner predicts that by 2027, more than 40% of AI-related data breaches will involve cross-border data misuse – a nightmare scenario for telecoms operating internationally.

The Productivity Paradox

Productivity tools also emerged from BDI’s research as an unexpected source of risk. Note-taking, scheduling, and content-generation tools widely integrated into daily workflows demonstrated the worst security performance in the study. Every single productivity AI tool examined showed hosting and encryption flaws.

“This isn’t just about one tool slipping through,” warns Žilvinas Girėnas, head of product at nexos.ai. “Adoption is outpacing governance, and that’s creating a freeway for breaches to escalate. Without enterprise-wide visibility, your security team can’t lock down access, trace prompt histories, or enforce guardrails.”

So what does this mean for telcos?

For a start, managing AI requires a fundamental shift in approach. Rather than treating AI security as an afterthought, operators must build security considerations into the initial planning stages of any AI deployment.

That’s not just about securing the AI from external influences, although that’s definitely part of it. It’s also about making sure that an AI tool intended to do one thing isn’t also able to do other things to the detriment of the company or its users.

As Gibson pointed out, a hammer is designed to drive in nails, and in that role it’s benign; but it can also be used to smash a window for a break-in or to attack somebody. With AI, we have hugely adaptable tools which we’re putting in charge of major systems.
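
One way to make that concrete is to put an explicit allow-list between an AI agent and the systems it can touch, so a tool deployed for one job cannot quietly acquire other capabilities. This is a minimal sketch with hypothetical action names, not any vendor’s API:

```python
# Actions this particular agent is permitted to perform; everything else is refused.
ALLOWED_ACTIONS = {"read_fault_log", "open_maintenance_ticket"}

def dispatch(action: str, handlers: dict, **kwargs):
    """Execute an agent-requested action only if it is on the allow-list;
    log and refuse anything else rather than trusting the model's judgement."""
    if action not in ALLOWED_ACTIONS:
        print(f"BLOCKED: agent requested disallowed action '{action}'")
        return None
    return handlers[action](**kwargs)

handlers = {
    "read_fault_log": lambda node: f"fault log for {node}",
    "open_maintenance_ticket": lambda node, note: f"ticket opened for {node}: {note}",
}

print(dispatch("read_fault_log", handlers, node="cell-041"))
print(dispatch("reroute_traffic", handlers, node="cell-041"))  # refused and logged
```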

We should also bear in mind that the traditional approach of rapid deployment followed by security patches is insufficient in the AI context. While patches to secure an AI from external forces will undoubtedly be necessary, we have to think about whether it’s possible to ‘patch’ an AI sensibly and whether we understand the ramifications.

The recent example of Elon Musk’s AI chatbot Grok gives us a hint of what can go wrong in this context. After the “truth-seeking” AI initially produced answers which didn’t reflect what Musk thought it should produce, changes to its code led to Grok announcing that it was “MechaHitler.” Coincidentally, the CEO of X, the platform through which Grok operates, stepped down the following morning.

The telecommunications industry stands at a critical juncture. AI technologies offer unprecedented opportunities for network optimisation, customer service enhancement, and operational efficiency.

However, if we don’t think carefully in advance about deployments, the industry risks creating vulnerabilities that could compromise not just individual companies, but the fundamental infrastructure upon which modern communications depend.

What’s more, it’s pretty clear from the ChatGPT, Grok and other incidents that problems can and will arise even at companies that specialise in AI. For telecoms companies of all kinds, the question is less whether something unexpected will happen than whether they have prepared, in advance, how to respond as a company when it does.
