Telco AI Shift: SoftBank’s LTM Tops GSMA Ranking
SoftBank has just achieved a top-tier ranking in the GSMA Open-Telco LLM Benchmarks with its Large Telecom Model (LTM). On paper, that sounds like another AI milestone. In reality, it’s something more specific and more important for the telecom industry.
Because this isn’t about building a better chatbot. It’s about building an AI that actually understands how telecom networks work.
And that’s where most general-purpose AI still falls short.
Why telecom AI is a different game
Telecom is not a “text-only” problem. It’s a systems problem.
You’re dealing with network configurations, signaling protocols, performance logs, fault management, radio parameters, billing logic, and real-time operational decisions. The data isn’t clean. It’s messy, structured, semi-structured, and often deeply technical.
A generic large language model can summarize a document. But can it interpret a network log anomaly or understand a configuration conflict in a 5G core?
That’s the gap SoftBank is trying to close with LTM.
As Louis Powell, Director of AI initiatives at GSMA, put it:
“Telecom networks demand precision and context that general-purpose AI often struggles to deliver. By testing models against telecom-relevant datasets and tasks, the GSMA Open-Telco LLM Benchmarks spotlight genuine capability improvements. SoftBank’s top-tier ranking is a strong example of that progress, and exactly the kind of momentum the industry needs as it scales AI responsibly into operations.”
Building a telecom-native AI model
What’s interesting is not just the result, but how SoftBank got there.
Instead of lightly tuning a general-purpose LLM, they built a telecom-specific learning framework from the ground up. That framework draws on both public telecom datasets and proprietary operator data such as network operations records, design knowledge, and internal workflows.
The training process itself is layered and deliberate.
What the framework actually does
SoftBank combines multiple training approaches in stages:
- Continual pre-training to adapt base models to telecom data
- Fine-tuning for specific telecom tasks
- Reinforcement learning to improve real-world performance
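The staged sequence above can be sketched in miniature. This is an illustrative mock-up, not SoftBank's actual framework: the function names, model state, and datasets are invented, and each "stage" just records what a real pipeline would do at that step.

```python
# Hypothetical sketch of a staged adaptation pipeline: continual
# pre-training -> fine-tuning -> reinforcement learning, applied in order.
from dataclasses import dataclass, field

@dataclass
class ModelState:
    name: str
    stages: list = field(default_factory=list)  # provenance of training stages

def continual_pretrain(model: ModelState, corpus: list[str]) -> ModelState:
    # Adapt the base model to raw telecom text (specs, logs, configs).
    model.stages.append(("continual_pretrain", len(corpus)))
    return model

def fine_tune(model: ModelState, tasks: list[str]) -> ModelState:
    # Supervised fine-tuning on labeled telecom tasks.
    model.stages.append(("fine_tune", len(tasks)))
    return model

def rl_align(model: ModelState, reward_signal: str) -> ModelState:
    # Reinforcement learning against an operational reward signal.
    model.stages.append(("rl", reward_signal))
    return model

model = ModelState("base-llm")
model = continual_pretrain(model, ["3GPP spec excerpt", "5G core log sample"])
model = fine_tune(model, ["log-triage", "config-qa"])
model = rl_align(model, "troubleshooting-success")
print([s[0] for s in model.stages])  # ['continual_pretrain', 'fine_tune', 'rl']
```

The point of the ordering is that each stage narrows the objective: broad domain adaptation first, task competence second, real-world behavior last.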
But the real differentiator is how they handle data.
Telecom data is not just documents. It includes tables, configurations, and code-like structures. SoftBank reorganizes all of this into synthetic datasets optimized for different training phases.
On top of that, they use LLM-based filtering to clean the data and small language models to optimize training parameters. This is not just about scale. It’s about precision.
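The filtering idea can be sketched as a score-and-threshold pass over candidate records. Here the "LLM judge" is stubbed with a crude heuristic, and the keywords and threshold are invented for illustration; a real pipeline would call an actual model to score each record.

```python
# Hedged sketch of LLM-based data filtering: score candidate training
# records, keep only those above a quality threshold. All names and
# thresholds are hypothetical, not SoftBank's implementation.

def llm_quality_score(record: str) -> float:
    """Stand-in for an LLM judge: favor records that look like real,
    structured telecom data over empty or truncated fragments."""
    score = 0.0
    if len(record) > 20:                                   # not a fragment
        score += 0.5
    if any(k in record for k in ("gNB", "AMF", "PCI", "QoS")):  # domain terms
        score += 0.5
    return score

def filter_records(records: list[str], threshold: float = 0.75) -> list[str]:
    return [r for r in records if llm_quality_score(r) >= threshold]

raw = [
    "gNB-204 reports PCI conflict with neighbour cell 17",
    "???",
    "AMF registration failures spiked after QoS profile rollout",
]
clean = filter_records(raw)
print(len(clean))  # 2
```

The design choice worth noting: filtering happens before training, so data quality problems are paid for once, not on every training run.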
The result is a system that can continuously improve as new data comes in, new models emerge, and business requirements evolve.
Why the GSMA benchmark actually matters
Benchmarks are often ignored because they don’t always reflect real-world performance. But the GSMA Open-Telco LLM Benchmarks are different.
They’re designed specifically for telecom use cases.
That includes:
- Understanding telecom specifications
- Answering domain-specific questions
- Interpreting operational logs
- Handling mathematical reasoning in telecom contexts
- Generating configuration descriptions
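At its simplest, a harness for tasks like these scores model answers against a domain question set. The dataset and model stub below are invented for illustration; the actual GSMA benchmarks use their own datasets, tasks, and metrics.

```python
# Minimal sketch of a telecom QA evaluation: exact-match accuracy over
# hypothetical domain questions. The "model" is a stand-in lookup table.

def toy_model(question: str) -> str:
    # Stand-in for an LLM call.
    answers = {
        "Which 5G core function handles registration?": "AMF",
        "What does PCI stand for?": "Physical Cell Identity",
    }
    return answers.get(question, "unknown")

benchmark = [
    ("Which 5G core function handles registration?", "AMF"),
    ("What does PCI stand for?", "Physical Cell Identity"),
    ("Which interface connects the gNB and the AMF?", "N2"),
]

correct = sum(toy_model(q) == a for q, a in benchmark)
accuracy = correct / len(benchmark)
print(f"accuracy = {accuracy:.2f}")  # accuracy = 0.67
```

Even this toy version shows why domain benchmarks bite harder than generic ones: a model that has never seen 5G core terminology simply cannot pattern-match its way to "N2".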
This is closer to what operators actually need.
The benchmarks are part of the GSMA’s Open Telco AI initiative, launched at MWC Barcelona 2026. The goal is to create a shared ecosystem for telecom-grade AI, including models, datasets, and tools.
In other words, this is the industry trying to move from generic AI hype to something deployable.
And SoftBank’s ranking suggests they’re ahead in that transition.
From experimentation to real operations
SoftBank is not positioning LTM as a research project. The focus is very clear: internal deployment.
They want to reduce reliance on individual expertise in network operations, a real issue across the industry. Telecom networks still depend heavily on experienced engineers who understand complex systems built up over decades.
LTM is meant to:
- Assist in network troubleshooting
- Interpret logs and configurations
- Support operational decision-making
- Improve efficiency across network management
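The troubleshooting and log-interpretation tasks above can be sketched as a signature-matching pass over raw log lines. This is a deliberately simple stand-in: the log format, failure signatures, and suggested actions are hypothetical, and an LLM-based assistant would reason over logs rather than regex-match them.

```python
# Illustrative sketch of log triage: scan syslog-style lines for known
# failure signatures and attach a suggested action. Signatures invented.
import re

SIGNATURES = [
    (re.compile(r"PCI conflict"), "Reassign physical cell identity on one neighbour"),
    (re.compile(r"N2 setup failure"), "Check SCTP association between gNB and AMF"),
]

def triage(log_lines: list[str]) -> list[tuple[str, str]]:
    findings = []
    for line in log_lines:
        for pattern, action in SIGNATURES:
            if pattern.search(line):
                findings.append((line, action))
    return findings

logs = [
    "2025-11-02T10:31:05 gNB-17 WARN PCI conflict detected with cell 204",
    "2025-11-02T10:31:09 gNB-17 INFO heartbeat ok",
    "2025-11-02T10:32:41 gNB-17 ERROR N2 setup failure towards AMF-1",
]
for line, action in triage(logs):
    print(action)  # one suggested action per matched line
```

The gap between this sketch and what LTM targets is exactly the point of the article: rule-based triage only catches what someone already wrote a rule for, while a telecom-native model is meant to interpret failures it has not seen verbatim.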
Ryuji Wakikawa, Vice President, Head of the Research Institute of Advanced Technology at SoftBank, summed it up:
“SoftBank has been developing a telecom industry–specific LTM, trained on telecom expertise and real operational data, and achieved top-class results in the GSMA Open-Telco LLM Benchmarks. This demonstrates that our training foundation is also at a high international standard. By leveraging LTM, SoftBank will thoroughly enhance its operations and lead the advancement of the telecommunications industry.”
Where this fits in the broader market
SoftBank is not alone in this direction.
Across the industry, operators and vendors are moving toward domain-specific AI models.
Nokia has been investing heavily in AI for network automation through its Autonomous Networks initiative. Ericsson is integrating AI into its network management platforms, especially around intent-based operations. Vendors like Huawei have long been pushing AI-driven network optimization.
At the same time, hyperscalers such as Google and Microsoft are offering telecom-specific AI solutions through Google Cloud and Azure, often built on top of general-purpose models but adapted with telecom data layers.
What’s different about SoftBank is the operator-first approach.
They’re not just consuming AI. They’re building it based on real operational data and using it internally. That gives them a potential advantage in accuracy and applicability.
And it aligns with a broader trend.
According to industry estimates from GSMA Intelligence and analyst firms such as Juniper Research, AI-driven automation is expected to play a critical role in reducing operational costs and improving network performance as 5G and future networks grow more complex.
The bigger shift: from generic AI to vertical AI
What we’re seeing here is part of a bigger shift.
The first wave of AI was horizontal. One model, many use cases.
The next wave is vertical. Models built for specific industries, trained on domain-specific data, and optimized for real workflows.
Telecom is a perfect example of why this shift is necessary.
You can’t run a network on generic intelligence. You need context, precision, and reliability.
SoftBank’s LTM is an early signal of what that future looks like.
Conclusion
This is not just a benchmark story. It’s a signal that telecom AI is entering a new phase.
SoftBank’s approach shows that real progress in this space will not come from scaling generic models alone. It will come from combining domain expertise, proprietary data, and structured training frameworks.
Compared to players like Ericsson or Nokia, which are embedding AI into existing platforms, SoftBank is taking a more foundational route by building a telecom-native model from scratch. Compared to hyperscalers like Google or Microsoft, they have something those companies lack: deep, real-world operational data from running a network.
That combination matters.
Because as networks become more complex and expectations around reliability increase, AI will not just be a support tool. It will become part of the operational core.
And the winners in this space will not be the ones with the biggest models. They will be the ones with the most relevant data and the clearest understanding of the systems they are trying to optimize.
SoftBank just showed what that looks like in practice.
