Hadoop is Transforming Telecommunications

Use Apache Hadoop to Improve Service & Launch New Products

Some telecommunications providers process millions of phone calls per second. Now add in services for web browsing, video, television, streaming music and movies, text messages and email. That all adds up to a rate of data growth that can be very expensive to store and process. Yet consumers expect everything they receive to be saved, in case they want to watch it, click it or read it later.

This rapid (and irreversible) pace of telecommunications data growth requires more efficient, scalable storage. So telcos are using Apache Hadoop to turn their burgeoning storage liabilities into strategic information assets.

Rogers has a 360-degree view of its customers and used it to launch the fastest-growing digital product in the company’s history.

Neustar provides cloud-based information and analytical services to its telecommunications clients. It saved millions per year by offloading cold data into HDP.

Open Enterprise Hadoop for Telecommunications

Our other telco clients have identified their own Hadoop use cases, but there are similar patterns in the Hadoop data architectures that they all build. Those data architectures allow telcos to store new types of data, retain that data longer, and join diverse datasets together to derive new insight.

The following reference architecture diagram represents an amalgam of those approaches that we see across our telco clients.

Hadoop for Telcos

With their modern Hadoop data architectures, telecommunications companies of all kinds can execute the following six use cases (and many more).

Analyze Call Detail Records (CDRs)

Telcos perform forensics on dropped calls and poor sound quality, but call detail records flow in at a rate of millions per second. This high volume makes pattern recognition and root cause analysis difficult, and often that analysis needs to happen in real time, with a customer waiting for answers. Delay causes attrition and harms servicing margins.

Apache Flume can ingest millions of CDRs per second into Hadoop, while Apache Storm processes them in real time to identify troubling patterns. HDP facilitates long-term data retention for root cause analysis, even years after the first issue. This CDR analysis can be used to continuously improve call quality, customer satisfaction and servicing margins.
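The pattern detection a Storm topology might apply to this stream can be sketched in plain Python. This is a minimal illustration of the idea, not actual Flume or Storm code; the tower IDs, window size and alert threshold are illustrative assumptions.

```python
from collections import defaultdict, deque

# Per-tower sliding window of recent calls, as a Storm bolt might keep.
# Window size and alert threshold are illustrative assumptions.
WINDOW = 100          # number of recent calls tracked per tower
ALERT_RATE = 0.10     # alert if >10% of recent calls were dropped

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def process_cdr(tower_id, dropped):
    """Ingest one call detail record; return True when the tower's
    recent dropped-call rate crosses the alert threshold."""
    w = windows[tower_id]
    w.append(1 if dropped else 0)
    return len(w) == WINDOW and sum(w) / WINDOW > ALERT_RATE

# Simulate a stream: tower "T1" healthy (~2% drops), "T2" degrading (~20%).
alerts = []
for i in range(200):
    if process_cdr("T1", dropped=(i % 50 == 0)):
        alerts.append("T1")
    if process_cdr("T2", dropped=(i % 5 == 0)):
        alerts.append("T2")

print(sorted(set(alerts)))  # only the degrading tower should trip the alert
```

In a real deployment the same per-key windowed logic would run inside a Storm bolt, with CDRs arriving from Flume and long-term history landing in HDFS for later root cause analysis.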

Service Equipment Proactively

Transmission towers and their related connections form the backbone of a telecommunications network. Failure of a transmission tower can cause service degradation. Replacement of equipment is usually more expensive than repair. There is an optimal schedule for maintenance: not too early, nor too late.

Apache Hadoop stores unstructured, streaming, sensor data from the network. Telcos can derive optimal maintenance schedules by comparing real-time information with historical data. Machine learning algorithms can reduce both maintenance costs and service disruptions by fixing equipment before it breaks.
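The core of "compare real-time information with historical data" can be sketched as a simple baseline check. This is a toy illustration under stated assumptions (the sensor readings and the 3-sigma threshold are invented for the example), not a production maintenance model.

```python
import statistics

# Flag a tower sensor whose latest reading deviates sharply from its
# historical baseline. Readings and threshold are illustrative assumptions.
def needs_maintenance(history, latest, sigmas=3.0):
    """Return True if `latest` lies more than `sigmas` standard
    deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return stdev > 0 and abs(latest - mean) > sigmas * stdev

# e.g. amplifier temperature readings (°C) pulled from long-term storage
history = [41.0, 40.5, 42.1, 41.3, 40.8, 41.9, 41.2, 40.7]

print(needs_maintenance(history, 41.5))   # within the normal range
print(needs_maintenance(history, 55.0))   # far outside the baseline
```

A real system would train per-equipment models over months of sensor history in Hadoop; the value of the data lake is that the baseline can be as long as the equipment's service life.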

Rationalize Infrastructure Investments

Telecom marketing and capacity planning are correlated. Consumption of bandwidth and services can be out of sync with plans for new towers and transmission lines. This mismatch between infrastructure investments and the actual return on investment puts revenue at risk.

Network log data helps telcos understand service consumption in a particular state, county or neighborhood. They can then analyze network loads more intelligently (with data stretching over longer periods of time) and plan infrastructure investments with more precision and confidence.
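The planning signal described above is, at heart, an aggregation of network logs by region against provisioned capacity. A minimal sketch, with invented region names, capacities and log records:

```python
from collections import defaultdict

# Toy network log records: (region, gigabytes transferred).
# Regions, capacities and volumes are illustrative assumptions.
logs = [
    ("county_a", 120), ("county_a", 95), ("county_b", 30),
    ("county_a", 140), ("county_b", 25), ("county_b", 20),
]
capacity_gb = {"county_a": 300, "county_b": 400}

usage = defaultdict(int)
for region, gb in logs:
    usage[region] += gb

# Rank regions by utilization to see where new capacity pays off first.
ranked = sorted(capacity_gb, key=lambda r: usage[r] / capacity_gb[r],
                reverse=True)
for region in ranked:
    print(region, f"{usage[region] / capacity_gb[region]:.0%}")
```

At telco scale the same group-by would run as a Hive or Spark job over years of logs in HDFS, but the output is the same: a ranked list of where demand is outrunning infrastructure.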

Recommend Next Product to Buy (NPTB)

Telecom product portfolios are complex. Many cross-sell opportunities exist within the installed customer base, yet sales associates rely on in-person or phone conversations to guess at NPTB recommendations, with little data to support them.

HDP gives a telco the ability to make confident NPTB recommendations, based on data from all of its customers. Confident NPTB recommendations empower sales associates (or self service) and improve customer interactions. A Hadoop data lake reduces sales friction and creates NPTB competitive advantage similar to Amazon’s advantage in eCommerce.
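One common way to make data-driven NPTB recommendations is product co-occurrence across the customer base. The sketch below is an illustrative toy (the product names and purchase histories are invented), not the method any particular telco uses.

```python
from collections import Counter
from itertools import combinations

# Toy customer base: which products each customer holds.
# Product names and holdings are illustrative assumptions.
customers = [
    {"mobile", "broadband", "tv"},
    {"mobile", "broadband"},
    {"mobile", "broadband"},
    {"mobile", "tv"},
    {"broadband", "tv"},
]

# Count how often each ordered pair of products is held together.
pair_counts = Counter()
for owned in customers:
    for a, b in combinations(sorted(owned), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def next_product(owned):
    """Recommend the product most often held alongside what the
    customer already owns."""
    scores = Counter()
    for product in owned:
        for (a, b), n in pair_counts.items():
            if a == product and b not in owned:
                scores[b] += n
    return scores.most_common(1)[0][0] if scores else None

print(next_product({"mobile"}))  # most common companion to mobile
```

Real recommenders add collaborative filtering and customer-segment features, but even this counting approach improves on unaided guessing once it runs over the whole customer base in the data lake.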

Allocate Bandwidth in Real-time

Certain applications hog bandwidth and can reduce service quality for others accessing the network. Network administrators cannot foresee the launch of new hyper-popular apps that cause spikes in bandwidth consumption and then slow performance. Operators must respond to bandwidth spikes quickly, to reallocate resources and maintain SLAs.

Streaming data in Hadoop helps network operators visualize spikes in call center data and nimbly throttle bandwidth. Text-based sentiment analysis on call center notes can also help operators understand how these spikes affect customer experience. This insight helps maintain service quality and customer satisfaction, and also informs strategic planning for smarter networks.
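The spike detection itself can be as simple as comparing current usage against a trailing average. A minimal sketch, where the sample series and the 2x threshold are illustrative assumptions:

```python
# Flag hours where usage exceeds a multiple of the trailing mean,
# the kind of check a streaming job could run per app or cell site.
def find_spikes(series, lookback=5, factor=2.0):
    """Return indices where usage exceeds `factor` times the mean
    of the preceding `lookback` readings."""
    spikes = []
    for i in range(lookback, len(series)):
        baseline = sum(series[i - lookback:i]) / lookback
        if series[i] > factor * baseline:
            spikes.append(i)
    return spikes

# Hourly bandwidth (Gbps) for one app: steady, then a viral launch at hour 8.
hourly_gbps = [10, 11, 9, 10, 12, 11, 10, 11, 48, 52]
print(find_spikes(hourly_gbps))
```

Once a spike index is flagged, an operator (or an automated policy) can throttle or reallocate bandwidth for that application before SLAs are breached.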

Develop New Products

Mobile devices produce huge amounts of data about how, why, when and where they are used. This data is extremely valuable for product managers, but its volume and variety make it difficult to ingest, store and analyze at scale. Not all of this data gets stored for conversion into business insight, and even the data that is stored may not be retained for its entire useful life.

Apache Hadoop can put rich product-use data in the hands of product managers, which speeds product innovation. It can capture product insight specific to local geos and customer segments. Immediate big data feedback on product launches allows PMs to rescue failures and maximize blockbusters.

Get the Whitepaper

Neustar used to capture 1% of its network data and retain it for 65 days. They now capture 100% and retain it for two years. See how they use Hadoop.