In our previous post about machine learning (ML), we introduced the topic of artificial intelligence (AI) and hinted at the typical deep learning pipeline needed to apply ML to data collected with mobile network testing solutions from Rohde & Schwarz. In this post, we will demonstrate how we can leverage ML to extract more value from that data, using the practical use case of measuring the call drop ratio.

The call drop ratio (CDR) is one of the most important key performance indicators (KPIs) that operators monitor constantly, and it plays an important role in how subscribers perceive network quality.

To measure CDR, operators conduct a drive testing campaign over a specific area, making multiple calls between mobile devices (smartphones) running QualiPoc tests. Since very few calls end in a dropped state, the number of calls needs to run into the thousands to measure CDR with statistical significance.

The need for a very high number of call samples makes CDR measurements unrealistic for smaller areas, for example, shopping malls. Consequently, we end up with a single CDR value per drive testing campaign.
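To put a rough number on this, the sketch below estimates how many calls a campaign would need using a standard normal-approximation confidence interval. The assumed drop rate and precision target are illustrative values, not figures from an actual campaign.

```python
import math

# Rough sample-size estimate for measuring a rare drop rate.
# Assumed (hypothetical) values: a true CDR around 0.5 % and a
# 95 % confidence interval no wider than +/- 0.1 percentage points.
p = 0.005       # assumed true call drop ratio
margin = 0.001  # desired half-width of the 95 % confidence interval
z = 1.96        # 95 % normal quantile

n = math.ceil(z**2 * p * (1 - p) / margin**2)
print(f"Calls needed: {n}")  # roughly 19,000 calls under these assumptions
```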

Machine learning use case

Let me explain the difficulties of CDR measurement with an illustrative analogy. Imagine you are coaching pole vault athletes, preparing them for the Olympics, and you want to test whether they can clear a certain height. Would you make them jump a thousand times and measure their success rate?

Obviously not. Instead, you would pay close attention to a small number of jumps and extract meaningful information from them: the margin by which the athletes succeed, the state of the bar after they pass, whether they only grazed the bar or knocked it off when they failed, and so on.

If we want to apply the same principle to CDR calculations, we need a way to score each call based on how close it came to dropping, regardless of its outcome. This score indicates the stability of the call by measuring how similar it is to the calls that end up dropping.

The Call Stability Score

The Call Stability Score (CSS) is our patent-pending technology. It rates the stability of each call based on high-dimensional datasets covering hundreds of thousands of good and bad calls.

CSS depends on how multiple features evolve over time (a dropped call is typically caused by an event that occurred a few seconds earlier). Therefore, the input to the function that computes CSS is very high-dimensional, as it contains the time series of multiple KPIs over a defined period. Fortunately, there are ML model architectures designed specifically for sequences of features, such as recurrent neural networks based on LSTM (Long Short-Term Memory) cells.
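As a minimal sketch of what such a sequence model could look like, the PyTorch module below maps one call's KPI time series to a score between 0 and 1. The network size, KPI count, and sequence length are assumptions for illustration and do not reflect the actual CSS model.

```python
import torch
import torch.nn as nn

class CallStabilityLSTM(nn.Module):
    """Illustrative sketch (not the actual R&S model): an LSTM that maps
    the KPI time series of one call to a stability score in [0, 1]."""

    def __init__(self, n_kpis: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_kpis, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_kpis), e.g. signal and throughput
        # KPIs sampled once per second during the call
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

model = CallStabilityLSTM(n_kpis=8)
scores = model(torch.randn(4, 30, 8))  # 4 calls, 30 s of 8 KPIs each
```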


The figure shows a real plot of different call tests after being scored by our ML model. We provided a balanced set of dropped (red dots) and non-dropped (green dots) calls. The input feature space was reduced to two dimensions using PCA (Principal Component Analysis) for visualization purposes.
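A projection like this can be reproduced with a few lines of scikit-learn and matplotlib. The sketch below uses random placeholder data in place of real call features; the array shapes and labels are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Placeholder data: 500 calls, each with a flattened KPI time series
# (e.g., 30 time steps x 8 KPIs); y = 1 marks a dropped call.
rng = np.random.default_rng(42)
X = rng.standard_normal((500, 240))
y = rng.integers(0, 2, 500)

X2 = PCA(n_components=2).fit_transform(X)
plt.scatter(X2[y == 0, 0], X2[y == 0, 1], c="green", s=8, label="non-drop")
plt.scatter(X2[y == 1, 0], X2[y == 1, 1], c="red", s=8, label="drop")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.legend()
plt.show()
```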

After a lengthy training process, our ML model finds the boundary that separates dropped from non-dropped calls with a minimal margin of error. In the 2D plot this boundary is a line, in a similar 3D plot it would be a plane, and in the full feature space it is a hyperplane. The hyperplane is the basis of the score, which is obtained by measuring the Euclidean distance of a particular call to the learned hyperplane and scaling it to a number between 0 and 1.
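The distance-to-hyperplane scoring can be sketched as follows. The sigmoid squashing and the toy weights are illustrative assumptions; the actual scaling used in CSS is not described in this post.

```python
import numpy as np

def stability_score(x, w, b):
    """Signed Euclidean distance from feature vector x to the
    hyperplane w.x + b = 0, squashed into [0, 1]. The sigmoid is an
    illustrative choice of scaling, not the published method."""
    distance = (w @ x + b) / np.linalg.norm(w)
    return 1.0 / (1.0 + np.exp(-distance))  # ~0: drop side, ~1: stable side

# Toy 2-D example mirroring the PCA plot above
w = np.array([0.8, -0.3])  # hypothetical learned normal vector
b = 0.1
print(stability_score(np.array([1.0, 0.5]), w, b))  # ~0.71
```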

Operator benefits

The Call Stability Score (CSS) offers operators many benefits. One is the removal of one of the main chance factors in benchmarking campaigns: because dropped calls are very rare in today's networks, a single extra drop can decide whether an operator wins or loses the CDR comparison.

The CSS, however, transforms the binary drop/no-drop outcome into a continuous score that rates the likelihood of a drop. An averaged CSS gives a solid indication of the drop risk with far fewer calls than a traditional CDR measurement requires. The comparison is also much fairer, as it considers all calls, not only the very few that dropped.
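The toy simulation below illustrates why averaging a continuous score is statistically more efficient than counting rare binary events. The latent risk model and the noise level are purely hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: each call has a latent drop risk; the binary
# outcome is a rare Bernoulli event, while an idealized stability
# score observes the risk directly with a little noise.
n_calls = 200
risk = rng.beta(1, 200, size=n_calls)       # mean drop risk ~0.5 %
drops = rng.random(n_calls) < risk          # binary drop outcomes
css = np.clip(1 - risk + rng.normal(0, 0.02, n_calls), 0, 1)

cdr = drops.mean()
se_cdr = np.sqrt(cdr * (1 - cdr) / n_calls)  # 0 if no call dropped at all
se_css = css.std(ddof=1) / np.sqrt(n_calls)

# Over only 200 calls, the CDR estimate hinges on whether 0, 1 or 2
# calls happened to drop; the mean CSS is far more stable.
print(f"CDR      = {cdr:.3%} +/- {se_cdr:.3%}")
print(f"mean CSS = {css.mean():.3f} +/- {se_css:.3f}")
```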

The CSS enables us to point users to potential risks and to direct optimization efforts toward areas with low scores, even if no calls dropped during the actual tests. Hence, we can anticipate issues that would otherwise go undetected after a drive testing campaign.

If you want further information about these topics, please send an e-mail to mnt-ai@rohde-schwarz.com.

Read part 1 of this series and get an idea of the role of machine learning in the telecom industry: