Outlook: Crowdsourcing Operational Excellence
We constantly update the methodology of our network test to keep pace with technological development and to ensure a valid assessment of the quality, performance and stability of the tested networks. As an additional important step, we plan to extend our test schedule with a crowdsourced examination of operational excellence. We have not quite reached this objective in Germany, Austria and Switzerland yet, but we can give a first outlook on where we are heading.
An additional important aspect of mobile service quality, beyond performance and measured values, is the actual availability of the mobile networks to their customers. Obviously, even the best-performing network is of limited benefit to its users if it is frequently impaired by outages or disruptions. Therefore, P3 has been looking into additional methods for the quantitative determination of network availability, collecting data via crowdsourcing. This method must not be confused with the drivetests and walktests described on the previous pages. We are convinced that crowdsourcing can significantly enhance benchmarking in the future.
We obviously do not intend to replace our demanding drivetesting and walktesting with this approach. The well-proven gathering of measurement values has clear advantages, as it is conducted in a very controlled environment. Crowdsourcing complements this practice by covering time periods and geographical areas beyond the driven routes.
Therefore, P3 has developed an app-based crowdsourcing mechanism to assess how a large number of mobile customers experience the availability of their mobile network. We call this aspect “operational excellence”. The detailed methodology is described in the box on the right-hand page.
Not yet part of the overall score in the 2017/2018 network test
Our clear objective for the near future is to include this “crowd score” in the overall scoring of our network test. Operational excellence will then be an additional criterion, complementing the quality and performance of voice and data connections.
However, we apply equally high standards to the crowdsourcing results as to the other parts of our network test. This applies not least to the statistical relevance of our observations. Although we have been working on the necessary preparations for some time now, the number of participants, especially in Switzerland, has not yet reached the threshold we have set. In contrast, in the network tests we recently conducted in Spain and the UK, the crowd score is already part of the overall results.
On the other hand, we did not want to withhold the results of our first three observation months (August, September and October 2017) in Germany, Austria and Switzerland. We have therefore calculated the crowdsourcing results according to the evaluation scheme described on the right-hand page, but did not include them in the results of this year’s network test. We expect, however, that they will count starting next year.
Steady in all three countries
As we have allotted ten achievable points per tested month, each contender could gather a maximum of 30 points over the three observed months. The overall crowd score thus reflects the extent of relevant network degradations in the observed months.
In Germany, only Vodafone was affected during the observation period: on the morning of October 5th, we identified a degradation lasting three hours, which led to a deduction of 1.3 points for that month. In August and September, we did not register any disruptions at Vodafone; for Telekom and O2, this holds for the entire three-month observation period.
A similar result can be seen in Austria: there, T-Mobile was affected by a degradation lasting up to two hours on the morning of August 18th. According to our evaluation scheme, this results in a deduction of 1.2 points.
In Switzerland, only Swisscom was affected. Here, we observed a degradation within a two-hour period starting at 6 am on October 13th. This also cost 1.2 points.
As the candidates scored within a very close range, including the crowd results would not have affected the overall ranking in Germany and Austria. Given the very close race in Switzerland, however, the loss of 1.2 points could have had unpleasant consequences for Swisscom. But as mentioned before, this result does not yet meet the required statistical relevance.
Crowdsourcing Methodology
Even though the crowdsourcing results are not yet part of the overall scoring of this year’s connect mobile network test, the underlying methodology is already precisely defined and ready for use.
For the crowdsourcing of operational excellence, P3 considers connectivity reports that are gathered by background diagnosis processes included in a number of popular smartphone apps. While the customer uses one of these apps, a diagnosis report is generated daily and is then evaluated per hour. As such reports only contain information about the current network availability, they comprise just a small number of bytes per message and do not include any personal user data. Additionally, interested parties can deliberately take part in the data gathering by using the “connect app” (see box below on the left).

In order to differentiate network glitches from normal variations in network coverage, we apply a precise definition of “service degradation”: a degradation is an event where data connectivity is impacted by a number of cases that significantly exceeds the expectation level. To judge whether an hour of interest is an hour with degraded service, the algorithm looks at a sliding window covering the 168 hours (seven days) before the hour of interest. This ensures that we only consider actual network service degradations, as opposed to a simple loss of network coverage of the respective smartphone due to prolonged indoor stays or similar reasons. Incidents that occur in the night hours between 0 am and 6 am are not considered.
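To make this detection logic more tangible, here is a minimal sketch in Python of how such an hourly check could work. The data structure, the three-sigma threshold and the window handling are our own illustrative assumptions; the exact expectation model and thresholds P3 applies are not published.

```python
from statistics import mean, stdev

WINDOW_HOURS = 168          # sliding window: the seven days before the hour of interest
NIGHT_HOURS = range(0, 6)   # incidents between 0 am and 6 am are not considered
SIGMA_THRESHOLD = 3.0       # illustrative value; the real threshold is P3-internal

def is_degraded_hour(hourly_loss_counts, index):
    """Flag the hour at `index` as degraded if its count of connectivity-loss
    reports significantly exceeds the expectation level derived from the
    preceding 168-hour window. `hourly_loss_counts` holds one loss-report
    count per hour, oldest first, assumed to start at midnight."""
    if index % 24 in NIGHT_HOURS:   # exclude night hours
        return False
    if index < WINDOW_HOURS:        # not enough history for an expectation level
        return False
    window = hourly_loss_counts[index - WINDOW_HOURS:index]
    expected = mean(window)
    spread = stdev(window) or 1.0   # avoid a zero threshold on a flat history
    return hourly_loss_counts[index] > expected + SIGMA_THRESHOLD * spread
```

Because the expectation level is learned from each operator's own recent history, a single phone losing coverage indoors barely moves the window statistics, while a genuine outage produces a spike far above them.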
In order to ensure the statistical relevance of this approach, a valid assessment month must fulfil clearly defined prerequisites: a valid assessment hour consists of a predefined number of samples per hour and per operator. The exact number depends on factors such as the market size and the number of operators.
A valid assessment month must comprise at least 90 per cent valid assessment hours (again per month and per operator).
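As a brief illustration, these validity rules could be checked as follows. The per-hour sample threshold of 50 is purely hypothetical, since the real value depends on market size and operator count.

```python
MIN_SAMPLES_PER_HOUR = 50   # hypothetical; the real value is market-dependent
VALID_HOUR_RATIO = 0.90     # a valid month needs at least 90 per cent valid hours

def is_valid_month(samples_per_hour):
    """`samples_per_hour` lists the sample count for each hour of the month
    for one operator. The month qualifies for scoring if at least 90 per
    cent of its hours reach the required per-hour sample count."""
    valid_hours = sum(1 for n in samples_per_hour if n >= MIN_SAMPLES_PER_HOUR)
    return valid_hours >= VALID_HOUR_RATIO * len(samples_per_hour)
```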
Sophisticated scoring model
The relevant KPIs are then based on the number of days when degradations occurred as well as the total count of hours affected by service degradations. In the scoring model that we plan to apply to the gathered crowdsourcing data, 60 per cent of the available points will reflect the number of days affected by service degradations – thus representing the larger-scale network availability. An additional 40 per cent of the total score is derived from the total count of hours affected by degradations, thus representing a finer-grained measurement of operational excellence.
Each considered month is then represented by a maximum of ten achievable points. The maximum of six points (60 per cent) for the number of affected days is diminished by one point for each day affected by a service degradation: one affected day costs one point, and so on, until six affected days in a month reduce this part of the score to zero.
The remaining four points are awarded based on the total number of hours affected by degradations. Here, we deduct 0.1 points per affected hour: a period of up to two hours costs 0.2 points, up to three hours 0.3 points, and so on.
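These scoring rules translate directly into a few lines of code. The sketch below applies the 60/40 split described above; the example values reproduce the 1.3-point deduction reported for Vodafone Germany (one affected day with three affected hours).

```python
def monthly_crowd_score(affected_days, affected_hours):
    """Score one assessment month out of ten achievable points.
    Six points (60 per cent) reflect the days with degradations, minus one
    point per affected day; four points (40 per cent) reflect the affected
    hours, minus 0.1 points per hour. Neither part can drop below zero."""
    day_points = max(6.0 - 1.0 * affected_days, 0.0)
    hour_points = max(4.0 - 0.1 * affected_hours, 0.0)
    return round(day_points + hour_points, 1)   # scoring granularity is 0.1 points

print(monthly_crowd_score(0, 0))   # 10.0 - an undisturbed month
print(monthly_crowd_score(1, 3))   # 8.7  - one day, three hours: a 1.3-point deduction
```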
Participate in our crowdsourcing
The connect app not only allows you to take part in our crowdsourcing. In addition, you receive the latest telecommunications news and can check the speed of your network with an informative speed test. The Android version additionally reveals interesting details such as data consumption and usage time per app.
Only if you agree will the app also perform completely anonymous connection tests in the background. The required data volume for these tests is less than 2 MB per month.
iOS users find the connect app here; Android users find the connect app here.