Mobile coverage reporting and requirements

While consumer experiences of mobile coverage often fail to live up to what is promised, regulators have been slow to adopt the most effective testing and reporting practices. They should prioritise the most accurate testing methods despite their additional costs, with Arcep’s approach in France one to follow.

  • Regulators tend to primarily rely on operators’ signal strength projections that can be easily estimated at scale, as opposed to third-party or crowdsourced data. Arcep in France represents the gold standard in coverage reporting, with its unique, three-pronged testing approach providing both accurate and useful results for end users.

  • Many regulators fall short of imposing a full suite of obligations, including testing along transport routes and at peak times. The varying levels of granularity required also lead to differing conclusions about the quality and coverage of mobile networks across urban and rural areas.

  • While most regulators publish coverage maps, there is considerable variation in the frequency at which they are updated and the technologies presented. Ofcom is the only regulator studied not to differentiate between 4G and 5G, while the EC plans to take a pioneering approach by requiring reporting on the availability of FWA services.

  • Ofcom’s approach to mobile coverage reporting in the UK has so far failed to closely reflect the consumer experience, with third-party testing initiatives highlighting its shortcomings. The regulator’s revised coverage tool seems more of a stopgap than a long-term solution that aligns reporting with the lived experiences of end users.

  • Regulators should prioritise accuracy in coverage reporting obligations despite the additional cost of deploying more real-world testing. Greater accuracy would improve transparency for consumers on network availability and performance, better informing purchasing decisions while promoting competition between mobile operators.

Mobile coverage reporting is largely underpinned by operator-supplied signal strength projections

International approaches to mobile coverage reporting and the associated requirements for operators vary significantly. In most cases, operators are responsible for testing the availability of mobile services via signal strength projections (see Table 1), which are typically derived from computer modelling that simulates how mobile signals propagate from masts and how they are impacted by obstructions, including buildings, hills and trees. Regulators (e.g. in France, the UK and the US) typically require such tests to be conducted on a recurring basis to keep results consistent and up to date. Meanwhile, the Swedish Post and Telecom Authority (PTS) takes a unique approach, requiring operators to respond to an annual survey detailing their projections at a given moment in time. Although the Australian Government’s three-year mobile coverage audit is not due to be completed until June 2027, the sectoral regulator, the Australian Communications and Media Authority (ACMA), recently announced the Telecommunications (Mobile Network Coverage Map) Industry Standard 2026, which will require operators to conduct tests based on predictive modelling every three months.
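As an illustration of what such predictive modelling involves, the sketch below implements a basic log-distance path-loss model in Python. All parameter values are illustrative assumptions, not those of any operator or regulator, and real planning tools layer terrain and clutter data on top of this kind of formula:

```python
import math

def received_power_dbm(tx_power_dbm: float, distance_m: float,
                       ref_distance_m: float = 1.0,
                       path_loss_at_ref_db: float = 40.0,
                       path_loss_exponent: float = 3.0) -> float:
    """Predict received signal strength with a log-distance path-loss model.

    The exponent is ~2 in free space and higher in cluttered areas, which is
    how obstructions such as buildings and trees are approximated at scale.
    All default values here are illustrative only.
    """
    path_loss_db = path_loss_at_ref_db + (
        10 * path_loss_exponent * math.log10(distance_m / ref_distance_m))
    return tx_power_dbm - path_loss_db

# A mast transmitting at 43 dBm (~20W), predicted 2km away: roughly -96 dBm.
prediction = received_power_dbm(43.0, 2000.0)
```

The appeal of this approach for regulators is clear: one formula, evaluated over a grid, covers an entire country at negligible cost, which is precisely why it tends to displace costlier real-world testing.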

Signal strength projections are sometimes paired with crowdsourced data, which is submitted to regulators, usually via a dedicated app. Previously, the Australian Government had adopted an alternative approach, relying on third-party testing alongside crowdsourcing rather than primarily on operator projections. However, ACMA’s new rules will require operators to publicly report like-for-like information on coverage based on projected signal strengths from 30 June 2026. The regulator has also stated that it will explore whether operator-supplied data can be enhanced by alternative sources of information, such as crowdsourcing and field testing. Telstra has criticised ACMA’s new rules, arguing that consumers may be confused by operators labelling certain areas as having “no coverage” when terrestrial signal strengths are reported below -115 dBm, as many consumers may still be able to use mobile services in these areas. The criticism follows reports that the rules will likely force Telstra to strip back its current coverage claims by approximately 1m km² – coverage its competitors have already accused it of overstating.
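Telstra’s objection comes down to where the “no coverage” line sits. The sketch below, using hypothetical predicted values for a handful of grid squares, shows how the threshold alone determines how much area an operator can claim as covered:

```python
# Hypothetical predicted signal strengths (dBm) for seven grid squares.
predictions = [-92, -104, -111, -116, -118, -121, -126]

def covered_count(values, threshold_dbm):
    """Count grid squares whose predicted strength meets the threshold."""
    return sum(1 for v in values if v >= threshold_dbm)

# At ACMA's -115 dBm floor, only three of the seven squares count as covered,
# while a more permissive (illustrative) -125 dBm line would claim six.
at_acma_floor = covered_count(predictions, -115)   # 3
permissive = covered_count(predictions, -125)      # 6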

In France, Arcep’s approach stands out through its own real-world testing, which serves as the primary input to its mobile coverage checker, “Mon réseau Mobile”. The regulator conducts field tests, which it integrates with both operator projections and crowdsourced data. Arcep’s unique, three-pronged testing methodology provides its checker with more accurate results than any of the other regulators studied. In March 2026, Arcep announced a partnership with the network performance testing firm Ookla, adding over 400,000 new, independent speed tests to its mobile coverage checker and further strengthening its reliability and robustness.

Many regulators are yet to take a comprehensive approach to testing based on granularity, transport routes and peak times

While testing obligations imposed by regulators tend to centre on operator data, differences arise in the granularity, locations and timings of those testing activities. On granularity, although both Ofcom in the UK and the EC in the EU utilise the same method for operator projections across a pixelated grid, the EC’s approach is more accurate due to its 20mx20m grid squares as opposed to Ofcom’s 50mx50m squares – see Table 2. In the US, the Federal Communications Commission (FCC) requires a “100m resolution” of results, while Arcep requires that at least 10 measurements be taken per km².
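These resolutions imply very different volumes of data per unit area. A quick back-of-the-envelope comparison, using the grid sizes quoted above:

```python
def squares_per_km2(grid_size_m: float) -> int:
    """Number of square grid cells of the given side length in 1 km²."""
    return int((1000 / grid_size_m) ** 2)

ec = squares_per_km2(20)      # 2,500 projections per km² under the EC's grid
ofcom = squares_per_km2(50)   # 400 under Ofcom's 50mx50m squares
fcc = squares_per_km2(100)    # 100 at the FCC's 100m resolution
arcep_minimum = 10            # Arcep instead mandates >=10 measurements per km²
```

The EC’s grid therefore demands over six times as many projections per km² as Ofcom’s, while Arcep’s floor of 10 measurements per km² trades spatial density for guaranteed real-world sampling everywhere.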

These different approaches come with both strengths and weaknesses. Through Ofcom’s checker, crowdsourced test results from identical pixelated grid squares may fail to represent consumer experiences in more rural, less populated areas, where fewer tests take place. Arcep’s methodology of taking a larger area but ensuring a minimum number of tests is conducted will likely better represent consumer experiences in these more rural areas. Arcep’s coverage checker also allows end users to see the range of testing results in any given area, increasing transparency around why each area is rated positively or negatively. However, Ofcom’s and the EC’s proposed granularity may provide more accurate results in more densely populated urban areas, where more crowdsourced testing is likely to occur.

Testing locations vary too, with most regulators requiring operators to report some form of road or rail coverage – although Arcep and the PTS go the furthest in this regard. In February 2026, the PTS proposed a public funding programme for road and rail connectivity to support operators in meeting the Swedish Government’s requirements for minimum download speeds of 50Mbps on all public roads and railways by 2030. Arcep similarly provides specific information on rail coverage, even adjusting its public information based on certain rail routes.

Requiring consideration of peak times is uncommon among coverage reporting rules, with the EC the only regulator contemplating such an obligation. The EC’s proposal would require operators to project peak time cell loads in urban, suburban and rural areas. Factoring in peak times would improve accuracy and provide consumers with specific, relevant information. However, the approach still lacks accuracy given its self-admitted focus on comparability, with the EC keen to enable straightforward assessments of Member States side by side – something it considers would be easier with signal strength projections than with real-world testing. Arcep considers peak times by enabling the public to report real-time coverage discrepancies, which are immediately integrated into its coverage maps. This live reporting mechanism is likely to be particularly useful for informing end users in the event of network outages.

While most regulators publish coverage maps, some go further than others in enabling end users to see the range of technologies available to them

Regulators in many jurisdictions continue to report on a variety of available technologies, often going further than measuring just mobile broadband services or differentiating between 4G and 5G – see Table 3. Most interestingly, the EC’s planned approach would require operators to submit projections for the coverage and quality of fixed wireless access (FWA) services. However, this proposal has raised some concern among industry stakeholders, as the EC also plans to utilise only outdoor coverage projections. These may not adequately represent the experiences of consumers using FWA services indoors as an alternative to traditional fixed broadband – the most common use of the technology.

Most regulators publish interactive coverage maps for consumers to access and easily check coverage in their area. Australia will be a notable exception to the rule following ACMA’s publication of the Telecommunications (Mobile Network Coverage Map) Industry Standard 2026, which will require operators to publish coverage maps from 30 June 2026, as opposed to the regulator itself publishing a centralised map showing all results. A similar approach has been taken by the Telecom Regulatory Authority of India (TRAI), which also requires operators to publish their own maps, with the objective of encouraging further network deployment and competition in unserved or underserved areas.

The frequency with which regulators update these maps differs too, limiting the reliability of maps that rely on outdated results. The UK and the US use quarterly and continuous updates, respectively, while others, such as Arcep, publish results gathered over the course of a year. The Australian Government’s current approach is notable too due to its ongoing audit of mobile coverage, which is updated regularly but will only last until mid-2027. These publications tend to align, or at least partially align, with the testing periods used by regulators. For example, Arcep’s recurring tests over the course of one year are published annually, although its live information on outages and unavailable masts is updated on a continuous basis.

Ofcom’s approach to mobile coverage reporting in the UK has so far failed to closely reflect the lived experiences of many consumers

In the UK, Ofcom’s original mobile coverage checker had tended to show more positive coverage projections than many consumers’ real-world experiences, prompting the Government to urge the regulator to better enable consumers to determine which network best serves their needs. In a letter to Ofcom, Sir Chris Bryant (former Secretary of State, DSIT) stressed that he “cannot overstate how important” he believed the issue to be, adding that more accurate mobile coverage data on a local, regional and national level would enable consumers and businesses to make informed choices about which operator to choose, while also helping to inform government policy decisions.

Ofcom’s mapping under its old approach (left) indicated significantly better coverage in the Wolverhampton area than the mapping from third-party network testing firm Streetwave (right), which identified a number of zones as having poor coverage – see Figure 1. Although operators tended to agree that Ofcom’s coverage checker was not completely representative of real-world experiences, they have also argued that Streetwave’s measurements are inaccurate and have criticised its testing methods, such as drive testing coverage via bin lorries. Ofcom’s revised coverage map, “Map Your Mobile” (middle), published in June 2025, aligns more closely with Streetwave’s analysis, signalling a more accurate representation of consumer experiences. However, there is still a noticeable difference in the quality of coverage being reported by Ofcom and Streetwave, showing that although an improvement, the regulator’s updated checker still fails to pick up on, or at least highlight as clearly, areas of poor coverage.

This difference is likely caused by Ofcom’s and Streetwave’s different testing methods. Streetwave relies solely on real-world testing, which is more representative of end user experiences than Ofcom’s reliance on signal strength projections complemented by crowdsourced data. Ofcom’s previous requirement that operators submit their projections in 100mx100m pixelated zones, a scale that arguably lacked granularity, may have further impacted its old checker’s accuracy. In the updated checker, this has been reduced to 50mx50m squares, which should lead to slightly more accurate projections being submitted. However, this change has also been criticised, with some stakeholders arguing for an even more detailed 25mx25m grid, moving closer to what has been proposed in the EU. Even then, these grid-based projections struggle to fully account for the country’s varied geography, rurality and urban development, given the blanket approach applied across the whole country.

Ofcom’s updated tool seems to be more of a stopgap than a long-term solution to mobile coverage reporting. The revised checker does not appear to go far enough in closing the gap between reported coverage and the lived experiences of end users. It also no longer differentiates between 4G and 5G coverage, instead looking at the two together, with broader coverage ratings being reported. This approach raises questions about the extent to which consumers can be expected to understand the value of 5G relative to earlier generations, especially given the recent rapid expansion of standalone 5G (5GSA) coverage and the plans to go further already clearly communicated by operators.

Ofcom plans to continue improving its checker, eventually hoping to use more crowdsourcing data rather than relying on projected signal strengths. The regulator is also considering how resilience data could be included, as Arcep has done through the live reporting of network outages. The revised checker is, however, already leading the way internationally in some areas. For example, Ofcom’s crowdsourcing tool runs in the background of consumer apps, testing coverage periodically. This is a positive shift from typical crowdsourcing methods such as those used in the US, which require end users to test speeds manually. By comparison, Ofcom’s approach should yield a far greater volume of results, delivering more accurate data.

By prioritising accuracy, regulators can help better inform and empower consumers, and promote competition between operators

Increased testing accuracy is crucial to aligning reported statistics with end users’ lived experiences of mobile coverage. However, many regulators still rely predominantly on operators’ signal strength projections, which may support comparability but lack accuracy in practice, deploying only a limited amount of the real-world testing that can be more costly and burdensome. This trade-off between accuracy and cost is a common challenge for regulators, who rarely seem to implement the sufficiently wide range of testing and reporting options that would provide the most accurate, and in turn most valuable, results for consumers. As a result, most international approaches are not yet accurate enough for consumers to make the most informed choices about which operators they use, with limited information available on network quality at peak times or coverage along major transport routes.

Arcep currently sets the benchmark in terms of prioritising accuracy. The latest update to its mobile coverage checker (completed in July 2025) significantly improved consumers’ ability to access accurate information on network coverage and performance, with enhanced mapping using satellite technology, easily downloadable maps, clearer distinctions in 4G coverage quality and the ability to view the results of service quality tests broken down by operator. The revised maps also provide daily updates about unavailable masts that are damaged or under maintenance, and will adapt territorial statistics based on different areas, considering factors such as different elevations, the locations of masts and the technologies these support. Regulators should look to adopt the best practices taken by Arcep and others (see Figure 2) in order to empower consumers and to encourage operators to go further in strengthening mobile coverage.

Requirements for more accurate mobile coverage reporting would benefit consumers by increasing the transparency and accessibility of data on network performance and availability in the areas that they live and work. With this information, consumers would be better placed to make informed purchasing decisions when choosing or considering switching mobile operators. Improved accuracy would also enable local authorities to better assess mobile network deployments in their respective communities, strengthening their understanding of where additional infrastructure may be needed most and potentially encouraging more receptive approaches to future rollouts, particularly via the planning permissions process.

Such obligations could have a positive impact on the intensity of competition between mobile operators. For example, where coverage is more accurately reported to the public, operators may face stronger incentives to improve coverage where it is deemed unsatisfactory. Similarly, if an operator can see that a competitor’s network availability or performance is limited in certain areas, this may indicate where to deploy networks to offer end users a better alternative – potentially encouraging some to switch. In both cases, the effect could be to drive additional investment from operators or, at the very least, signal which areas rollouts should target. The additional costs of more accurate testing would therefore be outweighed by the prospect of increased competition and investment, benefitting both consumers and operators, and potentially the economy at large.