If random signal quality samples are taken, you can estimate the
percentage of locations that meet or exceed the coverage design
threshold. The associated degree of confidence will then depend
on the number of samples you take.
Suppose a radio network has a specified area reliability of 90%, and the coverage design threshold is -100 dBm signal strength. You require a confidence level – the likelihood that the test result is accurate – of 99%.
The number of samples will depend on the reliability
specification and predicted reliability – the smaller the gap
between them, the more samples are required. Let’s look at some
possible outcomes, based on different numbers of samples.
Looking at the table below, the first two examples fall well
short in terms of confidence, due to their very small sample
counts. The third example exceeds both the reliability and
confidence targets, suggesting that the system may in fact have more
radio sites than necessary to meet the specified criteria.
Samples   ≥ -100 dBm   < -100 dBm   Measured reliability   Confidence*   Acceptable
    10         9            1              90%                50%            No
    20        19            1              95%                77%            No
   200       191            9              95.5%              99.5%          Yes
   900       831           69              92.3%              99.0%          Yes

*using the estimate-of-proportions technique
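The confidence column can be reproduced with a short calculation. Below is a minimal sketch of the estimate-of-proportions technique the footnote refers to, assuming a one-sided test of the measured proportion against the 90% design reliability, using the normal approximation to the binomial:

```python
import math

def confidence(n_samples, n_pass, design_reliability=0.90):
    """One-sided confidence, via the normal approximation to the binomial
    ('estimate of proportions'), that the true area reliability meets the
    design figure, given the measured pass count."""
    p_hat = n_pass / n_samples
    std_err = math.sqrt(design_reliability * (1 - design_reliability)
                        / n_samples)
    z = (p_hat - design_reliability) / std_err
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

for n, k in [(10, 9), (20, 19), (200, 191), (900, 831)]:
    print(f"{n:4d} samples, {k:3d} at or above threshold: "
          f"measured {k/n:.1%}, confidence {confidence(n, k):.1%}")
```

Running this reproduces the table's confidence figures: 50% for 10 samples, 77% for 20, 99.5% for 200, and 99.0% for 900.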
The final example – with 900 samples and a measured reliability of
around 92% – meets the confidence criterion, and best represents
a realistic, well-executed Coverage Verification Test.
What happens if we increase the confidence figure further?
Diminishing returns set in quickly: 99.9% confidence
requires around 2150 samples – more than double the 900 used
above – so the sampling cost can get out of hand.
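The 2150-sample figure is consistent with inverting the same estimate-of-proportions formula. A minimal sketch, assuming the measured reliability holds at roughly 92% against the 90% specification (the required count is very sensitive to that assumed margin):

```python
import math
from statistics import NormalDist

def samples_needed(target_confidence, expected_reliability,
                   design_reliability=0.90):
    """Invert the estimate-of-proportions confidence formula: the smallest
    sample count at which the expected measured reliability clears the
    target one-sided confidence against the design reliability."""
    z = NormalDist().inv_cdf(target_confidence)      # one-sided z-score
    margin = expected_reliability - design_reliability
    variance = design_reliability * (1 - design_reliability)
    return math.ceil(variance * (z / margin) ** 2)

# Assuming a measured reliability of about 92% against the 90% spec:
print(samples_needed(0.990, 0.92))   # on the order of 1200 samples
print(samples_needed(0.999, 0.92))   # on the order of 2150 samples
```

This also shows why the table's 900-sample example passes: its wider measured margin (92.3% vs. 90%) lowers the count needed for 99.0% confidence.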
To sum up, the theoretical nature of coverage prediction
can provide only part of the story. Physical, on-location signal
measurements from a well-designed and executed coverage
verification test can verify it in a robust, repeatable way.