The Spectrum Is Not Enough


No, this article is not about a new James Bond movie (see “The World Is Not Enough”) protecting the world from the takeover of all spectrum resources by an evil dictator, a terrorist organization, or one power-crazed tycoon. I have devoted much time and effort to discussing the more and the less preferable policies and tactics for ensuring that the best possible use is made of the spectrum available for mobile communications; it is now time to emphasize that spectrum alone is not sufficient. The motivation for pointing out the limitations of spectrum, which should matter to anyone concerned about networks, whether wireless-based or not (i.e. everybody), is that too many statements are flying around of the kind, “The Internet is going wireless,” or “We know that broadband connections will become wireless.”

Neither statement is true, nor can either become true, if it is taken to mean that, as in voice communications, individuals will be able, if they so wish, to rely entirely upon broadband wireless or mobile access and give up, or never bother with, a fixed access subscription. The second statement is at least valid in the literal sense that there will be, and already are in some locations, more wireless than fixed broadband subscriptions. But that is not the same as saying that broadband wireless connections are alternatives to fixed ones which can substitute for the latter to meet the majority of broadband demands. Until and unless the current laws of physics are invalidated in ways that remove present limits on spectrum capacity, such as those embodied in Shannon’s Law, the future will see: (a) the vast majority of broadband traffic (as distinct from the number of broadband subscriptions) continuing to be carried (delivered and transmitted) over fixed access networks; and (b) demands for broadband traffic from wireless or mobile subscribers outstripping the capacity of all the bandwidth available to radio access networks, even with the use of whatever new spectrum can be allocated and the deployment of more spectrally efficient technologies. A very relevant and related fact is that the bandwidth within one optical fiber is vastly greater than all the bandwidth that might theoretically be made available for mobile communications, even if every megahertz were refarmed for mobile services. A single-mode fiber has a bandwidth of as much as 100,000 GHz, or 100 terahertz, whereas the total valuable spectrum for mobile communications provides bandwidth of at most about 3 GHz.
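
To make the scale of that gap concrete, here is a minimal sketch in Python of the Shannon-Hartley limit, C = B log2(1 + SNR), applied to the two bandwidth figures above. The 20 dB signal-to-noise ratio is an illustrative assumption, not a measured value; the point is simply that the capacity limits differ by the same factor as the bandwidths.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Figures cited in the text (order-of-magnitude, not measured values):
fiber_bandwidth_hz = 100e12   # ~100 THz usable in a single-mode fiber
mobile_spectrum_hz = 3e9      # ~3 GHz of spectrum usable for mobile services

snr = 10 ** (20 / 10)         # assume the same 20 dB SNR on both media (illustrative)

print(f"Fiber capacity limit : {shannon_capacity_bps(fiber_bandwidth_hz, snr) / 1e12:,.0f} Tbps")
print(f"Mobile capacity limit: {shannon_capacity_bps(mobile_spectrum_hz, snr) / 1e9:,.0f} Gbps")
print(f"Bandwidth ratio      : {fiber_bandwidth_hz / mobile_spectrum_hz:,.0f} to 1")
```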

Implications of “Laws” and Forecasts

There are two other “Laws” and one forecast that are relevant in this context, although unlike Shannon’s Law neither of these laws is grounded exclusively in basic science, and hence neither enjoys the stature, longevity, or (for now) irrefutable validity of the latter. They are: (1) Nielsen’s Law of Internet bandwidth, that a high-end user’s connection speed grows by 50% per year; (2) Moore’s Law, that the number of transistors on a computer chip, which is a rough measure of computer processing power, doubles every 18 months; and (3) Cisco’s forecast, that the volume of mobile data traffic will increase by almost 40 times over the 5-year period from 2009 to 2014 (or about 110% per year). Nielsen’s Law, first announced in 1998, has held true since its proclamation, while Moore’s Law, first described in 1965, has been followed very closely to this day.

Presumably both Nielsen’s and Moore’s Laws will break down at some point, unless there is an unpredictable breakthrough along a new technological trajectory. Improvements in a specific measure of performance eventually run up against techno-economic limits, just as, for example, the speed of commercial air travel has not increased significantly since the 1960s. Today it is believed that Moore’s Law should hold until 2015, although of course all forecasts of this kind may be rendered inaccurate by future developments. Moore’s Law supports Nielsen’s Law, since the greater the power of available computing devices, the more likely they are to support and attract data- and processing-intensive applications and services that generate corresponding increases in communications traffic. If Nielsen’s Law holds for 5 more years, high-end connection speeds will increase by a factor of about 7.6 over this period, and by about 58 times if it holds for the next ten years. If Cisco’s forecast is accurate, mobile data traffic will grow by about 40 times over 5 years, and by over 200 times over 10 years even if the average annual growth rate slows to a “mere” 40% per year during the second 5-year period. Under these conditions there is no way that a combination of additional spectrum and the deployment of more spectrally efficient wireless technologies will be able to handle the volume of mobile broadband traffic that may be generated within the next decade.
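
The compounding behind these figures is easy to check. The short Python sketch below reproduces the numbers above from the published growth rates; the 40% per year slowdown in the second five years is the assumption stated in the text, not an independent forecast.

```python
# Compounding arithmetic behind the growth figures cited above.
nielsen_annual = 1.50        # Nielsen's Law: +50% per year in high-end connection speed
cisco_5yr_factor = 40        # Cisco forecast: ~40x mobile data traffic, 2009-2014
slowed_annual = 1.40         # assumed slowdown to +40% per year in the second 5 years

print(f"Cisco forecast, annualized   : {cisco_5yr_factor ** (1 / 5) - 1:.0%} per year")   # ~109%
print(f"Nielsen's Law over 5 years   : {nielsen_annual ** 5:.1f}x")                       # ~7.6x
print(f"Nielsen's Law over 10 years  : {nielsen_annual ** 10:.0f}x")                      # ~58x
print(f"Mobile traffic over 10 years : {cisco_5yr_factor * slowed_annual ** 5:.0f}x")     # ~215x
```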

Nevertheless these figures are not definitive enough to build a business plan on, and then bet everything on their coming to pass, without serious consideration of alternative scenarios and contingencies. They may prove to be substantially inflated or over-optimistic, like the projections that led up to the Internet and telecommunications “bubbles” at the beginning of this century. Even so, these numbers do justify careful evaluation of their implications in relation to the maximum increases in mobile network capacity likely to be deployed over the next 5 to 10 years. Increases in network capacity will depend on a combination of the deployment of more spectrally efficient wireless technologies and the availability of new spectrum in which to deploy them. Under the most optimistic assumptions, twice as much spectrum may be made available over the next decade as is currently allocated, counting all spectrum below 3 GHz that is suited to mobile services and either already allocated to them or potentially remapped and refarmed from other uses. The deployment of new technologies might increase capacity per MHz by at most 3 to 5 times. So an increase in network capacity of as much as ten times, but no more, may be achievable.
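
The headroom arithmetic under these optimistic assumptions is simple enough to set out explicitly; the multipliers in the sketch below are the ones stated above, not independent estimates.

```python
# Headroom estimate under the optimistic assumptions stated above.
spectrum_multiple = 2.0            # up to twice today's allocated spectrum within a decade
efficiency_range = (3.0, 5.0)      # new technologies: 3x to 5x more capacity per MHz

low = spectrum_multiple * efficiency_range[0]
high = spectrum_multiple * efficiency_range[1]
print(f"Plausible network capacity gain: {low:.0f}x to {high:.0f}x")   # 6x to 10x
# Compare with forecast demand growth of ~40x over 5 years and ~200x+ over 10 years.
```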

Congestion caused by the use of smartphones has so far been, and may well remain, a limited phenomenon confined to a few (but economically very important) locations at particular times of day. Nevertheless, a comparison of the likely maximum achievable increase in mobile network capacity with the possible growth in demand from smartphones and broadband wireless-equipped laptops, tablets and other devices indicates that congestion is likely to become more common and widespread than the instances of iPhone-driven bottlenecks reported by AT&T in a few U.S. cities and by O2 in London. These bottlenecks stimulated additional investments by AT&T in backhaul capacity and in refarming its 850 MHz frequencies with HSPA to deliver more capacity within the congested locations involved, such as parts of New York City and San Francisco.

Arguments against a Spectrum Shortage

There are of course counter-arguments that tend to refute the expectation of a looming and substantial shortage of spectrum to accommodate mobile broadband traffic in the locations of highest demand. Another “Law” can be invoked, namely Cooper’s “Law” of wireless capacity, an observation by the mobile phone pioneer Martin Cooper. This Law refers to the number of "conversations" (voice or data) that can theoretically be conducted in a given area using all useful radio spectrum. It turns out that this number has doubled every two-and-a-half years for just over a century.

According to ArrayComm, a company founded by Martin Cooper, the technological approaches that have fueled this remarkable increase in the capabilities of wireless communications can be broadly categorized as:

  • Frequency division
  • Modulation techniques
  • Spatial division
  • Increases in the amount of usable radio frequency spectrum

Of the million-fold improvement over the past 45 years, roughly a factor of 25 can be ascribed to being able to use more spectrum, and a factor of 5 to the ability to divide the radio spectrum into narrower slices, i.e. frequency division. Modulation techniques like FM (frequency modulation), SSB (single sideband), time division multiplexing, and various approaches to spread spectrum can take credit for another factor of roughly 5. The remaining factor of sixteen hundred has come from confining the footprint used for individual communications to smaller and smaller areas, i.e. the cells in cellular networks that permit spectrum re-use.
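
These factors multiply out to the overall million-fold gain, as the following short sketch confirms (the individual figures are those attributed to ArrayComm above).

```python
# The per-approach factors multiply out to the overall million-fold gain.
gains = {
    "more usable spectrum":            25,
    "frequency division":               5,
    "modulation techniques":            5,
    "spatial re-use (smaller cells)": 1600,
}

total = 1
for source, factor in gains.items():
    total *= factor
    print(f"{source:34s} x{factor}")
print(f"{'combined improvement':34s} x{total:,}")   # -> x1,000,000
```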

The importance of spectrum re-use for making more effective use of the spectrum is even greater than reflected in these figures. Frequency division and various modulation techniques have generated about as much progress as they can. Further gains are expensive and limited by Shannon's Law, which sets a hard limit on the amount of information that can be delivered in a given bandwidth with a given signal-to-noise ratio.

There is no comparable theoretical limitation on the re-use of radio spectrum. Wireline networks can be expanded almost indefinitely through spatial re-use, by installing more lines, each with more bandwidth, to more terminals. If reliable broadband wireless links could be constructed in an analogous manner between any two points, with independent connections to points separated by only a few feet, then the effectiveness of spectrum use could potentially be increased by up to millions of times over today's capabilities. If this progress occurred at a rate of doubling every 2.5 years, then in another 60 years the entire radio frequency spectrum, or a substantial proportion of it, could be delivered to every single individual. In this ultimate scenario every mobile user could have access to one to a few GHz of bandwidth, assuming that cognitive radio techniques enable access on demand to all available (i.e. currently unused) spectrum, even frequencies allocated on a primary basis to uses other than mobile communication but in practice only rarely occupied.
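
For a sense of scale, doubling every 2.5 years compounds as follows; this is a purely illustrative extrapolation of Cooper's observation, not a prediction.

```python
# Extrapolating Cooper's observation: capacity doubles every 2.5 years.
DOUBLING_PERIOD_YEARS = 2.5

def cooper_gain(years):
    """Capacity multiple after the given number of years of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

print(f"Gain over  45 years: {cooper_gain(45):,.0f}x")    # ~262,000x
print(f"Gain over  60 years: {cooper_gain(60):,.0f}x")    # ~16.8 million x
print(f"Gain over 100 years: {cooper_gain(100):.2e}x")    # just over a century of doubling
```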

This line of reasoning may be used to justify the conclusion that, with a combination of new technologies and new regulatory approaches to the use of spectrum, there should be no spectrum shortage even for the huge amounts of broadband traffic that each user may generate and receive in a future as envisioned by Cisco. However, other realities, economic, operational, timing-related, and competitive, must also be taken into account, along with two observations: (1) much mobile use actually takes place within buildings (40% or even up to 80% depending on the location), in contexts that are fixed or nomadic from the user’s perspective; and (2) a small proportion of users account for a disproportionately large share of the traffic, in a pattern similar to, if quantitatively different from, the 80:20 Pareto principle. It seems highly improbable (consider the procedures involved and the length of time it takes just to allocate additional spectrum today) that a combination of drastically new design and regulatory principles for the allocation and use of spectrum could be implemented in time to handle the already rapidly rising tide of broadband traffic, even if the technologies were to become commercially available and affordable in the near future, which is itself highly improbable. Nor is this scenario likely to be realizable in a way that is competitive with, or more effective and less expensive than, the deployment of new fixed access networks, with their intrinsically much greater capacity in terms of Mbps/km2, to reach the majority of populations and concentrations of economic activity in urban and suburban areas, i.e. the locations in which the majority of revenue-generating users spend most of their time (the “dwell” time during which much “mobile” communication actually takes place).

Beyond Spectrum Alone

If new radio access networks alone are not sufficient, mobile operators have several other ways to mitigate the risks of congestion, complementing their exploitation of greater bandwidths and the deployment of more spectrally efficient technologies (plus increased capacity in backhaul and core networks), including:

  • Offloading of mobile traffic onto the fixed network via Wi-Fi, other short range (in-building, in-room) wireless, and femtocell connections
  • Application of web and content optimization techniques to reduce traffic volumes
  • Management of subscriber usage patterns via new pricing models and acceptably non-discriminatory traffic management techniques.

The latter two approaches are intrinsically limited in their long-term effect, since traffic management and content optimization techniques can at best reduce peak traffic loads by some fixed percentage that will be reached relatively rapidly. In contrast, the first approach (offloading) can continue to absorb a growing proportion of the traffic generated and received by mobile devices over the long term, as the capacity of fixed access networks grows with the increasing deployment of optical fiber and the expanding coverage and availability of short- and ultra-short-range high-bandwidth wireless connections to fixed network access points. Furthermore, incentives for offloading can be concentrated on influencing the usage patterns of the minority of users who, as noted above, account for a greatly disproportionate share of total traffic.

Any credible long-term broadband plan, at the level of a country or of an operator, must carefully coordinate the deployment and interworking of new fixed broadband access and broadband wireless access networks. The heavy emphasis and investment excitement of the past decade or more, focused on mobile networks, must be rebalanced to place greater weight on new fixed access infrastructure as well, if the aspirations of countries, innovators, and users with respect to broadband are to be realized.