Pricing variation, or price discrimination, was one key topic raised during the question and answer portion of the seminar. In particular, mention was made of certain statements apparently made by NCTA President and CEO Michael Powell regarding broadband pricing and network congestion.
Rather than attempt to speak for Mr. Powell -- who was not present at the seminar -- consider what he said at FSF's Fifth Annual Conference. FSF recently released the transcript for the Conference panel titled "The Right Regulatory Approaches for Video Service Providers." To let Mr. Powell speak for himself, here is an excerpt from the transcript:
[T]here are three or four rising problems for which the use of variable pricing models may be beneficial. And I emphasize "may be."
First thing, this country has an adoption gap. We are persistently stuck with a hundred million Americans who have access to broadband but are not subscribing to it. We can have all kinds of healthy debates about why that is. One of the things that price discrimination often does well is it helps penetrate parts of the market that heretofore have been unwilling to come on the Net. The greatest threat to the United States matters more than the silly debates about where we rank in the world: it's whether all of our citizens are online and have universal access to that capability. And if price discrimination can create tiers that are more affordable and more suited to the needs of that hundred million people and get them on the Net, that would be a major achievement.
That's one thing. The second thing is, when the Internet started most of us probably did roughly the same kinds of things. What we're seeing happen as the Internet grows and matures is there's a wider variation coming on about the way people use the Internet. There are power users who use massive amounts of data and gigabytes. There are those who love to cut the cord and do NetFlix streaming, and there are still plenty, probably 80% of the mass market of users who do very low bandwidth things: e-mail, Facebook, Skype, Twitter. These things do not use substantial capacity.
So as we get wider variety among the users, you do have a subsidization problem. You have people who are all paying the same price and getting different values of use. Frankly, the power elite user is enjoying the benefits of the subsidy that's being masked by an unlimited pricing model. That is not to say that model isn't simple and predictable, and you might like it for those reasons. But it does mask that cross-subsidization that in another context we worry about.
The third thing, it's not a congestion argument. It's important to just jettison this, because we're really not being honest. We're not really talking about congestion. I'll say it over and over and over again. We've been saying it for a while, but it still gets cited as what we're doing. What we're doing is what any company does that has massive, fixed costs.
We often hear our profitability talked about while people ignore completely the cost of building and maintaining the network. The network is a $200 billion expense over the last decade, and it takes $30 billion a year across all broadband providers to keep it going. That includes digging up the ground, laying wires, and keeping those wires current. You have to sink that money in the ground before you're paid one dime from a subscriber.
The question is, when you go to recover those costs, what's the fairest way to allocate those costs among the people who buy your service? If you have people who use it a little, should they pay the same as the people who use it a lot? Or, should you have the people who use it a lot pay more than the people who use it a little? That's what we're really trying to figure out: the fairest way to allocate the cost of a high fixed cost network.
And the last thing that I don't think is talked about enough is that bandwidth is not an infinite resource, whether it's wireless or wireline. You can get congestion. You can get overloading. What we have to do is make sure everybody has incentives to build for efficient broadband use. We have to do it as network engineers. But, right now, a lot of apps providers, service providers have absolutely no incentive to design their application or their services in a way that will use as little bandwidth as required. Why should they?
They don't have any cost to really deeply internalize as a consequence of it. It's like when Windows used to write software code, it could be more and more bloated, and as long as Intel kept making faster processors, it didn't matter. But, I assure you, if we go to 100 GBs, or a trillion gigabytes, software can bloat to meet that demand if there are no incentives for efficiency. And, if you want that NetFlix stream to continue at high capacity, they should also have to be concerned about efficient algorithmic design.
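Powell's cost-allocation question can be made concrete with a small arithmetic sketch. The figures below are entirely hypothetical (not drawn from any provider's actual costs or subscriber data); they simply contrast what each class of user pays when a fixed network cost is recovered through a flat price versus a usage-proportional price:

```python
# Hypothetical illustration of recovering a high fixed cost from subscribers.
# Under flat pricing, every subscriber pays the same share; under usage-based
# pricing, each subscriber's share is proportional to the data they consume.

FIXED_COST = 1_000_000  # annual fixed network cost to recover (hypothetical)

# Hypothetical subscriber base: tier -> (subscriber count, monthly GB each)
tiers = {
    "light":  (8000, 10),    # e-mail, social media
    "medium": (1500, 100),   # occasional streaming
    "heavy":  (500, 1000),   # constant high-bitrate streaming
}

total_subs = sum(count for count, _ in tiers.values())
total_gb = sum(count * gb for count, gb in tiers.values())

flat_price = FIXED_COST / total_subs  # everyone pays the same amount

for name, (count, gb) in tiers.items():
    # Each subscriber's share scales with their fraction of total usage.
    usage_price = FIXED_COST * (gb / total_gb)
    print(f"{name:>6}: flat ${flat_price:.2f} vs usage-based ${usage_price:.2f}")
```

With these made-up numbers, a light user pays $100 under flat pricing but only about $13.70 under usage-proportional pricing, while a heavy user's share rises from $100 to roughly $1,369.86 — the cross-subsidization Powell describes as being "masked by an unlimited pricing model."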