Hybrid services that span NFV and classical networking domains are either the wave of the future, or too complex, depending on who you talk to. And now, new evidence suggests that neither a hybrid nor an "NFV isolationist" view describes what's happening in the real world.
For some, hybrid services are expected to become an operational necessity, and indeed some vendors, such as HP, have built this notion deeply into their vision of the future.
An alternative school of thought argues that the complexity involved in such services is simply not worthwhile, and that it's much more likely that operators will roll out new services entirely within network functions virtualization (NFV) domains and allow the "legacy" to wither away.
However, now that operator trials are becoming more widespread, evidence is appearing to illuminate the debate, and the signs are that reality has introduced a twist: Neither school of thought appears to provide a good match for what is happening in the trenches. That means that service providers are heading off-piste, armed with radical new technology and very little in the way of reliable maps.
A messy mix
Many early NFV trials and deployments seem to center on virtual customer premises equipment (vCPE). It's an attractive first NFV use case for service providers, partly because many vendors have led with vCPE, so the equipment and software are available, but also because it's relatively easy to make a compelling business case for vCPE. Colt and China Telecom are two examples in the public domain, with many others following suit in private.
Typically, enterprise CPE is a distributed management headache for service providers, owing to the difficulty and expense involved in software upgrades and configuration changes, which require elaborate interaction with the customer, and in keeping track of what's been installed, where and why.
However, vCPE alone doth not a service make. The end-to-end service includes things like inter-site WAN connectivity, which, in a "pure" vCPE roll-out, is provided by the legacy network. The same is also true of most B2C services. Triple- and quad-play offerings are ubiquitous, and they mean that even for consumers, the CPE is usually only part of the picture.
So what exactly does virtualizing the CPE mean for the service overall? It means that the vCPE (and the NFV stack it runs in) becomes another square in the patchwork of systems, processes and data which service providers gaze into, like the tea-leaves at the bottom of a fortune teller's cup, to try to discern the condition of their services.
Will they realize a saving from this? Almost certainly. Does it fall into one of the two clear-cut camps that the debate revolves around? Not at all. It isn't part of a carefully considered "bicameral" architecture, where an uber-orchestrator fires off work-orders into classical infrastructure, and activation requests into the NFV stack, but neither is it a "pure NFV" service in which everything is managed in the automated, fault-tolerant, self-healing, virtualized environment.
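To make the distinction concrete, the "bicameral" model can be caricatured in a few lines of code. This is a purely illustrative sketch with invented names (no vendor's actual API): an uber-orchestrator that routes each service component either to a classical work-order queue or to an NFV activation request, depending on where the component lives. The point of the article is precisely that real deployments don't yet have even this tidy a split.

```python
# Toy illustration of the "bicameral" orchestration idea -- all names are
# invented for this sketch and do not reflect any real product or API.
from dataclasses import dataclass, field


@dataclass
class UberOrchestrator:
    work_orders: list = field(default_factory=list)  # classical/legacy domain
    activations: list = field(default_factory=list)  # NFV domain

    def provision(self, component: str, virtualized: bool) -> str:
        """Route a service component to the appropriate management domain."""
        if virtualized:
            self.activations.append(component)
            return f"NFV activation request: {component}"
        self.work_orders.append(component)
        return f"classical work-order: {component}"


# A vCPE service mixes both domains: the CPE function is virtualized,
# but inter-site WAN connectivity stays on the legacy network.
orch = UberOrchestrator()
orch.provision("vCPE firewall", virtualized=True)
orch.provision("inter-site WAN link", virtualized=False)
```

Even in this caricature, one end-to-end service ends up with state in two management worlds, which is the patchwork the column describes.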
It's a messy and pragmatic mix of the existing and the new, and this is entirely consistent with the history of technology evolution in the communications market. "Evolution" is the operative term here: service providers don't (or can't) do transformation, and it is naive at best for the market to continue expecting them to.
Needing new focus
The reasons for this behavior are complex and not entirely understood, but it seems likely that the present-day economics of the communications market lie at its root. Shrinking margins mean less upfront capital investment is available, even when the outcome would be a cost saving. When combined with general skepticism about the ROI of big IT projects, justifying big spend on the promise of eventual savings becomes very hard.
As I've written elsewhere, on the rare occasions when this type of undertaking is launched, it often overruns (or even fails) because of the condition of the data in manually curated, and often very old, systems.
The service provider response to this challenge is to mitigate risk, and in large-scale software and IT, that means evolution and incrementalism -- hence the curious mix of discrete service elements residing in the NFV environment participating in an overall service management lifecycle that is entirely old-school.
If this is how NFV's journey to market is going to continue to develop -- islands of virtualization increasing in size and, hopefully, eventually expanding to cover most of the services territory -- what does that mean for the designers, developers and prophets of NFV?
A new kind of focus is needed on the issue of transition. To date, the message to service providers has been to carefully plan and prepare for a substantial organizational and cultural upheaval, and either not to worry about the technical transition (because NFV will operate in isolation from existing network infrastructure) or to start architecting an OSS transformation to enable cross-domain hybrid service automation.
Yes, it's true that a great deal of cultural and organizational change is required, and yes, that change will need very careful management. However, without a map for transition that accounts for an extended period in which virtualized service components are simply part of the existing ecology of systems and processes, the industry may well be headed for more of the chaos with which it is already all too familiar.
— Leo Zancani, CTO, Ontology, special to The New IP