Data analytics, especially real-time data analytics, will inarguably play a central role in SDN/NFV deployments, but without the right tools, the huge volumes of data spraying geyser-like from the multitude of fast-moving parts cannot be digested by businesses run by humans, thinking and deciding at humdrum human scales.
Increasing the rate of change
NFV and SDN increase both the complexity of communications networks and the rate at which they change, by radically increasing the amount of automation in processes such as service provisioning and activation. This, in turn, creates a transition in which the rate of change of data about the network's structure, connectivity and more goes from being measured in days to being measured in minutes or seconds. As a result, some decision-making functions currently performed by people will be pushed down into the infrastructure, which further increases the potential rate of change.
Other industries have undergone similarly radical increases in automation, perhaps most notably the finance industry with its adoption of high-speed trading. Those industries are now heavily reliant on data analytics to observe, measure and manage, because acting at human-scale speeds just doesn't cut it anymore. There's no reason to believe that telecommunications should be any different in this respect, especially since some of the required machinery is already in place, underpinning data-heavy functions such as network performance management.
Building the right tools
Of course, it isn't news that two key requirements for successful data analytics, especially real-time analytics, are good-quality incoming data and the ability to join that data up into a big picture that can be turned into actionable intelligence. However, as an industry sector, telecommunications doesn't have a great track record with either.
So far, data quality and data integrity automation in CSPs has operated mostly at the level of individual records or fields. But many of the data problems that exist in complex infrastructure require a better understanding of what the data means in context: recognizing, for example, that a set of numbers and labels isn't just a set of numbers and labels, but that together they describe a service.
In addition, being able to measure the data's integrity currently requires a degree of manual intervention that won't be possible in the future. Ultimately, smart data quality automation needs better semantics and a domain model of some form, which is notably absent from most data quality tools on the market today.
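To make that distinction concrete, here is a minimal sketch in Python; the record layout, field names and validation rules are invented for illustration, not drawn from any real CSP inventory system. It contrasts classic per-field checks with an entity-level check that asks whether a group of records coherently describes a service:

```python
# Minimal sketch: field-level vs. entity-level data quality checks.
# Record layout, field names and rules are illustrative assumptions,
# not taken from any particular CSP inventory system.

def field_level_checks(record):
    """Classic per-record validation: each field is well-formed in isolation."""
    errors = []
    if not record.get("port_id"):
        errors.append("missing port_id")
    if record.get("vlan") is not None and not (1 <= record["vlan"] <= 4094):
        errors.append("vlan out of range")
    return errors

def service_level_check(records):
    """Entity-level validation: do these records, taken together,
    coherently describe a point-to-point service?"""
    errors = []
    endpoints = [r for r in records if r.get("role") == "endpoint"]
    if len(endpoints) != 2:
        errors.append(f"expected 2 endpoints, found {len(endpoints)}")
    vlans = {r.get("vlan") for r in endpoints}
    if len(vlans) > 1:
        errors.append(f"endpoint VLANs disagree: {sorted(vlans)}")
    return errors

records = [
    {"port_id": "eth0/1", "role": "endpoint", "vlan": 100},
    {"port_id": "eth3/7", "role": "endpoint", "vlan": 200},  # individually valid...
]

# Every record passes field-level checks...
assert all(not field_level_checks(r) for r in records)
# ...but together they don't describe a consistent service.
print(service_level_check(records))  # ['endpoint VLANs disagree: [100, 200]']
```

The point of the example is that both records pass every field-level test, yet the service they are supposed to describe is inconsistent: exactly the class of problem that needs semantics and a domain model to detect.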
Another key issue is that, of the "Three Vs" of big data (velocity, volume and variety), variety is considered by Gartner to be the most challenging and, as a result, has received little attention from toolmakers. Joining data up into a comprehensive picture of the storage, compute and network resources, virtualization infrastructure, services and customers across multiple administrative or technology domains, in order to drive informed actions in real time, is essential to success, yet many existing management tools struggle to build even a far less comprehensive picture.
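As a toy illustration of why variety is hard, consider three tiny datasets, each describing the same infrastructure in its own vocabulary and with its own keys (all names and records here are invented for the example). Even answering a simple cross-domain question means reconciling identifiers across all three:

```python
# Toy illustration of the "variety" problem: three domains describe the
# same infrastructure with different keys and vocabularies. All names
# and records are invented for the example.

virtualization = [  # the hypervisor's view
    {"vm": "vm-42", "host": "compute-7", "vnic": "vnic-a1"},
]
network = [  # the SDN controller's view
    {"port": "vnic-a1", "segment": "seg-100"},
]
services = [  # the service inventory's view
    {"service": "vpn-gold-17", "segment": "seg-100", "customer": "acme"},
]

def customers_on_host(host):
    """Cross-domain join: which customers are affected if this host fails?"""
    ports = {v["vnic"] for v in virtualization if v["host"] == host}
    segments = {n["segment"] for n in network if n["port"] in ports}
    return {s["customer"] for s in services if s["segment"] in segments}

print(customers_on_host("compute-7"))  # {'acme'}
```

Real deployments involve many more such domains, with inconsistent and constantly changing identifiers and data arriving continuously rather than sitting in neat lists, which is why tools that handle only volume and velocity fall short.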
Getting it right the second time
The good news is that telecom is not the first industry to face a sudden increase in automation, and those that went before didn't get it right the first time -- witness both the violent equity-market fluctuations caused by trend-following, high-speed trading shortly after automated trading was introduced, and the ongoing research into "sub-second instabilities" that have appeared as trading velocity continues to rise.
As the ETSI NFV ISG enters the second phase of its effort to define an open framework for NFV, network management and operations are a renewed area of focus. Hopefully this will drive interest in a more precise understanding of how NFV will be instrumented and how the data from that instrumentation will be assembled into a useful whole.
However, CSPs will need to be confident that the programmable, virtualized, next-generation networks being imagined now can be managed effectively in the future, and that confidence can't be achieved by frameworks alone. Equipment and software vendors, and operators themselves as they carry out trials, pilots and eventually the move to production at scale, should make sure that as legacy networks give way to the fast-moving networks of the future, the right tools are in place to capture the necessary data.
— Leo Zancani, CTO, Ontology, special to The New IP