The New IP network of the future sounds like something out of a science fiction novel -- connecting billions of devices to the Internet; virtualizing everything; creating new services in minutes instead of months; substituting agile software and vast pools of server resources for traditional network hardware. It sounds exciting -- like winning some cosmic networking lottery -- but is it truly a blueprint for the New IP or just a vision in a vacuum?
The New IP framework
What all this new demand and new technology means is something we've not quite gotten our heads around. If new networking demands really are changing everything, then there has to be more to the New IP than virtual versions of real devices. We have to rethink how everything fits with everything else, and build a framework that deals with the challenges of both the present and the future.
So what does it look like? In the New IP, profitability no longer means just pushing bits around. It means features and services, and it should be clear to everyone that it means the cloud. Ultimately, the top level of the network of the future will be defined by what users are willing to pay for, and those things -- whether applications or service features -- will be hosted in the cloud. Message number one for the New IP: Value migrates upward to the cloud.
The bottom of the network of the future is also clear -- it's increasingly agile optics and groomed electrical tunnels. You can't deliver services without those commoditizing bits, and so you focus your investment as an operator on the layer that creates them in the WAN and connects them in the data center at the lowest cost. Message two for the New IP: Connectivity and resiliency migrate down to optics and fabrics.
What's left in the middle is the service connectivity we have today -- Layers 2 and 3, switching and routing. These layers will be driven by the same thing that's making the cloud possible: virtualization.
Architecting for virtualization
Virtualization starts by defining abstractions to represent something -- a machine, a private network -- and then moves to create that "something" by deploying the model you've defined. Software-defined networking (SDN) and network functions virtualization (NFV) both propose a virtualization model and both rely on hosting software instances of functionality (controllers, open vSwitches, virtual network functions) in "the cloud." Naturally, as you virtualize more stuff you have to create more places to host it which means you're building more cloud resources over time.
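That abstraction-then-deployment pattern can be sketched in a few lines. This is purely illustrative -- none of the class or function names below come from a real SDN or NFV API; they only show the two steps the paragraph describes: define a model, then realize it on pooled cloud resources.

```python
# Hypothetical sketch: virtualization as "define an abstraction, then deploy it."
# All names here are invented for illustration, not from any SDN/NFV standard.
from dataclasses import dataclass, field


@dataclass
class VirtualFunction:
    """Abstract model of a network function (e.g. a virtual firewall or router)."""
    name: str
    cpu_cores: int
    memory_gb: int


@dataclass
class CloudHost:
    """A pool of server resources that can host virtual function instances."""
    name: str
    free_cores: int
    free_memory_gb: int
    hosted: list = field(default_factory=list)

    def can_host(self, vf: VirtualFunction) -> bool:
        return self.free_cores >= vf.cpu_cores and self.free_memory_gb >= vf.memory_gb

    def deploy(self, vf: VirtualFunction) -> None:
        # Turning the abstract model into a running "something" consumes resources.
        self.free_cores -= vf.cpu_cores
        self.free_memory_gb -= vf.memory_gb
        self.hosted.append(vf.name)


def place(vf: VirtualFunction, hosts: list) -> CloudHost:
    """Deploy the abstract model onto the first host with room for it."""
    for host in hosts:
        if host.can_host(vf):
            host.deploy(vf)
            return host
    # No capacity left: as the article notes, more virtualization means
    # building more cloud resources over time.
    raise RuntimeError("no capacity available")


hosts = [CloudHost("dc-east", free_cores=4, free_memory_gb=8),
         CloudHost("dc-west", free_cores=16, free_memory_gb=64)]
firewall = VirtualFunction("vFirewall", cpu_cores=8, memory_gb=16)
chosen = place(firewall, hosts)  # the model becomes a hosted instance
```

The point of the sketch is the separation of concerns: the `VirtualFunction` exists only as a model until a placement decision binds it to real resources.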
Virtualization, in the form of SDN and NFV, gradually shifts useful features upward into the cloud. At the same time, adding agility, connectivity and resiliency at the optical layer will reduce the need for Layers 2 and 3 to worry about things like error recovery. In the data center, this agility will be created through fabrics and SDN, and optical/tunnel connections will link data centers and everything in them. A new balance of value, features, and of course, investment will emerge.
Architecting for virtualization means not thinking of virtual futures in real-device terms. Saddling virtual routers and switches with real-device limits defeats the whole virtual value proposition. Real routers must be shared to make them cost-effective; today, we build VPNs and VLANs by partitioning real routers. Instead, suppose we could have virtual routers that already share resources by living in virtual machines or containers? If we could use either electrically groomed tunnels or optical paths to separate users' traffic, then we could give VPN users dedicated virtual routers and partition users at a layer below.
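One way to picture the per-tenant alternative: instead of slicing one shared router into VPN partitions, each user gets a dedicated virtual router instance, and traffic separation is pushed down to per-tenant tunnels. A minimal sketch, with all names invented for illustration:

```python
# Hypothetical sketch: a dedicated virtual router per VPN tenant, with traffic
# isolation handled by per-tenant tunnels at the layer below. Illustrative only.
from dataclasses import dataclass, field


@dataclass
class Tunnel:
    """A groomed electrical or optical path isolating one tenant's traffic."""
    tenant: str
    endpoints: tuple  # e.g. ("dc-east", "dc-central")


@dataclass
class VirtualRouter:
    """A per-tenant router instance living in its own VM or container,
    rather than a partition carved out of a shared physical router."""
    tenant: str
    routes: dict = field(default_factory=dict)

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.routes[prefix] = next_hop


def provision_vpn(tenant: str, sites: list) -> tuple:
    """Give the tenant its own router plus isolated tunnels between its sites."""
    router = VirtualRouter(tenant)
    # Chain the tenant's sites pairwise; separation happens below Layer 3.
    tunnels = [Tunnel(tenant, (a, b)) for a, b in zip(sites, sites[1:])]
    return router, tunnels


router, tunnels = provision_vpn("acme", ["dc-east", "dc-central", "dc-west"])
router.add_route("10.0.0.0/8", "dc-central")
```

The design choice the sketch highlights: the router is no longer a shared device to be partitioned, so per-user isolation becomes a property of the tunnel layer, not of router configuration.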
The New IP needs a new architecture to take advantage of agility at lower layers, and virtualization at Layer 2 and 3 to create services that look to users the same as before, but that are disconnected from old and rigid device models at the network layer.
Operationalizing the New IP
The New IP also has to be able to operationalize this new architecture. If a virtual function can run anywhere, and even everywhere if we consider spawning copies for improved performance, then when it breaks, how do you know it's broken other than that the service stops? How do you send a tech to fix a virtual breakdown?
The dynamic abstraction-to-resources relationships of virtualization can't be managed the same way as physical devices any more than they can be architected the same way. Do we simply manage resources, impress services on them, and give up on specific correlations and drill-down fault isolation? If so, we have to redefine what five-nines means. If not, we have to figure out how to create and sustain the service/resource binding.
This is what the New IP is really about -- a new architecture that takes advantage of the changing technology of the cloud and optics to rebalance work among the classic OSI layers. It's also about a new operations model that lets us provide the kind of service-level agreements we used to have, even though services are now elastic, variable and almost extemporaneous. Things like IoT or 5G or regulatory policy don't define the New IP, they simply create the environment in which it evolves. It's up to us to do the defining.
— Tom Nolle, President/Founder/Principal Analyst, CIMI Corp., special to The New IP