Oct 17, 2012
In early October, at the Interop conference in NYC, I gave a one-hour presentation on the impact of cloud computing on the network. That presentation highlighted the stark differences between the LAN and the WAN. For example, the LAN, particularly the data center LAN, is undergoing dramatic change. As part of that change, IT organizations are puzzling over whether they should replace the spanning tree protocol with shortest path bridging (SPB) or with TRILL (Transparent Interconnection of Lots of Links). At the same time, they are also puzzling over the best way to support the dynamic movement of virtual machines. Should they implement VXLAN or should they implement NVGRE?
In contrast to the LAN, the WAN is staid. In the early 2000s, IT organizations began to move away from Frame Relay and ATM and adopt MPLS WAN services. However, up until now, the conventional wisdom in the IT industry has been that there isn't a fundamentally new technology in development that will replace MPLS. A key consequence of that assumption is that, on a going-forward basis, IT organizations will have to build their WANs using a combination of MPLS and the Internet. Because of that assumption, I spent most of the WAN portion of my Interop session discussing how techniques such as WAN optimization could improve both the cost and the performance of the WAN.
Software defined networking (SDN) has the potential to change the conventional wisdom about WANs. SDN isn't a technology, but a way of building networks. While there is not uniform agreement within the industry on the definition of SDN, the general consensus is that it involves separating the control and forwarding planes of networking devices, centralizing the control plane, and providing programmatic interfaces into that centralized control plane.
While most of the interest in SDN has been focused on the data center LAN, earlier this year Google discussed how it has used SDN in the WAN to carry traffic between its data centers. According to Google, when the project began, there weren't any network devices that could meet its requirements, so the company built its own. It also built its own traffic engineering (TE) service. Its TE service collects both real-time utilization metrics and topology data from the underlying network as well as bandwidth demand from applications and services. The Google TE service uses this data to compute the best path for traffic flows and then programs those paths into its switches.
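To make the idea concrete, here is a minimal sketch of that kind of utilization-aware path computation. Everything below is hypothetical — the site names, link capacities, and loads are invented for illustration, and "installing" a path simply updates a local table rather than programming forwarding entries into real switches, as Google's TE service would.

```python
import heapq

# Hypothetical inter-data-center links: (site, site) -> capacity and
# current load, both in Gbps. These numbers are illustrative only.
LINKS = {
    ("dc1", "dc2"): {"capacity": 100.0, "load": 70.0},
    ("dc1", "dc3"): {"capacity": 100.0, "load": 20.0},
    ("dc3", "dc2"): {"capacity": 100.0, "load": 10.0},
}

def neighbors(node):
    """Yield (neighbor, link state) pairs for a site."""
    for (a, b), state in LINKS.items():
        if a == node:
            yield b, state
        elif b == node:
            yield a, state

def link_cost(state, demand):
    """Cost grows with projected utilization; an over-subscribed
    link is treated as unusable."""
    util = (state["load"] + demand) / state["capacity"]
    return float("inf") if util > 1.0 else util

def compute_path(src, dst, demand):
    """Dijkstra over utilization-based costs: prefer the path that
    keeps the network's links least loaded overall."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, state in neighbors(node):
            if nxt not in seen:
                heapq.heappush(
                    queue, (cost + link_cost(state, demand), nxt, path + [nxt])
                )
    return None

def install_path(path, demand):
    """Stand-in for programming the path into switches: here we just
    account for the new load on each traversed link."""
    for a, b in zip(path, path[1:]):
        key = (a, b) if (a, b) in LINKS else (b, a)
        LINKS[key]["load"] += demand

path = compute_path("dc1", "dc2", demand=25.0)
install_path(path, 25.0)
print(path)  # routes around the busy dc1-dc2 link: ['dc1', 'dc3', 'dc2']
```

Because the direct dc1–dc2 link is already 70% loaded, a 25 Gbps flow is steered through dc3 instead — the same basic trade-off a centralized TE service makes, just with global, real-time knowledge of every link and every application's demand.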
While still in the process of collecting operational cost data, Google believes that the final results will show a significant reduction in cost. Part of the cost savings will likely come from Google being able to leverage its TE service to increase the utilization of its WAN links, and part of the savings will likely come from simplified management of its WAN.
SDN vendors frequently cite Google’s use of SDN in the WAN as proof that OpenFlow is ready for a range of production networks. However, the WAN that Google built using SDN is a comparatively simple WAN in terms of the number of sites, links, protocols supported, and traffic flows. Another simplifying factor is that many of the flows are scheduled, which makes the traffic engineering quite a bit simpler than if the flows occurred randomly.
The bottom line is that Google’s use of SDN in the WAN is fascinating and might serve as a precursor to the broad use of SDN in the WAN. However, SDN WANs will not be mainstream in the next few years, if ever. As a result, I will continue to focus the WAN portion of my Interop presentation on techniques such as WAN optimization and will encourage IT organizations to continue to explore how they can use WAN optimization more broadly.
Image source: flickr (jugbo)