Every Platform Needs Good Foundations

As plans for 2013 kick in, more organizations are looking at adopting a new IT “platform” on which to carry their applications, functions, and services.  2012 was the year of cloud computing, at least from a vendor point of view: vendors talked the subject up, but saw mainly proof-of-concept (PoC) projects and some toe-in-the-water experimentation.

It is now likely that 2013 will be when we see a lot of “real” cloud projects being implemented — true elasticity of resources, with these resources being shared amongst multiple workloads hosted on the platform.

There is a key thing that is implicit — but often glossed over — in the previous paragraph.  Elasticity of resources includes all resources, yet the focus is often just on servers and storage, avoiding the knotty problem of what needs to be done at the network level.

Historically, data center network topologies have been hierarchical, typically a three-tier design of access, aggregation, and core switches, with all the problems that this can bring.  Much of the north-south traffic (that between an end-user’s device and a single application) is handled without too much trouble, so the business tends to believe that everything is fine.

The problems start with east-west traffic: the traffic that flows between different applications within the data center itself.  In a hierarchical configuration, data from one application has to flow up to a common switch and then back down to the other application, leading to high levels of latency.  On top of this, the spanning tree protocol (STP) shuts down redundant links to prevent data loops, which can rapidly lead to the loss of usable physical ports.
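To make the hop-count penalty concrete, here is a minimal sketch in Python; the topology sizes are illustrative assumptions rather than figures from any real deployment.  It compares switch hops for east-west traffic in a three-tier tree against a flattened two-tier leaf-spine fabric:

```python
# Minimal sketch: switch hops for east-west traffic in a classic
# three-tier tree versus a flattened two-tier (leaf-spine) fabric.
# Topology sizes here are illustrative assumptions.

def tree_hops(rack_a: int, rack_b: int, racks_per_agg: int = 4) -> int:
    """Switch hops between servers in a three-tier access/agg/core tree."""
    if rack_a == rack_b:
        return 1                    # same access switch
    if rack_a // racks_per_agg == rack_b // racks_per_agg:
        return 3                    # up to the shared aggregation switch and back
    return 5                        # all the way up to the core and back

def leaf_spine_hops(rack_a: int, rack_b: int) -> int:
    """Switch hops in a flattened leaf-spine fabric."""
    return 1 if rack_a == rack_b else 3   # leaf -> any spine -> leaf

for a, b in [(0, 0), (0, 1), (0, 9)]:
    print(f"racks {a}->{b}: tree={tree_hops(a, b)} hops, "
          f"leaf-spine={leaf_spine_hops(a, b)} hops")
```

Every extra hop adds queuing and serialization delay, so flattening the fabric directly cuts east-west latency.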

Link aggregation groups (LAGs) can be put in place to get around some of STP’s issues, but they are really only suitable for a purely physical architecture: things get too complicated once virtualization is thrown into the mix, as the sketch below suggests.
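For context, a LAG typically spreads flows across its member links using a hash of packet headers, so that each flow stays pinned to one link and packets are not reordered.  A minimal sketch of that selection logic, with hypothetical header fields and link count:

```python
# Minimal sketch of how a LAG typically picks a member link: a stable
# hash of flow headers, so a given flow always uses the same link.
# The header fields and link count are illustrative assumptions.

import zlib

def lag_member(src_mac: str, dst_mac: str, num_links: int = 4) -> int:
    """Pick a member link for a flow via a stable header hash."""
    key = f"{src_mac}-{dst_mac}".encode()
    return zlib.crc32(key) % num_links

print(lag_member("00:00:00:00:00:01", "00:00:00:00:00:02"))
```

Once virtualization enters the picture, hundreds of virtual MAC addresses sit behind each physical port and migrate between hosts, so this kind of static, per-switch pinning becomes very hard to keep consistent.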

A move to a cloud platform involves massive virtualization and a need for the network to keep pace with dynamic changes at the server and storage levels.  STP and LAGs are not the solution here; this is where software-defined networking (SDN) really kicks in.

SDN abstracts the management and control planes of switches from the data plane.  This allows networks to be flattened, cutting back on the up-and-down journeys through the hierarchy that east-west traffic otherwise has to make, and allows physical ports to be aggregated and partitioned as virtual ports at will.  A cloud platform built on SDN can therefore be far better optimized: available network bandwidth can be driven to higher utilization levels; higher-priority traffic can be given greater bandwidth; and different data types can have quality and priority of service applied through software rules, rather than through more constraining firmware-based switch operating systems.
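As one concrete illustration of such software rules, here is a minimal sketch assuming a Ryu-based OpenFlow 1.3 controller (my choice of open-source framework, not one named above) and a switch on which queue 1 has already been configured out of band.  It installs a flow entry steering DSCP-marked, high-priority traffic into that queue:

```python
# Minimal sketch, assuming the Ryu OpenFlow 1.3 controller framework
# and a switch where queue 1 has already been configured out of band.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class PriorityQoSApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match IPv4 packets marked DSCP EF (46), i.e. expedited traffic.
        match = parser.OFPMatch(eth_type=0x0800, ip_dscp=46)
        # Put them in queue 1, then forward as the switch normally would.
        actions = [parser.OFPActionSetQueue(1),
                   parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=200,
                                      match=match, instructions=inst))
```

The point is that this policy lives in controller software and can be changed on the fly, rather than being baked into each switch’s firmware.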

The main problem at the moment is that awareness of SDN is low in IT circles, and the technology is essentially unheard of at the business level.  The good news is that the vast majority of network vendors have adopted SDN, mainly through an industry standard called OpenFlow, and therefore, just through natural replacement of existing network equipment, SDN capabilities will become available.

SDN is also not a forklift upgrade: it is backward compatible with existing switches, so it can be used in a mixed environment of old and new equipment.

Cloud computing is probably the most important change to enterprise computing since the arrival of client/server.  However, if the basic foundations of the network are neglected, the result will just be a move from one bad platform to another.  SDN provides the needed foundations, and it is available to all, now.

Image credit: Don McCullough (flickr)

