As I go round the event circuit, there is always a buzz around the latest topic du jour. Vendors love something new – it gives them a relatively blank sheet on which to start a new marketing campaign, and a chance to try to take the high ground against the competition. The obvious ones over the past few years have been cloud computing and Big Data (it always seems to need the capital letters, otherwise people may not believe that it is really “Big”). The latest vendor fad seems to be “software defined” something or other. So what is the reality behind the hype?
Let’s start with the daddy of them all – software defined networks (SDN). Although the term was first coined in 2009, it was only during 2011 that the market really woke up to the idea of replacing proprietary network equipment control planes with software that could be updated as the technology changed, allowing the hardware to be used for a longer period. SDN also allows for higher levels of interoperability between different makes of equipment – a good thing, definitely. The majority of network hardware vendors have now bought into SDN, and it is beginning to be deployed in the real world. However, Europe seems to be lagging a little behind the US in its awareness of SDN: at a recent networking event in London, no one admitted to having heard the term at all.
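For readers who prefer to see the idea rather than read about it, the core of the SDN pitch can be sketched in a few lines: one software control plane programming switches from different vendors through a single shared interface. This is a deliberately toy illustration – the class and method names are my own invention, not any real SDN or OpenFlow API.

```python
# Toy sketch of the SDN premise: the control plane lives in software,
# and heterogeneous hardware is driven through one common interface.
# All names here are illustrative, not a real controller API.

class FlowRule:
    """A forwarding decision: match some traffic, apply an action."""
    def __init__(self, match, action):
        self.match = match      # e.g. {"dst_ip": "10.0.0.5"}
        self.action = action    # e.g. "forward:port2"

class Switch:
    """The common interface the controller programs against."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []
    def install(self, rule):
        self.flow_table.append(rule)

# Two "vendors": the boxes differ, the control interface does not.
class VendorASwitch(Switch): pass
class VendorBSwitch(Switch): pass

class Controller:
    """The software-defined control plane, upgradable independently
    of the hardware it manages."""
    def __init__(self, switches):
        self.switches = switches
    def push_policy(self, rule):
        for sw in self.switches:
            sw.install(rule)

fleet = [VendorASwitch("edge-1"), VendorBSwitch("core-1")]
ctrl = Controller(fleet)
ctrl.push_policy(FlowRule({"dst_ip": "10.0.0.5"}, "forward:port2"))
```

The point of the sketch is the shape, not the detail: because the control logic sits in the `Controller` rather than inside each box, upgrading it is a software change, and any vendor’s kit that speaks the common interface can join the fleet.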
Then, in a recent conversation with a storage vendor, software defined storage (SDS) came up. Based on the same premise, the idea is to take all the storage controls and put them into upgradable, flexible software, abstracting the storage capabilities from the hardware to a greater extent than has been achieved before. Again, I like this – the impact of virtualisation, cloud computing and the need for data flexibility across hybrid private/public cloud environments means that storage has to change with the times, and being able to do this at the software rather than the hardware level points to greater flexibility.
Then, you have the software defined data centre (SDDC). Bandied around by EMC and VMware (still bosom buddies with a seemingly interchangeable management structure), this is gaining traction amongst commentators – even if the end-users still seem to be somewhat bemused by the whole thing. SDDC is meant to abstract the serving of business capabilities away from the nuts and bolts of the data centre plumbing. Again, laudable, and something that the world has been trying to do (unsuccessfully) ever since the first data centre was built.
I’m sure that deep in the bowels of various tech labs, bearded geeks in flip-flops (OK – showing my age; these days, more like pimpled youths in Nikes) are dreaming up more SDx three-letter acronyms (TLAs). Either that, or the same people are being told by marketing: “Hey, we’ve trademarked SDQ – quick, come up with something for us to use!”
However, my worry is that the teams working on these various SDx strategies are not necessarily speaking to each other. Surely an SDDC is dependent on the capabilities of SDS and SDN? Without the SDS guys talking to the SDN guys, and ensuring that there is commonality between the different approaches, how can SDDC be functionally efficient or effective?
Is the problem that we are really heading for SDC – software defined chaos? We’ve seen it before in how the web evolved, with different browsers going their own ways and different standards groups coming up with slightly different ways of doing things. We’ve also seen it more recently with cloud computing. If the SDx teams do not act now to stop an explosion of differences, then each vendor will end up with its own – proprietary – implementation, requiring an over-arching means of tying everything together.
Anyone for EAI?