Is Software-Defined Storage A Solution In Search Of A Problem?

Software-defined storage, or SDS, seems such a great idea. After all, when we are faced with a new project most of us still think in terms of how many drives to buy for it, yet we can never know what that project will need as it evolves, so we almost always make the wrong decision.

If we are lucky, that wrong decision is over-provisioning rather than under-provisioning, and we can write it off as “room for growth”. It would be better, though, if the storage side were as flexible and agile as the rest of IT is becoming, thanks to the same technologies that also underpin cloud computing: virtual machines, virtual networks and so on. What is missing in most cases is an equivalent level of virtual storage.

SDS has a lot going for it, in theory at least. Like SDN and all the other SD-somethings, it promises to simplify administration, reduce costs and bring greater agility. It is, as a cloud storage exec recently described it to me, “the opposite of hardware-bound storage.” The problem is that it is not clear whether any of those promises are actually realistic – SDS might work for technically proficient early adopters, say, but what of the mainstream?

Perhaps the classic example of SDS is storage virtualization, which provides the insulating layer or shim that breaks the direct link between logical volumes and physical storage. Just as the hypervisor does for virtual machines, it makes your logical volumes independent of the underlying hardware. Done right, that means they can be replicated, mirrored, migrated and so on, all transparently to the file system, the servers and the applications.
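To make that idea a little more concrete, here is a minimal, purely illustrative sketch in Python of what such a shim does: a mapping layer owns the relationship between a logical volume and whatever physical back-end currently holds its blocks, so data can be mirrored or migrated without the host ever noticing. The class and method names are mine, invented for illustration; no real product is anywhere near this simple.

```python
# Conceptual sketch only: a thin mapping layer between logical volumes and
# physical back-ends. All names here are illustrative, not any vendor's API.

class PhysicalBackend:
    """Stands in for a disk array, JBOD or cloud bucket."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # block number -> data

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks.get(block)


class LogicalVolume:
    """What the file system and applications see: a stable volume identity
    whose blocks can live on, and move between, any back-end."""
    def __init__(self, name, backend):
        self.name = name
        self.backend = backend    # current physical home
        self.mirror = None        # optional second copy

    def write(self, block, data):
        self.backend.write(block, data)
        if self.mirror:           # transparent mirroring
            self.mirror.write(block, data)

    def read(self, block):
        return self.backend.read(block)

    def migrate_to(self, new_backend):
        """Move the data; the volume's identity, and so the host's view,
        does not change."""
        for block, data in self.backend.blocks.items():
            new_backend.write(block, data)
        self.backend = new_backend


# Example: migrate a volume from an old array to a new one, invisibly to the host.
old_array = PhysicalBackend("old-array")
new_array = PhysicalBackend("new-array")
vol = LogicalVolume("vol0", old_array)
vol.write(0, b"hello")
vol.migrate_to(new_array)
assert vol.read(0) == b"hello"    # same volume, different hardware underneath
```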

Yet open systems storage virtualization has met with relatively little acceptance. How can that be, when the mainframe world has used all sorts of virtualization (including storage) for decades, and the likes of IBM, EMC, NetApp and Hitachi have all implemented significant levels of storage virtualization within at least some of their storage systems? (I do hope it’s not down to a lack of public awareness, because I have been writing about the topic off and on for rather more than 10 years.)

To give you an idea of just how far behind the curve storage virtualization is, in a recent presentation to the European storage industry, IDC’s storage consulting director Nick Sundby included a slide that listed ten hot technologies along with his estimates of how long each of them spent in an ‘early adopter’ stage before going mainstream. The modal time for the early adopter stage was four to five years, but for storage virtualization it was more like eight years – and even that might be an underestimate, because his chart only began in 2006 and I know it was around with some early adopters before then.

Part of the reason was that for much of that time, open systems storage virtualization was a solution in search of a problem. Yes, those early adopters recognized that they had problems that it could solve – cost reduction, simpler admin, and so on – but for most people, it was too complex an investment and the returns too unclear.

One result is that almost every modern implementation of storage virtualization is a tinned one, by which I mean it is packaged as part of a largely closed hardware appliance or subsystem, not as an open software-defined system. For instance, of the two start-ups that got the most attention for their storage virtualization software in the early 2000s, one – Falconstor – pivoted not long afterwards. It instead hid the technology as a key enabling layer within dedicated appliances that solve specific problems, such as virtual tape libraries, data migrators and network storage servers.

The other, DataCore, carried on developing its SANsymphony software as a management platform for storage area networks. However, it too recently added hardware-based appliances, partnering with Dell in the US and with Fujitsu in Europe. The implication was that while early adopters and system integrators might be willing to buy software and build themselves an open virtualizing SAN controller, DataCore had concluded that to reach the mass market it needed to offer complete (and comparatively closed) ready-to-go storage systems.

Will SDS, SDN and the other SD-somethings as a whole follow suit? Sometimes it looks that way – several of the highest-profile implementations are nominally based on open standards, but actually look very much hardware-dependent. And yes, there is OpenFlow for SDN, but how many organizations have the time and resources to implement it themselves? Most of us will probably look instead for ready-made boxes to do it for us.

Are we therefore starting to (re)discover the limitations of the open systems approach, where the complexity involved in integrating all those disparate elements begins to outweigh the advantages of openness and multi-vendor sourcing? Well, maybe – but even if we are, I can’t see it derailing the SD-train. The management advantages are just too great, and as the SDS developers build ever closer links with the top hypervisors, the need for that storage virtualization layer is going to become ever more obvious.

About the author
Bryan Betts