It’s hard to miss that this is the year of SSD, or solid-state disk. There is a certain irony therefore – or possibly, given its location, schadenfreude – in an event being ‘themed’ to keep up with the latest trends, only to find out that the new theme is already out of date.
So it was that the big story at Powering the Cloud – this year’s title for the Storage Network World Europe expo and conference in Frankfurt, Germany – was not the cloud but SSD, and indeed solid-state storage in general. From SSDs in everything from laptops to enterprise storage arrays, through solid-state acceleration cards for servers and memory caches for SANs, to hugely fast solid-state storage arrays for Tier 1 applications, the list seemed almost endless.
It is all down to consumer technology, of course. The flash memory used in enterprise gear is generally a higher grade than the flash that goes into digital cameras, USB sticks, mobile phones and pretty much any other electronic gadget you care to name, but the sheer volume of production translates to sector-wide economies of scale. That means enterprise SSDs too are becoming bigger and more affordable.
When properly implemented, the addition of a tier of enterprise SSD (today this mostly means flash memory, though there are other types both in use and under development) can yield significant benefits. According to Randy Kerns, senior strategist with analysts the Evaluator Group, replacing as little as two to four percent of a storage system’s capacity with SSD can double its performance. “I believe that can go up to five times with some code changes, for example to the depth of command queues,” he added. This in turn should speed up I/O-bound applications, of course.
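Kerns' figure is plausible if you assume a skewed access pattern, where a small 'hot' slice of capacity absorbs a large share of the I/O. The back-of-envelope sketch below illustrates the idea; the latency figures and the hot-I/O fraction are illustrative assumptions, not numbers from the talk:

```python
# Back-of-envelope model of average I/O latency for a two-tier system.
# All figures are illustrative assumptions, not numbers from the article.

HDD_LATENCY_MS = 8.0   # assumed random-read latency of a fast spinning disk
SSD_LATENCY_MS = 0.2   # assumed flash read latency

def avg_latency_ms(hot_io_fraction):
    """Average latency when `hot_io_fraction` of requests hit the SSD tier.
    A tier holding only a few percent of capacity can absorb a large share
    of requests if access patterns are skewed towards hot data."""
    return (hot_io_fraction * SSD_LATENCY_MS
            + (1 - hot_io_fraction) * HDD_LATENCY_MS)

baseline = HDD_LATENCY_MS          # everything served from disk
tiered = avg_latency_ms(0.5)       # small SSD tier absorbs half of all I/O
print(round(baseline / tiered, 2)) # speedup factor, roughly a doubling
```

Under these assumptions the average latency drops from 8 ms to about 4.1 ms, in line with the doubling Kerns describes; a higher hot-I/O fraction, or the code changes he mentions, would push the factor further.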
Several speakers made the point that SSD is just the first generation of solid-state storage, and that its performance is artificially limited by the need to emulate a spinning hard disk in order to work in storage arrays that were designed to accept tiers of disks – fast disks for ‘hot’ data, then slower but fatter ones for less performance-critical data. The next generation of arrays will be designed to address solid-state directly, and should offer much greater performance boosts.
The implications for the network are considerable. Faster networked storage means more data moving around and at higher speeds. And if storage is no longer the application bottleneck, something else will be – either the CPU or the network.
Plus, removing storage latency could be all for nothing if distance latency – especially where a WAN is involved – means that the application server misses the benefit. WAN optimisation tools can help here, of course.
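The arithmetic here is unforgiving: once a WAN round trip sits in the request path, it can dwarf any storage-side gain. A rough sketch, with illustrative latency assumptions:

```python
# Rough model: end-to-end request time = WAN round trip + storage access.
# All latency figures are illustrative assumptions.

WAN_RTT_MS = 40.0   # assumed round trip across a WAN link
HDD_MS = 8.0        # disk-backed storage access time
SSD_MS = 0.2        # flash-backed storage access time

before = WAN_RTT_MS + HDD_MS   # end-to-end with disk
after = WAN_RTT_MS + SSD_MS    # end-to-end with flash
print(round(before / after, 2))
```

Under these assumptions a 40x improvement at the storage layer shrinks to roughly a 1.2x improvement end to end, which is why WAN optimisation matters alongside the storage upgrade.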
One other challenge is that the range of solid-state options is almost dizzying, and with a new supplier coming along pretty much every month, it shows no sign of simplifying any time soon.
Perhaps the biggest practical differentiator among all these offerings is where they are designed to sit, which in turn is a function of the workloads they are intended to accelerate. Some hang off the server, which is great for latency but not for sharing. The rest are designed for shared access in the network, either on the storage side, in (or alongside, and pooled with) the disk arrays, or on the server side, acting as a proxy of sorts.
This is as much a network optimisation challenge as it is a storage or applications issue. It means new network usage and traffic patterns, and developing a better understanding of the effects of where in the network you place your data.
Image: Sander van der Wel (flickr)