Get Ready For The Year Of Solid-State Storage

If this year’s CES in Las Vegas is anything to go by, 2013 is shaping up to be the year of professional solid-state storage. That might sound like a matter just for your storage team, but like so many other technology innovations before it, it will trigger changes right across IT, from servers to the WAN.

The C in CES stands for Consumer, of course, and it was the tablets, smartphones, and smart TVs that caught the headlines. But as BYOD and other hot topics make clear, one of the biggest sources of innovation today is the intersection of consumer and professional technology. It shouldn’t be surprising, therefore, that CES features quite a bit of enterprise technology as well — and this year a lot of it was built around solid-state storage.

While solid-state storage made a lot of headlines last year and was picked up by many early adopters, it still suffered from two false assumptions: first, that it’s mostly a consumer technology, and second, that it’s a replacement for disk drives — hence the better-known acronym SSD, for solid-state disk.

This year should be different, for a couple of reasons. First, more and more companies are developing enterprise-grade storage boxes using solid-state technology. Some are even solid-state only, with no spinning disks to complicate things. Second, they are increasingly treating solid-state technology as what it is, not as a pretend hard disk.

Treating solid-state differently is important because, while SSDs are a great upgrade for an existing PC or server, they carry layers of unnecessary architectural complexity. Modern operating systems do understand that SSDs differ from spinning disks: they know, for example, not to defragment an SSD or waste time reordering the read/write queue to minimize an SSD’s non-existent rotational latency. However, they still expect their disks to, well, look like disks. So each SSD must also contain a highly sophisticated controller to manage its Flash memory or RAM and map it to behave like a SATA or SAS hard disk, complete with a physical SATA or SAS interface. That’s a lot of unnecessary context switching and added latency.
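
To put a number on that rotational latency, and to show how an operating system tells the two kinds of device apart in the first place, here is a minimal Python sketch. It assumes a Linux host, where the kernel exposes a per-device rotational flag through sysfs; the device name "sda" is just a placeholder for your own hardware.

```python
# Minimal sketch (assumes a Linux host): the kernel reports whether a block
# device rotates via sysfs, which is how an OS knows to skip rotational-latency
# optimizations such as defragmentation for SSDs.
from pathlib import Path

def is_rotational(device: str) -> bool:
    """Return True if the kernel reports the device as a spinning disk."""
    flag = Path(f"/sys/block/{device}/queue/rotational").read_text().strip()
    return flag == "1"

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    return (60_000 / rpm) / 2

if __name__ == "__main__":
    # "sda" is a hypothetical device name; substitute one from your system.
    for dev in ("sda",):
        kind = "spinning disk" if is_rotational(dev) else "SSD"
        print(f"{dev}: {kind}")
    # A 7,200 RPM disk spends ~4.17 ms per access just waiting for the
    # platter; flash has no platter, so this term is simply zero.
    print(f"7200 RPM avg rotational latency: {avg_rotational_latency_ms(7200):.2f} ms")
```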

One alternative for server use is to put Flash memory on a PCIe board, as OCZ and others have done. While that removes the storage bus, it still leaves the SATA or SAS protocol stack consuming resources. It can be a great way to soup up the virtualization host that runs your software appliances such as Silver Peak WANop, though.

Storage designers can also put Flash and/or DRAM on a PCIe card and use it as a form of cache, as Marvell has done with its new DragonFly NVDrive — one of those enterprise products premiered at CES — or as Fusion-io does with its ioMemory devices. As with PCIe-based SSDs, the cache approach needs drivers added to the operating system, but it can be a good fit for data- and transaction-heavy applications.
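
Vendors keep their actual controller logic in firmware and drivers, but the idea behind a flash cache layer is easy to sketch. What follows is a conceptual Python illustration, not any vendor’s implementation: a write-through cache with least-recently-used eviction, where a small fast tier fronts a larger, slower backing store.

```python
# Conceptual sketch only: a write-through cache with LRU eviction, standing in
# for flash or DRAM placed in front of slower backing storage. Real products
# implement this in drivers and firmware, not application code.
from collections import OrderedDict

class WriteThroughCache:
    """Toy model of a fast tier (flash/DRAM) in front of a slow tier (disk)."""

    def __init__(self, backing_store, capacity):
        self.backing = backing_store   # stands in for the spinning disk
        self.capacity = capacity       # stands in for flash capacity
        self.cache = OrderedDict()     # insertion order doubles as an LRU list

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)    # cache hit: mark recently used
            return self.cache[key]
        value = self.backing[key]          # cache miss: go to the slow tier
        self._insert(key, value)
        return value

    def write(self, key, value):
        self.backing[key] = value          # write-through: slow tier stays current
        self._insert(key, value)

    def _insert(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry

# Example: a 2-entry "flash" cache in front of a dict standing in for disk.
disk = {"a": 1, "b": 2, "c": 3}
cache = WriteThroughCache(disk, capacity=2)
print(cache.read("a"), cache.read("b"))   # two misses, both now cached
cache.write("c", 30)                       # written through to disk, evicts "a"
print(disk["c"], "a" in cache.cache)       # -> 30 False
```

Write-through keeps the backing store authoritative at all times, which is one reason it is a popular choice when the cache tier is volatile DRAM rather than flash.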

Lastly — and this is where it gets even more interesting from a networking point of view — there are now solid-state storage devices that sit on the SAN or data center network, but which do not use SSDs. Instead, these all-silicon arrays — whether from established names such as Hitachi Data Systems or IBM-owned Texas Memory Systems, or from start-ups like Nimbus Data Systems, Skyera (which won a top storage award at CES) and Violin Memory — aim to leverage the inherent advantages of Flash memory or RAM while minimizing the performance-sapping layers of legacy technology.

Where will these fit, and what will it mean from a networking perspective? Well, if you are one of the early adopters who has already picked up solid-state storage, you will know some of this already. We are talking about improved performance and greater capacity for virtual servers, virtual appliances, and of course database servers.

Assuming the LAN and WAN capacity is there, this will translate into increased uptake and usage of new technologies and strategies such as Big Data, BYOD, and private cloud. It could also play a key part in extending and hybridizing some of those technologies for even greater business benefit and flexibility, which is a topic I plan to return to in my next column.

Image credit: scanrail (123RF Stock Photo)