Storing Up a Load of Nonsense

We’ve had the server wars, with crazy speeds-and-feeds figures spouted by vendors. We’ve had the network wars, with stories of how just throwing more bandwidth at a problem solves everything. And now we seem to be in the middle of the storage wars, where vendors throw numbers and “facts” around, muddying the waters and causing confusion in the market.

The advent of flash-based storage seems to be at the root of this. Don’t get me wrong: I am a firm believer in flash-based storage and the impact it could have on the storage world. But I am getting a little fed up with the approach some vendors are taking.

My main gripes? Read on…

1) IOPS, or input/output operations per second. This should be a relatively good way of comparing one storage system against another. However, some vendors quote internal IOPS, the speed at which data can be moved within the array itself; as soon as you move off the array, the figure drops alarmingly due to poor controller architectures or other bottlenecks. The IOPS figure can also be massaged through the choice of block size, and vendors will generally choose the block size that suits their equipment, which is unlikely to match your real-world workload. When talking to a storage vendor, make sure they provide meaningful IOPS figures that let you compare like with like (the first sketch after this list shows how much the block size alone matters).

2) Capacity. A terabyte (TB) is a terabyte, yes? Well, it never actually has been, but at least all vendors have played the same approximate game when comparing what they each count as a TB. Now, however, too many flash-based vendors are quoting “effective” capacity, using intelligent compression and data deduplication to cut the amount that actually needs to be stored by up to 80%. This lets them claim price parity with equivalent spinning disk, but only if the spinning disk hasn’t applied the same compression and deduplication. Apply the same data reduction to both platforms and you still need the same raw capacity of flash-based storage, which can have an alarming impact on price. Again, make sure the vendor is comparing like with like (the capacity arithmetic after this list makes the trick plain).

3) Lifecycle management. I have had astonishing discussions with flash-based storage vendors who believe their product does away with any need for storage tiering. Their flash is so fast, the argument goes, that it takes over everything from tier 3 upwards in one resource pool; they might make an allowance for something long-term such as tape for deep archival, but that’s all. Pointing out that this makes their current portfolio the ultimate product, in the correct sense of the word, leaves them nonplussed. If flash is so fast, surely tiering isn’t needed? But if your next generation of flash is faster than this generation, then tiering is automatically needed again. Make sure the vendor understands that tiering is a necessity, and make sure they have plans to support it intelligently (the tiering sketch after this list shows how naturally a new generation slots in).

4) All-flash, hybrid flash, and server-side flash. There are storage workloads, and then there are storage workloads. Some are better suited to a system that uses flash as an advanced cache; others will be better served by an all-flash array. Some may need server-side flash, such as PCIe cards from Fusion-io. Bear in mind, however, that whereas SAN- or NAS-based flash can be virtualized, doing the same for server-based flash is rather more difficult. Either architect to avoid the need for server-side flash virtualization, or look to a vendor such as PernixData to provide a server-side flash hypervisor that adds the intelligence needed to mitigate the issue (the latency sketch after this list shows why the cache hit rate decides which architecture fits).
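
On the IOPS point, here is a quick back-of-envelope sketch in Python. Every figure in it is invented for illustration and comes from no real vendor; it simply shows that the same array bandwidth yields wildly different IOPS claims depending on the block size the benchmark uses:

```python
# Back-of-envelope sketch: all numbers are invented for illustration,
# not taken from any vendor. The same sustained array bandwidth produces
# very different "IOPS" figures as the benchmark block size changes,
# which is why an IOPS number without a block size is meaningless.

ARRAY_BANDWIDTH_MBPS = 2_000  # hypothetical sustained controller bandwidth

def iops(bandwidth_mbps: float, block_kb: float) -> float:
    """IOPS when bandwidth is the bottleneck: (MB/s * 1024 KB/MB) / KB per op."""
    return bandwidth_mbps * 1024 / block_kb

for block_kb in (4, 8, 32, 64):
    print(f"{block_kb:>3} KB blocks -> {iops(ARRAY_BANDWIDTH_MBPS, block_kb):>9,.0f} IOPS")

# Prints:
#   4 KB blocks ->   512,000 IOPS
#   8 KB blocks ->   256,000 IOPS
#  32 KB blocks ->    64,000 IOPS
#  64 KB blocks ->    32,000 IOPS
```

A vendor benchmarking at 4 KB can quote a figure sixteen times higher than the same box would show under a 64 KB workload.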
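
On the capacity point, the arithmetic is easy to sketch. The prices and the 5:1 reduction ratio below are invented purely for illustration:

```python
# Illustrative sketch with made-up prices and ratios: "effective" capacity
# only makes flash look price-competitive while the comparison disk array
# is denied the same data reduction. Apply the same 5:1 ratio (roughly the
# "up to 80% less stored" claim) to both platforms and the gap reappears.

WORKLOAD_TB = 100           # logical data to be stored (hypothetical)
REDUCTION = 5.0             # 5:1 dedupe + compression, i.e. ~80% less stored
FLASH_PRICE_PER_TB = 5_000  # invented $/raw-TB figures, illustration only
DISK_PRICE_PER_TB = 1_000

def raw_tb_needed(logical_tb: float, reduction_ratio: float) -> float:
    """Raw capacity required once data reduction is applied."""
    return logical_tb / reduction_ratio

# The vendor's pitch: reduced flash against *unreduced* disk
flash_cost = raw_tb_needed(WORKLOAD_TB, REDUCTION) * FLASH_PRICE_PER_TB
naive_disk_cost = WORKLOAD_TB * DISK_PRICE_PER_TB
print(f"pitch:  flash ${flash_cost:,.0f} vs disk ${naive_disk_cost:,.0f}")

# Like with like: the same reduction on both platforms
fair_disk_cost = raw_tb_needed(WORKLOAD_TB, REDUCTION) * DISK_PRICE_PER_TB
print(f"fair:   flash ${flash_cost:,.0f} vs disk ${fair_disk_cost:,.0f}")

# pitch:  flash $100,000 vs disk $100,000   -- "we match disk on price!"
# fair:   flash $100,000 vs disk $20,000    -- the real gap
```

With the reduction applied only to flash, the two platforms look identical on price; applied to both, the disk system comes out five times cheaper in this made-up example.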
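
On tiering, a minimal placement-policy sketch shows why tiering outlives any single generation of flash. The tier names and activity thresholds are invented; the point is that a faster next-generation medium simply slots in at the top and pushes today’s flash down a rung:

```python
# Minimal tiering-policy sketch with invented tiers and thresholds.
# Media are ranked fastest-first; next-generation flash simply joins
# at the top and pushes current flash down a rung -- tiering does not
# go away, it just gains a layer.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    min_accesses_per_day: int  # hypothetical promotion threshold

TIERS = [  # ordered fastest (and most expensive) first
    Tier("tier 0: next-gen flash", 1_000),
    Tier("tier 1: current flash", 100),
    Tier("tier 2: spinning disk", 1),
    Tier("tier 3: tape archive", 0),
]

def place(accesses_per_day: int) -> str:
    """Return the fastest tier whose activity threshold the data meets."""
    for tier in TIERS:
        if accesses_per_day >= tier.min_accesses_per_day:
            return tier.name
    return TIERS[-1].name

for activity in (5_000, 250, 3, 0):
    print(f"{activity:>5} accesses/day -> {place(activity)}")
```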
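
And on architecture choice, a simple latency model shows why workload shape matters. The figures are invented; the shape of the result is what counts, and a hybrid array only approaches all-flash latency when the cache hit rate is high:

```python
# Rough latency model with invented figures: the average read latency of
# a hybrid array (flash as cache in front of disk) depends heavily on the
# cache hit rate, which is a property of the workload, not the array.

FLASH_US = 200   # hypothetical flash read latency, microseconds
DISK_US = 8_000  # hypothetical spinning-disk read latency, microseconds

def hybrid_latency_us(hit_rate: float) -> float:
    """Average read latency: hits served from flash, misses from disk."""
    return hit_rate * FLASH_US + (1 - hit_rate) * DISK_US

for hit_rate in (0.99, 0.90, 0.60):
    print(f"hit rate {hit_rate:.0%}: hybrid {hybrid_latency_us(hit_rate):,.0f} us"
          f" vs all-flash {FLASH_US} us")

# hit rate 99%: hybrid 278 us vs all-flash 200 us
# hit rate 90%: hybrid 980 us vs all-flash 200 us
# hit rate 60%: hybrid 3,320 us vs all-flash 200 us
```

A cache-friendly workload barely notices the disk behind the flash; a cache-hostile one pays near-disk latency however much flash the hybrid carries.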

These are my main bugbears in the current storage market. I could go on (and probably will in another post). What it really shows, however, is that the Romans were right: it is a case of caveat emptor (buyer beware).

Image credit: Ian Barbour (flickr)

About the author
Clive Longbottom