ORIGINALLY POSTED January 21st 2008
13,232 views on developerworks
I’m in the middle of another take on the “Why Virtualize?” story, looking at it from a subtly different angle; more on that later this week, hopefully. But first I have to quickly delve a little deeper into the concept of effective tiering.
Flash-based disks have been a tantalizing prospect up until now, and we all knew that once someone worked around the poor write performance and reliability issues they would be a game changer in the longer term. It’s almost as if these disks have more IO/s than most of us know what to do with. EMC have gone for the capacity and reduced response time play, which is probably all any of us can do with what we have today. This means the controller itself will be the bottleneck, and will only give you a fraction of the actual IO/s that each of these SSDs can achieve. It will be interesting to see, if EMC actually release numbers, how well they do in the short term. Until the price comes down and there is some serious competition on the drive vendor side of things, it’s probably only the high end Fortune 100 or so that are likely to be interested in Tier0 in this use, i.e. some capacity that goes very, very fast. There are many other areas that can benefit from storage capacity that provides IO/s throughput somewhere between the speeds of RAM and traditional magnetic media.
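To put that “between RAM and magnetic media” gap in rough numbers, a device servicing one IO at a time can deliver at most 1/latency IO/s. The latencies below are my own ballpark, 2008-era illustrative figures, not vendor specifications:

```python
# Rough single-queue IOPS ceiling: with one outstanding IO at a time,
# throughput is bounded by 1 / access latency.
# Latencies are ballpark illustrative figures only.
latencies_s = {
    "RAM": 100e-9,         # ~100 ns
    "SSD (read)": 100e-6,  # ~100 us
    "15k RPM disk": 5e-3,  # ~5 ms seek + rotational delay
}

for tier, lat in latencies_s.items():
    iops = 1.0 / lat
    print(f"{tier:>14}: ~{iops:,.0f} IO/s")
```

Even these crude figures show SSDs sitting a couple of orders of magnitude either side of RAM and spinning disk, which is exactly the gap a new tier can fill.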
Tiering, though: SATA, SAS, FCAL, SSDs, do you really want it all in one big expensive box? Not really. You want a much lower price point on the hardware for your SATA controllers and backup/archive space. You want some midrange SAS and FCAL disks and controllers for your day-to-day user applications: email, development, testing and so on. You want some fast SAS or FCAL drives for higher throughput requirements, and at the top end you want some SSDs for those business critical, high speed databases. You don’t want to be paying the same footprint or processing overheads for all of these. Most importantly, you want a higher level of control: what’s hot today is not hot tomorrow, so you need the ability to migrate data between all your tiers. You don’t want to stick it all behind one big enterprise box; you need to be able to sell a different price point between tiers to your internal customers. Buy a single box to do it all and every tier costs essentially the same (there’s not much difference between 10K and 15K RPM, and a bit of a break on SATA, but the ‘hosting’ is still in the big frame, so the overheads outweigh the savings). You have to buy a DS8000, DMX or USP to do your Tier0/Tier1 and zOS work, buy a DS4000 for your Tier 2, and buy a SATA or SAS DS3000 or other cheap box for your Tier 3.
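That shopping list amounts to a simple placement policy. As a sketch only (the workload class names and tier labels here are my own inventions for illustration, not any product’s terminology):

```python
# Illustrative mapping of workload class to storage tier, following
# the breakdown in the text. Names are invented for this sketch.
TIER_FOR_WORKLOAD = {
    "critical_database": "Tier0: SSD",
    "high_throughput":   "Tier1: fast 15K RPM SAS/FCAL",
    "email_dev_test":    "Tier2: midrange SAS/FCAL",
    "backup_archive":    "Tier3: cheap SATA",
}

def place(workload: str) -> str:
    """Return the tier for a workload class, defaulting to the cheapest."""
    return TIER_FOR_WORKLOAD.get(workload, "Tier3: cheap SATA")

print(place("critical_database"))  # Tier0: SSD
```

The interesting part isn’t the lookup, of course; it’s that the answer changes over time, which is where migration comes in.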
That’s what we hear customers want, not one big monolithic box that does everything. Because it’s big and monolithic it isn’t cheap, and it should be used for what it’s best at: big monolithic work. But you need some way of getting data out of the box; you need something that can keep nearline backups still online, without wasting space inside your big box on nearline. You need something like SVC to do the migration for you, without disruption and without downtime. You can even pull data back into the bigger boxes should you need their power.
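The trick behind that kind of non-disruptive migration is that the virtualization layer owns the virtual-to-physical mapping, so data can move between tiers while the address the host sees stays constant. A minimal sketch of the idea, my own simplification and not SVC’s actual implementation:

```python
# Minimal sketch of virtualized tier migration: hosts address a
# virtual disk; the mapping to a physical tier changes underneath.
class VirtualDisk:
    def __init__(self, name, tier, extents):
        self.name = name
        self.tier = tier        # current backing tier
        self._extents = extents # stand-in for the on-disk extents

    def read(self):
        # Hosts always go via the virtual disk, whatever the tier.
        return self._extents

    def migrate(self, new_tier):
        # A real system copies extents in the background, then
        # atomically switches the mapping; here we just swap labels.
        copied = list(self._extents)  # background copy to new tier
        self.tier = new_tier          # remap: no host disruption
        self._extents = copied

vd = VirtualDisk("email01", "Tier2", ["extent0", "extent1"])
before = vd.read()
vd.migrate("Tier3")            # demote data that has cooled off
assert vd.read() == before     # host sees identical data after the move
```

The same swap works in the other direction, which is the “pull data back into the bigger boxes” case above.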
I’m not knocking what EMC have done with DMX, SSDs and SATA. I’m just questioning whether a single high end, expensive box should be tweaked to provide all your tiers, when there are many more suitable boxes (price and performance wise) for the lower tiers, and when, with virtualization, you can pick and choose what data lives where on a daily basis, if that’s what you desire.
I also don’t want to hear any FUD in response to this about the DS8000 being dead. It’s healthy, was running out the door in the fourth quarter, and continues to be heavily invested in. I’m just questioning the sole use of a monolithic controller; we have the distinct advantage of a product like SVC to ‘front’ all of your storage tiers, manage the tiers, and easily provision disks from them. I guess since EMC don’t have the same, they need to make the monolith do everything.