Morecambe and Wise

ORIGINALLY POSTED 3rd July 2008

13,783 views on developerworks

I thought after the 4.3.0 release I’d have a bit more spare time to post more often; however, I’m working on a couple of interesting projects that have really taken off over the last few months and I’m once again flat out, not to mention being asked to start benchmarking things already waiting to be added to the 4.3.1 code stream…

I do want to take some time to explain why most of what Mr Burke has tried to imply about SVC and its SEV is really just a somewhat unsurprising FUD posting. I wouldn’t have expected anything else… I’d prefer to liken the two Barrys more to Morecambe and Wise – with me being the latter, obviously 😛 lol

Meta-data space usage. Less than 1% means just that. It’s what we are recommending as a ball-park for customers to think about – mainly to make sure they realise there is an overhead. Mathematically it’s actually 0.1%.
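To put that ball-park in concrete terms, here’s a quick back-of-the-envelope sketch in Python. It’s purely illustrative – the 0.1% figure is the one quoted above, not a formal specification, and the function name and 2 TiB example vdisk are mine, not anything from the product:

```python
# Rough back-of-the-envelope estimate of SEV metadata overhead.
# ASSUMPTION: the ~0.1% figure quoted in this post; real overhead depends on
# grain size and how much of the vdisk has actually been written.
GiB = 1024 ** 3

def sev_metadata_estimate(vdisk_bytes, overhead_fraction=0.001):
    """Estimate metadata space for a space-efficient vdisk of this virtual size."""
    return vdisk_bytes * overhead_fraction

virtual_size = 2 * 1024 * GiB  # a 2 TiB space-efficient vdisk (example only)
print(f"Estimated metadata: {sev_metadata_estimate(virtual_size) / GiB:.1f} GiB")
# -> about 2 GiB, comfortably inside the "less than 1%" ball-park
```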

Performance: yes, I have done the tests; yes, I do know the numbers; and yes, I will make them known – but all in due course.

As for the total and utter FUD statements (I keep thinking we need a FUD 2.0 type acronym, as some vendors throw FUD around, and others are the masters of it) aimed at trying to discredit SVC’s ability to virtualize and SE-virtualize storage. Now, I am led to believe that most of DMX’s ability lies in its ‘secret sauce’ caching algorithms, detecting application workloads and switching modes to make the most of them – and that’s great if it knows your application. It’s not so great when it doesn’t, and there are a lot of applications out there that don’t have a unique signature of I/O patterns etc… In those cases you see DMX for what it can really achieve – for example, SPC-1 actively tried to negate such algorithms, which is what I personally believe to be one of the key reasons EMC withdrew from the Storage Performance Council and fails to submit their products for independent audit – but let’s not go there again just now…

Anyway, BarryB is trying to suggest that SVC does weird workloads that storage controllers cannot understand – but to the storage, SVC is just a host: something that is submitting blocks of data as writes, or requesting blocks of data for reads. I don’t see the issue? Maybe the real issue is that yet again BarryB is trying to protect the monolith and hang onto the paradigm that the controller is the only place to do stuff. With SVC we’ve proved that’s not true, and our many customers are benefiting every day from not having to buy big monoliths, and can tender for their needs across the vendors they desire.

His final point about dumbing down the storage is a perfect example of how you wouldn’t set up an SEV implementation. If you are doing large sequential reads and writes, then you’d pick a larger 256K grain size for starters. The data for a given vdisk will also come from the same physical array, contiguously, for the size at which you set the group extent size – in some cases up to 2GB before moving on to the next array. So the controllers at the back-end will see the sequential stream of 256K requests for at least 2GB; then, if they are good at detecting a sequential stream, they will jump to the next array and start pre-fetching for another 2GB… (Why would you be storing MP3s or videos on a DMX or DS8000 anyway? Even if you are a VoD customer, you want lots and lots of cheaper storage.)
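To make the arithmetic behind that concrete, here’s a small sketch – again illustrative only, using the 256K grain and 2GB extent figures from the example above rather than any fixed defaults:

```python
# How many contiguous 256K grains a back-end array sees before the stream
# moves to the next array, given the example grain and extent sizes above.
KiB = 1024
GiB = 1024 ** 3

grain_size = 256 * KiB   # grain size chosen for large sequential workloads
extent_size = 2 * GiB    # managed disk group extent size (example value)

grains_per_extent = extent_size // grain_size
print(f"{grains_per_extent} contiguous 256K requests per extent")
# -> 8192 sequential requests against the same array before moving on –
#    plenty for a controller's sequential-detect / pre-fetch logic to latch onto
```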

There are always going to be cases of workloads and applications that don’t work well with one grain size vs another – or even some cases where you may not want to use SEV at all. I explained all that in my posts, and I guess I could promote Mr Burke up to “Wise” status for spending the time to find a few examples of what not to do in order to help his FUD machine.
