ORIGINALLY POSTED 16th April 2008
14,534 views on developerworks
I keep making excuses for not posting much recently, but I really have been very busy. One of the things I started, what seems a long time ago now, was a paper describing why, how and what we implemented back in the November 2007 release of SVC with respect to Cache Partitioning.
Since no SAN is perfect, and you can’t always assume that a storage controller is going to behave nicely with its peers, or at least avoid hardware or other such failures, we decided it was worthwhile setting some limits on how far a single bad controller could influence the overall environment. Think of this as a dynamic and automatic throttling mechanism for when your host systems start asking too much of the physical spindles.
In general cases, it should make no difference to overall performance. It simply starts destage operations from the cache a little earlier when there is a lot of write data for a single partition (managed disk group) – the more worthwhile cache resource (i.e. read data) is not partitioned or limited.
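To make the idea concrete, here is a minimal sketch of that kind of per-partition write-cache throttling. All the names, the 80% high-water mark and the byte accounting are my own illustrative assumptions for this post, not the actual SVC internals:

```python
# Hypothetical sketch of per-partition write-cache throttling.
# Names and thresholds are illustrative assumptions, not SVC's real design.

class CachePartition:
    """Tracks outstanding write (dirty) data for one managed disk group."""

    def __init__(self, name, limit_bytes):
        self.name = name
        self.limit_bytes = limit_bytes  # write-cache share for this partition
        self.dirty_bytes = 0            # write data not yet destaged to disk

    def write(self, nbytes):
        # Host write lands in cache; it becomes dirty data for this partition.
        self.dirty_bytes += nbytes

    def destage(self, nbytes):
        # Dirty data written out to the physical spindles.
        self.dirty_bytes = max(0, self.dirty_bytes - nbytes)

    def should_destage_early(self, high_water=0.8):
        # Start destaging before the hard limit is hit, so one slow
        # controller cannot fill the shared cache with its own write data.
        return self.dirty_bytes >= high_water * self.limit_bytes

    def admit_write(self, nbytes):
        # Hard throttle: a write that would push the partition over its
        # limit must wait for destage, rather than taking cache away
        # from the other managed disk groups.
        return self.dirty_bytes + nbytes <= self.limit_bytes
```

Note that only write data is accounted against the partition; reads are untouched, matching the point above that the more worthwhile cache resource is never limited.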
Anyway I’d had a few questions from customers and account teams and thought it was worthwhile spending some time writing up why and what we did, plus a bit of how it works internally. (Without giving too much away!)
PS. As I understand it, the lack of cache partitioning (despite all Hu’s shouting about global caching) is one of the main reasons HDS don’t recommend you use any of that global cache for external storage under a USP / USP-V.
This is still in draft release, so any comments, or further questions I haven’t addressed, let me know and I’ll roll them in.