ORIGINALLY POSTED 28th February 2010
14,778 views on developerworks
There has been a lot of Twitter banter, and a lot of blog posts, about tiering: NetApp claiming you don't need to tier, EMC and IBM saying tiering is important, and 3PAR's Farley going so far as to say that NetApp can't do tiering easily, hence NetApp's response and the discussion of PAM as the way to resolve the problem.
I had always been led to believe that PAM (Performance Acceleration Module) was flash based, but when I did some googling this week, I was surprised to find it's basically a PCIe card with some DDR modules on it. No flash in sight.
Anyway, this makes me wonder why you don't just put more DDR in the main cache and get the maximum benefit for both read and write caching, not just read caching, as Nick Triantos explains in this blog post.
Why stick it all the way out on a PCIe link, rather than attach it directly to the CPU? Looking at the diagram further down Nick's post, you can see that it uses CPU bandwidth to read the data into the PAM. Having spent plenty of time with Intel-based CPUs in high-performance storage appliances, this makes no sense to me.
Think of the situation where you have to read some data into main memory to send out via FC or CIFS/NFS etc, and then copy that same data into the PAM. That takes twice the bandwidth of just doing a single read. Now, in a case where you don't read that data again for a while, it will be discarded from the PAM without ever scoring a hit. So all you are actually doing is HALVING the CPU data bandwidth with no gain at all. Hardly a Performance Acceleration Module; it looks more like a performance degradation module under this workload.
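To make the arithmetic concrete, here is a minimal back-of-envelope sketch of that argument. It assumes a simple model (mine, not NetApp's): every cache miss costs two memory transfers (one to read the data into main memory, one to copy it into the PAM), while every hit costs one. The function name and the 10 GB/s bus figure are illustrative, not measured.

```python
def effective_read_throughput(bus_bw_gbs: float, hit_rate: float) -> float:
    """Model effective read throughput through a read-through cache.

    bus_bw_gbs: raw memory bandwidth available, in GB/s.
    hit_rate:   fraction of reads served from the PAM (0.0 to 1.0).

    A hit moves the data once; a miss moves it twice (the original
    read plus the copy into the cache), so the average number of
    transfers per read is 1*hit_rate + 2*(1 - hit_rate).
    """
    transfers_per_read = hit_rate * 1 + (1 - hit_rate) * 2
    return bus_bw_gbs / transfers_per_read


# Worst case from the post: data is never re-read, so hit_rate is 0
# and the usable bandwidth is exactly halved.
print(effective_read_throughput(10.0, 0.0))  # 5.0 GB/s
print(effective_read_throughput(10.0, 1.0))  # 10.0 GB/s
```

With a hit rate of zero, throughput is exactly half the raw bus bandwidth, which is the "halving" the paragraph above describes; the cache only pays for itself once the hit rate climbs high enough to offset the copy cost.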
Ultimately it just strikes me as a very strange way to deploy some extra memory into a box. And why still only dual-core CPUs? Is there something in Data ONTAP that means it can't actually make use of a quad-core? This is a question more than a statement; I'd be happy to understand further, and to be corrected if there are quad-core based systems.
Back to the real point: SSDs are here. They are a step between DDR and HDD, but they are still expensive, so we do need to tier, and we do need to decide which data is placed where. That is why IBM's enterprise storage products, namely DS8000 and SVC, will be providing the necessary sub-LUN-level dynamic tiering functions in 2010.
Until we can afford enterprise SSD for all workloads (and even then nearline/SATA drives will still provide much higher densities) we need tiering. SVC has been providing heterogeneous tiering since its original release in 2003… now comes the next stage: sub-LUN tiering.
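The core idea behind sub-LUN tiering can be sketched in a few lines. This is a simplified illustration of extent-level heat ranking, not the actual DS8000 or SVC algorithm: track I/O counts per extent, then promote the hottest extents that fit into the SSD tier and leave the rest on HDD. All names and numbers here are hypothetical.

```python
def plan_promotions(extent_io_counts: dict, ssd_capacity_extents: int) -> set:
    """Pick which extents of a LUN to promote to the SSD tier.

    extent_io_counts:     map of extent id -> recent I/O count (its "heat").
    ssd_capacity_extents: how many extents the SSD tier can hold.

    Rank extents hottest-first and take as many as fit; everything
    else stays on (or is demoted back to) the HDD tier.
    """
    ranked = sorted(extent_io_counts.items(), key=lambda kv: kv[1], reverse=True)
    return {extent_id for extent_id, _count in ranked[:ssd_capacity_extents]}


# Three extents of one LUN, room for two on SSD: the two hottest win.
heat = {"ext-0": 500, "ext-1": 10, "ext-2": 900}
print(plan_promotions(heat, 2))  # {'ext-0', 'ext-2'}
```

The point of working at extent granularity rather than whole-LUN granularity is exactly the one made above: only the hot fraction of a LUN earns a place on expensive SSD, while the cold bulk stays on dense nearline/SATA drives.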