OMG: Hu-itachi maths strikes back

ORIGINALLY POSTED 22nd October 2008

9,276 views on developerworks

Oh My God. I’m sorry, but OMG*!

Read cache hits. OK, so we can all do them, we can all do lots of them, but it seems only HDS, or maybe Hu (or whoever writes his marketing page that passes for a blog), thinks that storage users are blinded by science. We aren’t, and neither is EMC on this one. There is little point in quoting read cache hit numbers: OK, so it shows you roughly what a box is technically capable of, but for a real-life workload you are probably looking at, at best, one quarter of that throughput, and even then only if you are doing 512-byte transfers. Does anybody do 512-byte transfers these days? How large is your average email? Add in a few attachments, or that ‘nifty’ pseudo-signature at the end… ahem…

Anyway, it seems our universal vendor agreement over dodgy Hitachi Maths has completely gone over Hu’s head, and he’s at it again.

900,000 IOPs… OK, so let’s think about this… that would be 3,000+ 15K RPM SAS drives. How many does the biggest AMS 2500 support? 480, according to its technical sheet. So we are about 2,500 drives short here, Hu… how does that work?
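The back-of-envelope maths here is worth making explicit. A minimal sketch, assuming (my figure, not any vendor’s spec) that a 15K RPM drive sustains roughly 300 random IOPs:

```python
# Sanity-check the 900,000 IOPs headline against real spindles.
IOPS_PER_15K_DRIVE = 300   # assumed sustained random IOPs per 15K RPM spindle
claimed_iops = 900_000     # the headline AMS 2500 figure
max_drives = 480           # AMS 2500 maximum, per its technical sheet

drives_needed = -(-claimed_iops // IOPS_PER_15K_DRIVE)  # ceiling division
shortfall = drives_needed - max_drives

print(f"Drives needed to sustain the claim: {drives_needed}")  # 3000
print(f"Drives short of the box maximum:    {shortfall}")      # 2520
```

Even if you quibble with the per-drive figure, you would need the claim to be wrong by a factor of six before the spindles fit in the box.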

So, since you are on this tack: I don’t normally quote read cache hits, but let me QUANTIFY that the recent SVC Entry Edition (in a single-node-pair install) will do 750,000 read cache hits (512 B). Does that mean we’ve come second here, or does it mean that HDS are simply providing marketing numbers that just don’t mean a thing in the real world? Of course, you could cluster 4 pairs of SVC EE nodes and have a memory-hit capability of 2.5M IOPs – but that’s just nonsense, as you’d need 8,334 15K RPM drives behind such a system, and we all know that there is a big difference between sub-millisecond cache hits and multi-millisecond disk reads. So you are probably looking at 25,000 15K RPM HDDs to provide that number of IOPs, and the system itself would limit the IOPs due to the wait time…
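The same sanity check applies to our own clustered figure, which is exactly why a memory-hit number is meaningless on its own. Using the same assumed ~300 random IOPs per 15K RPM spindle:

```python
# Spindles required to back the 4-pair SVC EE memory-hit figure with
# actual disk reads. Per-drive IOPs is my assumption, as before.
IOPS_PER_15K_DRIVE = 300
cluster_cache_iops = 2_500_000  # 4 node pairs, memory-hit capability

drives_to_match = -(-cluster_cache_iops // IOPS_PER_15K_DRIVE)  # ceiling
print(f"Spindles just to match the cache-hit rate: {drives_to_match}")  # 8334
```

And that 8,334 figure is the optimistic floor: once you account for queue wait times and real access patterns, the practical number balloons towards the 25,000 drives mentioned above.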

Anyway, I’m sure none of you are fooled by this latest instance of ‘Hitachi Maths’ – one more of these and I’ll have to start a new ‘Hitachi Maths’ Wikipedia article, as it is fast becoming storage IT folklore…

Update: 23rd October

Another – OMG. Hu has actually acknowledged one of my, and Chuck’s, comments and has even posted them on his blog. However, he says:

Hu: The only way I know to do any type of apples to apple compare of different architectures is to cite the maximum read cache hits. It is not realistic from an application view but it does show the engineering capability.

In case he doesn’t post it, here is my response to that:

Me: Hu, that’s an interesting take. Read cache hits only show the “top” end of the box, into memory – not out to disk, i.e. the capability of the algorithms for caching, destage, prefetch, etc. I’d argue that a 100% write miss actually shows more about a box (technically) than read cache hits.

Also, is it not the case that (if this is the same as the USP) these are “super read cache hits” – so they don’t even leave the buffers of the fibre ports? I.e. there is no DMA involved, just an FC buffer to FC frame conversion. If that is the case, this is NOT AT ALL representative, even at a technical level.

What do you all think?
