ORIGINALLY POSTED 7th September 2008
11,310 views on developerworks
I was excited about the press release and finally making public news of our work with SVC and NAND flash technology, but I wasn’t prepared for the sheer scale of the press coverage we would get!
Just google “IBM quicksilver” for an idea of the coverage – on the first day of the release there was almost nothing; now there are over 800,000 hits…
There are a few independent articles, many re-hashes of the press release, and even my discussions with Mr Burke led to an article on EETimes. We are doing well at the moment though, with EMC advertising our products for free, and although it may be a ‘science experiment’ in Mr Burke’s eyes, our customers are seeing the potential – I’ve had many requests along the lines of ‘when can we have it’, ‘we’d like to work with you on this’, and ‘can we come and see it’.
One of the key points I wanted to make in this post was to clear up any confusion / attempts at black-listing IBM when it comes to the SPC fair-use rules. Mr Burke implied in his ‘science experiment’ post that we were claiming this as a non-standard SPC benchmark. It’s not an SPC benchmark at all. As I clearly stated last week, it’s an internal benchmark of 70% read / 30% write, all miss, at 4KB transfers. For some time this has been known in the industry as a ‘typical’ workload for an open systems database, where the filesystem uses 4K pages. We fully understood the implications of the SPC fair-use rules, and my close colleague in Tucson, Bruce McNutt (co-author of many SVC white papers and one of IBM’s SPC representatives), was in Hursley in July to help us run various internal benchmarks. At no point did we plan to release SPC-like or SPC-comparison numbers. We simply ran the same 70/30 4K test on an SVC system that was comparable to the system we submitted to the SPC for auditing.
Any vendor can state amazing IOPS numbers. Flash vendors do this, storage vendors do this, and everyone knows to take them with a pinch of salt. A read cache hit number is interesting, but doesn’t really tell you anything more than how well the upper levels of a code stack perform – sometimes not even that, just the hardware. Hitachi claim over 3 million IOPS from the USP-V – but that’s their ‘super-cache-hit’, where the same data doesn’t even leave the hardware buffers on the host-attached controller ports. Anyone can provide those kinds of numbers, and we didn’t want to risk being tarred with the same “so what” brush. The 1 million 4K IOPS were real operations all the way down to the physical storage medium, with 70% reads and 30% writes – no DRAM, no hardware buffering and no controller caching. Just pure physical IOPS. The comparisons made in the press release were therefore against the same benchmark (going down to physical 15K RPM HDDs and missing the SVC cache) using an 8-node SVC cluster with the same number of HDDs and the same logical configuration as the system we used for our latest SPC-1 benchmark.
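For readers curious what a 70/30, 4K, all-miss workload actually looks like, here is a minimal sketch of a generator for that kind of I/O mix. This is purely illustrative – it is not the tool we used internally – and the function name, device size and seed are my own assumptions. The idea is simply: 70% of operations are reads, offsets are 4K-aligned and spread uniformly across the whole device, so back-to-back operations are very unlikely to hit any cache.

```python
import random

BLOCK_SIZE = 4 * 1024   # 4 KiB transfers, as in the benchmark
READ_FRACTION = 0.70    # 70% reads / 30% writes

def make_workload(num_ops, device_bytes, seed=42):
    """Generate (op, offset, length) tuples for a 70/30 random mix.

    Offsets are aligned to the block size and spread uniformly across
    the device, which is what makes the workload 'all miss' - each
    operation lands somewhere a cache almost certainly hasn't seen.
    """
    rng = random.Random(seed)
    blocks = device_bytes // BLOCK_SIZE
    ops = []
    for _ in range(num_ops):
        op = "read" if rng.random() < READ_FRACTION else "write"
        offset = rng.randrange(blocks) * BLOCK_SIZE
        ops.append((op, offset, BLOCK_SIZE))
    return ops

# Hypothetical 1 TiB device, 100,000 operations
workload = make_workload(100_000, device_bytes=1 << 40)
reads = sum(1 for op, _, _ in workload if op == "read")
print(f"read fraction: {reads / len(workload):.2%}")
```

In a real harness each tuple would be issued against the raw device (O_DIRECT, many outstanding requests) and IOPS measured as completed operations per second; the point here is only the shape of the mix.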
The key point here is that we are comparing a non-SPC benchmark on SVC with the same non-SPC benchmark on ‘Quicksilver’ – which grounds the comparison in a known, world-record-breaking HDD solution, SVC.
Anyway, it’s been fun watching and responding to the media storm we created, and it’s now 6 for 6 – Mr Burke’s last six posts have all discussed IBM products rather than his own… defensive? It would seem so. Free marketing… for sure. Highly amusing… you bet.
Next week all will become clear. The week after that I will be attending our Hursley 50 Years of Innovation events – CIO, press and academic sessions. For those of you visiting Hursley, we will have a ‘Quicksilver’ system on demo and I’ll be helping man the storage podium, so come by and say hi.