ORIGINALLY POSTED 11th September 2008
21,933 views on developerworks
The interop just keeps growing. As you would expect, you can today attach the new IBM Storage products behind SVC running the latest 4.3.0 software.
SVC has actually supported XIV since August: we completed the controller qualification during our last SVT cycle, and the IBM Storage teams in the US completed the qualification of the DS5000 on our behalf.
Why would you want to put XIV behind SVC? Well, for the same reasons you would put any storage device behind SVC: you gain all the advanced functions that SVC can provide on the backend storage. You get the ability to migrate data onto XIV non-disruptively, as usual, and the ability to backup, archive, migrate and replicate data from the XIV to other controllers. The performance enhancements over normal SATA-based controllers allow a much larger capacity pool to be attached to SVC while maintaining the performance of enterprise disks. And the simplicity and ease of use of the interface means you can get this storage up and running behind SVC in next to no time.
There has been a lot of FUD over the last few weeks, coming from those who feel they need to spread such tales.
On the other hand there are positive articles; I think Steve Duplessie’s article is a great read.
Here are a couple of the more important untruths/comments that have been spread:
- 1-way cache memory without mirror writes
Not true. By the time the data is in cache it’s already been mirrored to another data module, so while the cache in each module is 1-way, there are two copies of the data.
- No GlobalMirror
This is true. However, as this post discusses, we have a workable, industry-leading virtualization device that provides Metro and Global Mirror, so we can provide this functionality. And of course the remote site does not need to be an XIV array; it can be any supported SVC controller, which is much more flexible.
- Lack of non-disruptive upgrades
I understand that the r10 software does provide a means (the same as we do in SVC) to upgrade each module in turn non-disruptively.
- “How much does a free XIV array cost”
The general gist of BarryB’s post is to suggest it uses more power than comparable systems. I’m not going to start arguing the point, nor his maths (I presume he made sure they didn’t involve any ‘Hitachi Maths’), but one question you may want to ask back in response is how much power overhead DMX is adding to those EFDs. It may be all well and good to spout the energy savings that flash can give at the EFD level, but what about the ‘wrapper’? How energy efficient is a DMX with 32 flash drives? Just a counter point. Also in this post he comments that all data flows over a 1Gb link, implying this is a single link; in fact there are many 1Gb ethernet ports in use. He also suggests there is no upgrade path for existing customers, but there is a migrate-in mode that allows you to suck data into the XIV, and of course once it’s behind SVC you can do this any day without disruption, with minimal downtime as you “image mode” import the volumes.
- Double drive failures
These are nasty no matter what the controller. RAID-6 allows for more protection, but the extra P+Q parity rebuild workload is just as likely to result in a third drive failure occurring. I’m no statistician, but I’m sure there are theses out there looking at the likelihood. With the rebuild work spread thinly over a large number of disks and, the key thing, the greatly reduced rebuild time (minutes rather than hours), there is less of a chance of this occurring. Most storage controllers provide some level of data striping and array pooling over and above the RAID protection. I’m sure there are many, many thousands of users out there running RAID-5 with striped volumes above multiple arrays. What happens if two drives fail on one of those RAID-5 arrays? Yup, same thing. The most critical case is a high-performance application where data is spread over many hundreds of spindles to get the performance needed, and chances are you’ll have regular backups and even offsite replication to protect such critical workloads.
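To put some rough numbers on that reasoning, here’s a back-of-envelope sketch. The MTBF, drive counts and rebuild times are my own illustrative assumptions, not vendor figures, and it uses a simple independent exponential failure model:

```python
# Back-of-envelope sketch: probability that at least one more drive fails
# during the rebuild window. MTBF and drive counts are assumed values.
import math

def p_additional_failure(drives_exposed: int, rebuild_hours: float,
                         mtbf_hours: float = 1_000_000) -> float:
    """P(>=1 of `drives_exposed` drives fails within the rebuild window),
    assuming independent exponentially-distributed failures."""
    return 1 - math.exp(-drives_exposed * rebuild_hours / mtbf_hours)

# Classic 8-drive RAID-5 array: 7 surviving drives, rebuild takes ~12 hours.
p_raid = p_additional_failure(drives_exposed=7, rebuild_hours=12)

# Distributed rebuild: ~179 surviving drives in the pool, but the rebuild
# completes in ~15 minutes because every disk does a small slice of the work.
p_dist = p_additional_failure(drives_exposed=179, rebuild_hours=0.25)

print(f"RAID-5 window:      {p_raid:.2e}")
print(f"Distributed window: {p_dist:.2e}")
```

The point the model makes is that the exposure is roughly (drives at risk) × (rebuild window), so shrinking the rebuild from hours to minutes shrinks the overall exposure even though far more drives participate.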
I’m sure I’ve missed a few more; I need to read back through various posts, as I’m sure I’ve seen a few ‘artistic license’ comments out there 🙂
The interop support details for SVC and XIV are available on our SVC support pages : http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#XIV
It’s worth noting that when running XIV behind SVC, we don’t support the thin provisioning or copy services functions down at the XIV level – this is for the same reason we don’t in general support these functions on other controllers.
Copy services are simple to explain: since only SVC knows the allocation of a vdisk to physical storage, copy services need to sit above the storage pooling layer. Similarly, taking a copy below SVC would require a flush of the SVC cache in conjunction with the controller cache to ensure the data on disk was a consistent image. The alternative is to go for something like Invista’s approach and provide volume-level virtualization with no cache. This doesn’t get the best out of virtualization, however, as you can’t stripe for performance (you are limited to the performance of the single volume), nor can you perform sub-volume-level migrations, hot-spot management, performance enhancement with caching, etc.
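To see why that cache flush matters, here’s a toy model (my own sketch, not SVC internals): a point-in-time copy taken at the backend level, below a write-back cache, misses any dirty data the cache still holds.

```python
# Toy model of a write-back cache above backend storage. A backend-level
# point-in-time copy taken without flushing the cache captures a stale image.

class WriteBackCache:
    def __init__(self, backend: dict):
        self.backend = backend
        self.dirty = {}          # writes acknowledged but not yet destaged

    def write(self, lba: int, data: str):
        self.dirty[lba] = data   # the host sees the write complete here

    def flush(self):
        self.backend.update(self.dirty)   # destage dirty data to disk
        self.dirty.clear()

backend = {0: "old"}
cache = WriteBackCache(backend)
cache.write(0, "new")            # host write, held in cache

snap_without_flush = dict(backend)   # backend-level copy: misses dirty data
cache.flush()
snap_with_flush = dict(backend)      # consistent only after the flush

print(snap_without_flush[0])  # old
print(snap_with_flush[0])     # new
```

The backend-level snapshot still shows the old data, which is exactly the inconsistent image you’d get from controller-level copy services running underneath the SVC cache.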
Thin provisioning behind SVC is an interesting one. Because we split the capacity into extents, and we stripe over these extents across multiple volumes, we expect the capacity to actually be there. If you happen to run out of space on the thin-provisioned disks (having missed the alerts etc.) then you risk taking a lot of vdisks offline. Since SVC provides Space-Efficient VDisks anyway, it’s not an issue.
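As a sketch of why that risk fans out so widely (names and layout here are my own illustration, not the SVC API), consider extents striped round-robin across the managed disks in a pool; one backend LUN running out of real capacity touches almost every vdisk:

```python
# Illustrative sketch: each vdisk's extents are striped round-robin across
# the managed disks in a pool, so if one backend disk stops honoring writes
# (e.g. a thin-provisioned LUN that ran out of real capacity), every vdisk
# with an extent on it is at risk of going offline.

def stripe(vdisks: dict, mdisks: list) -> dict:
    """Map each vdisk's extents round-robin across the pool's mdisks."""
    layout = {}
    for name, n_extents in vdisks.items():
        layout[name] = [mdisks[i % len(mdisks)] for i in range(n_extents)]
    return layout

pool = ["mdisk0", "mdisk1", "mdisk2", "mdisk3"]
layout = stripe({"vdisk_a": 8, "vdisk_b": 6, "vdisk_c": 5}, pool)

failed = "mdisk2"   # thin-provisioned backend LUN out of space
affected = [v for v, extents in layout.items() if failed in extents]
print(affected)     # any vdisk with >=4 extents spans all four mdisks
```

With striping, the blast radius of one exhausted backend LUN is essentially the whole pool, which is why SVC expects the backend capacity to really exist and keeps the space efficiency at its own layer instead.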
Anyway, I know we’ve had lots of interest from existing SVC customers regarding XIV support, and a number of prospective SVC and XIV sales are in the pipeline.
I’ll cover some more details of the DS5000 (its future-proofing interface technology, the jump in performance, and the increased drive support) over the next couple of days.