ORIGINALLY PUBLISHED 3rd September 2007
10,601 views on developerWorks
Talk to any storage or system administrator and one of their major pain points (after data movement pains) is making A work with B and keeping C at the right firmware level so that A works with C as well as B and so on. Sometimes this just isn’t possible and can lead to infrastructure ‘exceptions’ which complicate the environment even further. Over at RupturedMonkey you can see one such issue, and an understandable vent of frustration!
It is a difficult problem for anyone to solve, especially when vendor A doesn’t talk nicely with vendor B, and even more so when vendor C doesn’t even want to open a dialogue with vendor B. Customers and users are probably in the best position to put pressure on vendors to make things happen, especially in situations like vendor C’s. However, at the end of the day, that pressure doesn’t help you get your job done.
Wouldn’t it be great if you could at least reduce one or two of the interop requirements and matrices? Most forms of storage virtualization can make your life easier, thanks to the abstraction layer they introduce. Again, the scope and flexibility you gain depend on where the virtualization is done.
Let’s look at an example of a user who wants to map LUNs from a DMX, a DS8000 and a DS4000 to the same AIX host. To enable full multi-pathing support you need PowerPath, SDD and RDAC installed on the host. In some cases these software layers will not even co-exist. You need to be especially careful about which hdisks are mapped through which ports, to ensure the correct software handles each one. It gets worse: the HBAs in the host may not all support the same firmware / device driver levels with these storage controllers, and even the fabric switch places its own requirements on everything! It’s no wonder we all get into a mode of ‘if it ain’t broke, don’t fix it’. Sometimes you have no choice: you stumble across a problem that’s ‘fixed in the latest level’ and you need to upgrade… but what does that do to everything else in your environment?!
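To make that combinatorial headache concrete, here’s a minimal sketch in Python. The controller names are from the example above, but the firmware levels and qualification sets are entirely made up for illustration – the point is just how quickly the intersection of qualified levels shrinks to nothing:

```python
# Toy model of an interop matrix: each storage controller is qualified
# against only certain HBA firmware levels. All version numbers below
# are invented purely for illustration.
supported_hba_fw = {
    "DMX":    {"1.90", "1.91"},
    "DS8000": {"1.91", "1.92"},
    "DS4000": {"1.92", "1.93"},
}

def workable_fw_levels(controllers):
    """Return the HBA firmware levels qualified against *every* controller."""
    levels = None
    for name in controllers:
        qualified = supported_hba_fw[name]
        levels = qualified if levels is None else levels & qualified
    return levels or set()

print(workable_fw_levels(["DMX", "DS8000"]))            # {'1.91'} - one level works
print(workable_fw_levels(["DMX", "DS8000", "DS4000"]))  # set()    - no single level works
```

Layer the multi-pathing drivers and switch firmware constraints on top of that and you can see why a single upgrade ripples through the whole environment.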
In-band appliance based
So how can an appliance help? Let’s stick to SVC, as it’s what I know well. With SVC the primary multi-pathing software, supplied free of charge it should be noted, is IBM SDD (Subsystem Device Driver) – the same multi-pathing software required for direct-attach DS8000 systems. The beauty of virtual disks exported by SVC is that they look the same no matter which operating system is using them. That is, you need the same multi-pathing software, SDD, on each host. You no longer have to worry about lots of different vendors’ software co-existing; you just need SDD. Of course, you may not want to use SDD, and SVC also supports the major OS-native MPIO solutions and the other key players like Veritas.
There are of course still requirements on the firmware and device driver levels for the HBAs, but these are now common across all LUNs, because they are all coming from SVC – one common “controller”, as it may be viewed.
Not only that, but you no longer need to worry about the implications of upgrading your storage controller firmware levels. As long as the level is supported by SVC, the interaction and co-existence story stops there.
This does of course mean that your gain is our pain. The vast support matrix means SVC has to handle all host and all storage interactions, and we have to test them all, continually! The SVC test team is now spread around the globe: San Jose, Tucson, Hursley, Vác (Hungary) and Shanghai. Each of these teams has become expert in certain platforms, operating systems, software and controller functions. Each major release of SVC code adds 2 or 3 major controllers and 1 or 2 host operating systems. The numbers are quite staggering, with SVC now supporting 12 major operating system “families” and their associated clustering and native multi-pathing, over 50 HBA models, all major fabric vendors and more than 80 models of storage controller – and rising. Of course, if your hardware or software requirements are not met you can use the RPQ process to gain support; last year the RPQ team provided support for many hundreds of individual RPQ requests. The rule is that any RPQ made during a release cycle becomes fully supported for all customers at the next major release of the code.
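As a back-of-the-envelope feel for why that test effort is so large, multiply the figures above together. This naive product treats every combination as independent, and the count of three fabric vendors is my own assumption, so it overstates the real, pruned matrix – but it gives a sense of scale:

```python
os_families    = 12   # operating system families, as above
hba_models     = 50   # "over 50 HBA models"
controllers    = 80   # "more than 80 models of storage controller"
fabric_vendors = 3    # assumption: "all major fabric vendors" taken as three

nominal = os_families * hba_models * controllers * fabric_vendors
print(f"{nominal:,} nominal combinations to qualify")  # 144,000
```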
Controller based
While the same benefits and drawbacks are possible with controller-based solutions, it’s unlikely you are going to virtualize everything behind a single controller. Hu and Claus seem to imply that the use of the USP’s external controllers is primarily recommended for backup and archive solutions, and I’ve outlined my concerns over the ludicrous claim of 296PB of external attach (I did post this reply on both Hu’s and Claus’s blogs and have been ‘ignored’, or should I say ‘moderated’, out of the picture):
So I’ve quizzed Hu about the marketing figure of 296PB of external attach. He’s neglected to post my comments; hopefully you will. So let’s assume all of that 296PB is 750GB SATA drives that can handle about 80 IOPS each. Scaling that up, that’s over 31 MILLION IOPS… and even if they are only 25% busy, that’s almost 8 MILLION IOPS. Now just how can you claim this level of support with a system that itself can only do 3 MILLION read cache hits – especially when the cache isn’t supported on the external attach? Is this marketing figure really something that’s worth mentioning in practice? Further, I understand there is a limit of about 12K IOPS per port, so that would need 660 ports to attach and make use of the 25%… or 2640 ports for 100% use… What would you say is a REALISTIC external attach capacity?
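For anyone who wants to check my arithmetic, here is that comment worked through as a quick script. The 80 IOPS per drive and 12K IOPS per port numbers are the rough figures quoted above; nothing else is assumed:

```python
capacity_pb    = 296      # claimed external attach capacity
drive_gb       = 750      # 750GB SATA drives
iops_per_drive = 80       # rough IOPS for one SATA drive
iops_per_port  = 12_000   # rough per-port IOPS limit quoted above

drives     = capacity_pb * 1_000_000 / drive_gb  # ~394,667 drives
total_iops = drives * iops_per_drive             # ~31.6 million IOPS
busy_25    = total_iops * 0.25                   # ~7.9 million IOPS

print(f"drives:          {drives:,.0f}")
print(f"IOPS, 100% busy: {total_iops:,.0f} -> {total_iops / iops_per_port:,.0f} ports")
print(f"IOPS,  25% busy: {busy_25:,.0f} -> {busy_25 / iops_per_port:,.0f} ports")
```

The script lands at roughly 658 and 2632 ports, which is where the 660 and 2640 figures in my comment come from after rounding up.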
The USP does not provide any caching for externally attached storage and limits the port IOPS to your external controllers, so in the example above there’s no way you’d want to put your DMX and DS8000 behind it. You will therefore still have other controllers on the SAN, directly attached to hosts, and still have the same interop issues.
Switch based – split-path
In general, because these devices present the virtual disk as a LUN provided by the virtualizer, you again gain the advantage of reduced multi-pathing co-existence, and you are again at the mercy of the support provided by the device vendor. I have tried to investigate Invista, but the details on the EMC website are sparse to say the least. It does use EMC’s PowerPath to provide enterprise-level multi-pathing. PowerPath has an associated annual licensing charge – which in some comparable cases costs more than a complete entry-level SVC system.
While searching for supported platforms and storage controllers, I could only find the six supported host platform families on third-party sites, and nowhere could I find a list of supported controllers. Despite downloading the 20MB EMCSupportMatrix as directed by the technical specification brochure, a search through its 3719 pages found only one reference to Invista – on the trademarks page… Combined with the stealth release of v2 and the lack of push from EMC, you could be forgiven for thinking that Invista is not high on EMC’s priority list. I have since been informed that there is a separate Invista support matrix, but I can’t find it publicly available.
In conclusion
When it comes to interoperability, you are at the mercy of your virtualization device vendor. SVC has a 2+ year head start on the competition in this respect, and appliance and switch based solutions can gain the most benefit from common multi-pathing and device support. Controller-based devices need to be able to support the bandwidth requirements of ALL tiers of storage to truly make the same claim.