ORIGINALLY POSTED 14th June 2012
25,166 views on developerworks
On Friday we will release SVC software version 6.4 – it used to be simple, when I could just reference SVC, but of course the same software package can be installed on V7000 systems. At present, V7000 Unified systems remain at the 6.3 level of features and functions.
You’ve probably seen the news in the press, and the IBM RFAs regarding 6.4 that were announced on 4th June. But for many people, and those of you that have been to briefings, User Group sessions and general roadmap discussions, I can at last take pride in the fact that we have FINALLY released what has been referred to as Non-disruptive Vdisk Move (NDVM), Volume Move between IO Groups, or even Multi-node Volume Access. Those of you that attended last year’s Mainz User Group and this year’s Hursley User Group – I promised, didn’t I!
The new volume move between IO Groups is a huge feature for existing users of SVC, and for V7000 now that we support up to 4 V7000 control enclosures clustered as one system. This means you can load balance volumes between nodes in the cluster without disruption to I/O. It also means that other vendors that used FUD to class the clustering in SVC/V7000 as a loose coupling of paired boxes can’t use that FUD anymore. A single volume can be accessed through all nodes in the cluster if you want. We’ve added the concept of access vs. caching IO groups. You can access a volume at any node port, the cache can still be 2-way write back, and the caching IO group can be moved as you see fit. I’ve found that a common deployment model of SVC lends itself to needing this feature. You buy some nodes, deploy volumes and gradually use up the resources on that node pair. You then add another node pair, but need to distribute the load across both IO groups. You always could, but it needed some quiesce time; now you can load balance and move volumes in the same way that you have always been able to move the physical storage at the back end. This, if you like, virtualizes the virtual!
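For those of you who live in the CLI, here’s a rough sketch of what I’d expect the flow to look like – take the exact syntax from the 6.4 Information Center rather than from me, and note that vdisk0, io_grp0 and io_grp1 are just example names:

    svctask addvdiskaccess -iogrp io_grp1 vdisk0   # let the volume be accessed through the new IO group's ports
    svctask movevdisk -iogrp io_grp1 vdisk0        # move the caching IO group, I/O carries on throughout
    # ...rescan/rediscover paths on the host so it picks up the new ports...
    svctask rmvdiskaccess -iogrp io_grp0 vdisk0    # drop the old access IO group once the host sees the new paths

The key point is the host-side path rediscovery in the middle – the volume stays online the whole time.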
6.4 also brings Fibre Channel over Ethernet (FCoE) support to both product lines. FCoE is still in its infancy, but I would guess that 16Gbit native Fibre Channel will be the last speed jump we see in mass production use. After that, SANs and Ethernet networks will finally converge at 40Gbit and beyond. The various FCF requirements on the CEE FCoE infrastructure mean that today we are supporting FCoE host attachment; I’ve yet to see any mainstream disk controllers that are FCoE only. So disk and fabric attach is still via native Fibre Channel (for SVC); with V7000 of course you can use FCoE only, but you need an FC network to cluster multiple control enclosures together. While I’m talking about clustering, V7000 now supports up to 4 control enclosures clustered as a single system – not only from a management point of view, but with NDVM you now have true 8-way volume access. With a max config V7000 that means 960x 2.5″ drives, or 480x 3.5″ drives, giving a single system 1.44PB of storage!
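(If you’re wondering where the 1.44PB comes from, I’d assume it’s the 3.5″ nearline config: 480 drives x 3TB = 1,440TB, or roughly 1.44PB of raw capacity.)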
The FCoE support is, like most SVC features, a software upgrade. Those of you that have SVC nodes with the 10Gbit features, or V7000 Model 300 controllers, will gain FCoE support as soon as you upgrade to 6.4. Model 100 V7000 systems can be non-disruptively upgraded to Model 300, and SVC CG8 nodes can be upgraded (non-disruptively) to include 10Gbit FCoE or 10Gbit iSCSI support. The same 10Gbit ports are both iSCSI and FCoE capable. Performance-wise, the FCoE ports compare (transport speed wise) with the native Fibre Channel ports (8Gbit vs 10Gbit), and recent enhancements to the iSCSI support mean we see similar performance levels with iSCSI as with Fibre Channel. In V7000, 10Gbit iSCSI is more than capable of saturating a maximum HDD configuration.
The final, and probably key, feature in 6.4 is the embedded Real-time Compression (RTC) function. This operates as a thin-provisioned volume, with data being compressed in real time, BEFORE the I/O is written to disk. This isn’t a post-processing feature, and it doesn’t require multiple passes; it’s a single-pass, additional caching layer in the software stack.
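To give you a flavour of how it surfaces to the admin – and treat this as a sketch rather than gospel, pool0 and myvol are just example names and the definitive syntax is in the 6.4 Information Center – creating a compressed volume is essentially the familiar thin-provisioned mkvdisk with a -compressed flag:

    svctask mkvdisk -mdiskgrp pool0 -iogrp io_grp0 -size 500 -unit gb -rsize 2% -autoexpand -compressed -name myvol

The -rsize and -autoexpand options are the usual thin provisioning controls; the real capacity consumed grows as the compressed data is written.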
As this is my third post of the day (wow!), and I’ve already covered a lot here, there is so much to discuss on RTC that it’s worthy of my next few posts.
In the meantime, I’d recommend users read the new (draft) Redpaper discussion of RTC – how it works, how to plan for it, and so on.
Yet again the SVC development team has not only brought some much-needed features like NDVM, but is also helping to future-proof your investments: FCoE – how about using SVC as a bridge between your new FCoE host infrastructure and legacy FC SANs, with a single management point – not to mention the RTC function providing 50% or more data reduction, to help provide compression without compromise.
The next few posts will detail how the RTC code works, what it can do, and some of the proof points we’ve seen. I haven’t forgotten about my most popular post of late – Configuring V7000 and SVC for Optimal Performance Part 1 – and I do have parts 2 and 3 in draft…. coming soon.