Introducing Spectrum Virtualize 7.5.0 – HyperSwap, grep and more


16,848 views on developerWorks

Just over two weeks ago, we released v7.5.0 of the recently re-branded ‘Spectrum Virtualize’ software – the software that we all know as the code running in both SVC and Storwize products.

The major new feature in 7.5.0 is HyperSwap – which you can think of as a new and improved Stretched Cluster solution, allowing I/O Groups to be placed at each site, rather than nodes at each site. This means we can now run HyperSwap configurations using Storwize hardware (V5000 and V7000 only).

While this is the main feature, and I’ll cover a bit more detail in a minute, two small items will likely make several of my regular readers/comment posters happy.

  1. You can now have up to 25 concurrent open ssh sessions per node.
  2. You can run a selection of Unix commands directly on the node, two of the most requested being grep and more – I can hear a few ‘woohoos’ from here – and I’m in Seoul, Korea this week!

Stretched Cluster

Back in 2007 we introduced the Vdisk, or Volume Mirroring, feature – like LVM mirroring, but in the SAN itself. This was quickly adopted as a way to stretch the two nodes that make up a caching pair and provide HA site-level protection. Commonly known as Split Cluster in the early days, we adopted the ‘Stretched Cluster’ term and officially supported configurations in 2008. Over the years this feature has evolved, and in the 7.2 and 7.3 timeframe some of the side effects – such as the need to sometimes copy writes over the long distance twice – were removed. So don’t listen to any of the FUD from a certain per-plexed competitor who tries to tell you otherwise – their ‘how to compete against SVC’ deck is so far out of date…

Let’s look first at what Stretched Cluster provided.

An SVC system is deployed as a pair of appliances, engines or nodes that make up a single I/O Group. Each volume has an affinity to an I/O group, and those two nodes handle the read and write caching for said volume. Both nodes present active paths to a volume, but we present those paths with a preference: for internal load balancing on cache destage operations, only one node needs to do the destage, so the preferred node can be used to balance that work. We could also use this to help configure server affinity to sites, by modifying the preferred node to match the site that was active for a given volume.

Really, Stretched Cluster was a stretched I/O Group – yes, the cluster spanned two sites, but the I/O group also spanned both sites. By doing this, you didn’t have to change the base volume behaviour: paths to the volume were presented to servers at both sites, one site would be advertised as preferred, both nodes cached writes, and since 7.3 we didn’t have to copy the data again at the Vdisk Mirroring layer.

One downside of stretching I/O Groups is that when a site does fail, the other half takes over – but now you don’t have a whole I/O group, just one node servicing those volumes, and it runs in write-through cache mode, since we can’t guarantee cache redundancy.

Workarounds for this are available – standby nodes and manual intervention could get the system whole again – but to some people this was not good enough. So…


HyperSwap truly stretches the cluster, by placing a whole I/O group at each site and presenting the same volume, in active-active mode, through TWO I/O Groups. Now we have FOUR nodes providing paths to a volume, so you can lose three of the four nodes and I/O – and your applications – would continue. This is HA within HA!

Since 7.2 we have supported Enhanced Stretched Cluster, where you could tell the cluster which site each node and disk system belonged to. With HyperSwap we have extended this ‘site awareness’, and you can now define host site affinity too. This has a major advantage when combined with our active/active preferred-pathing model: we can dynamically change how we present paths to volumes, based on the site affinity.

To explain, see the diagram below, where green paths are preferred, and red paths are non-preferred. You can see that the same volume is presented at both sites, but the preferred/non-preferred nature of the paths changes based on which site the nodes and hosts belong to. By doing this we can very simply (and without user configuration – other than defining the site information on day 1) ensure that host data does not end up crossing the long distance link – unless of course it needs to due to some site failures.
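As a toy illustration of that rule – node names and site numbers here are invented, and the real selection logic of course lives inside the product – the "preferred if the node's site matches the host's site" idea can be sketched like this:

```shell
# Sketch only: mark a path preferred when the node's site matches the host's site.
# Each path is given as node:site; node names and sites are made up for illustration.
preference() {
  host_site=$1
  shift
  for entry in "$@"; do
    node=${entry%%:*}        # node name before the colon
    node_site=${entry##*:}   # site number after the colon
    if [ "$node_site" = "$host_site" ]; then
      echo "$node preferred"
    else
      echo "$node non-preferred"
    fi
  done
}

# A host at site 1, with an I/O group at each site:
preference 1 node1:1 node2:1 node3:2 node4:2
```

Run as written, this reports the two site-1 nodes as preferred and the two site-2 nodes as non-preferred – exactly the behaviour that keeps host I/O off the long-distance link until a failure forces it across.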


Those running Enhanced Stretched Cluster can also now make use of the host site affinity when upgrading to 7.5.0 software.

There is a roadmap of future enhancements to HyperSwap, as you can imagine, but this is a great step forward in helping our customers provide fault-tolerant, site-disaster-tolerant, highly available solutions for today’s ‘always on’ world.

Oh, and in case you missed it: because it’s I/O Group to I/O Group HA now, you can deploy HyperSwap solutions using Storwize V5000 and V7000 systems – no longer limiting this function to SVC systems.

Other 7.5.0 Enhancements

7.5.0 also brings along the usual currency and miscellaneous enhancements, as listed here:

  • VMware v6 and vVol support
  • Microsoft ODX Support
  • 8TB NL-SAS LFF and 1.6TB SFF SSD
  • Enhanced fine-grained per volume QoS
  • Up to 2GB FlashCopy bitmap space (up from 512MB max) – a few more woohoos for this one!
  • Default volume creation with FastFormat enabled
  • 25 concurrent ssh sessions
  • And did I mention, a few Unix commands you can run directly on the nodes:

grep, more, sed, sort, cut, head, less, tail, uniq, tr, wc
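To give a flavour of why this matters, here’s a quick sketch – the volume names below are made up, and the file simply stands in for the output you’d get from a CLI listing command such as lsvdisk on a real node:

```shell
# Illustrative only: a fake volume listing standing in for `lsvdisk` output.
printf 'vol_db01 online\nvol_web01 offline\nvol_db02 online\n' > vols.txt

# On the node you can now do the equivalent of:  lsvdisk | grep db
grep db vols.txt

# ...or count matching volumes:  lsvdisk | grep -c online
grep -c online vols.txt
```

No more pulling the whole listing back to your workstation just to filter it – the filtering happens right there in the ssh session.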
