ORIGINALLY POSTED 3rd November 2015
26,123 views on developerworks
It's that time again: another six months since our last software update, and we bring you version 7.6 of the Spectrum Virtualize software. It is applicable to all SVC and Storwize systems currently supported out there, but note the end of life for support of 8G4 and 8A4 systems in the RFA details issued last month.
The abstraction of function is one of the great advantages of the SVC platform over competitors' products, whether those with limited built-in virtualization capability, like those from HDS and HP, or those from, I guess, Dell now, which claim to virtualize storage but do nothing more than redirect I/O. These cannot provide said abstraction of function, relying instead on the capabilities of the underlying storage. This abstraction of function is what we refer to as one of the key benefits that storage virtualization can provide. Why? Because it lets you build truly heterogeneous storage environments while maintaining common management, simplifying the environment, accelerating performance (via caching, flash exploitation, scale-out clustering and hardware acceleration where needed) and, critically, providing a platform through which we can deploy common advanced storage functions across the entire heterogeneous environment, from entry-level storage to enterprise-class storage, while reducing complexity.
Introducing 7.6 Functionality
Encryption for Virtualized Storage
Let's prove this via an example. 7.6 brings 'Software Encryption for data at rest' capability to the Spectrum Virtualize software. By simply upgrading to the new level of code, the latest hardware platforms can make use of some more of those clever Intel offload instructions in the CPU to provide encryption of virtualized storage. By using these offload capabilities, there is virtually no performance overhead, and the system can maintain the same throughput levels as previous software versions. (Oh, and by the way, we are seeing some 10% boost in performance levels due to some more code tuning in 7.6!)
Just like the current Gen2 V7000, we don't require any Self Encrypting Drives (SEDs), so you can solve your compliance and security requirements on ANY of the 250+ supported storage controller models that you can virtualize behind the SVC and Storwize products. At a recent User Group, Ian Shave referred to this as allowing the storage admins 'to be a hero' and provide an answer to your company's encryption requirements with no added hardware.
There is a one-off license per node to enable encryption, and this exists purely to ensure we comply with export regulations regarding encryption capabilities being exported to certain geographies. This is a minimal-cost license, but it is the first time SVC makes use of a key-based license, which must be applied to all nodes in a cluster where you wish to encrypt the back-end storage.
We have had to limit the feature to the current generation of Storwize V7000 (Gen2) and SVC (DH8) hardware, due to the use of the offload instructions, which are present only in the latest generation of Intel CPUs. Thus, to enable encryption on an SVC cluster for example, all nodes must be DH8 nodes.
The encryption is enabled at a pool level, so you do need to create a new storage pool and apply the encryption key to that pool in order to enable encryption. You can do this using the recently added 'child pool' capability. So if you have spare capacity in a storage pool, create a child pool, apply the key to the child pool, and migrate volumes from the parent to the child. Alternatively, create a brand-new encrypted pool and migrate volumes into it.
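As a toy illustration of the workflow just described (a sketch of the steps only, not the actual Spectrum Virtualize CLI or GUI; the pool and volume names are invented):

```python
# Toy model of the pool-level encryption workflow. Illustrative only:
# the real steps are performed via the Spectrum Virtualize CLI or GUI,
# and all names here are invented for the example.

class Pool:
    def __init__(self, name, encrypted=False, parent=None):
        self.name = name
        self.encrypted = encrypted   # True once an encryption key is applied
        self.parent = parent         # set for child pools
        self.volumes = []

# 1. An existing, unencrypted parent pool with spare capacity.
parent = Pool("pool0")
parent.volumes = ["vol0", "vol1"]

# 2. Create a child pool, applying the encryption key at creation time.
child = Pool("pool0_encrypted", encrypted=True, parent=parent)

# 3. Migrate volumes from the parent into the encrypted child pool.
while parent.volumes:
    child.volumes.append(parent.volumes.pop(0))

assert child.encrypted and child.volumes == ["vol0", "vol1"]
print("volumes now reside in the encrypted child pool")
```

The same shape applies when migrating into a brand-new encrypted pool rather than a child pool: only the parent link differs.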
Software and hardware encryption can be used in conjunction on the Gen2 Storwize V7000. That is, if you are mixing storage pools with internal RAID encrypted drives/flash drives and externally virtualized storage, apply the key to the pool and the system will apply software encryption only to the external storage, letting the SAS hardware separately encrypt the internal storage.
Something else to note: if you have back-end storage that is already encrypted, you can also intermix this with internal storage and avoid double-encrypting the data, by telling the system that an 'mdisk' is already encrypted when you add it to a pool.
In summary, encryption is now possible using internal SAS-attached storage, external virtualized storage, and external already-encrypted storage, and any combination thereof that you wish to use! If you are looking for a virtualization solution, check whether any other vendor has this capability: one that can not only provide all the features described in my opening paragraphs, but also compression with encryption and all the other necessary features built into the Spectrum Virtualize software.
Distributed RAID
Distributed RAID is a new type of RAID support that we are adding to all Storwize platforms. As the name suggests, this allows much larger sets of disks to be used to create wide-striped RAID-5 and RAID-6 arrays, while solving the rebuild problem that traditional RAID arrays suffer from as spindle capacities grow ever larger, such as 6TB or the latest 8TB drives.
I will post details of the distribution patterns later (without giving away some of the clever IP we have generated), but you can think of this as a large set of drives that still uses a traditional 8+P+Q (or whatever you choose) layout, distributed across the larger set. (Typically, 60 drives per set would be a default size.) The most complex part of the algorithm is the distributed sparing, meaning that the spare capacity is also distributed amongst the set.
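To make the idea concrete, here is a deliberately naive sketch of the concept (my own illustration for this post, emphatically not the actual, IP-protected placement algorithm): an 8+P+Q stripe plus two spare strips, rotated across a 60-drive set so that every drive ends up holding a mix of data, parity and spare capacity.

```python
# Naive round-robin sketch of a distributed RAID layout.
# NOT the real DRAID placement algorithm, just an illustration of how
# data, parity and spare strips can all rotate across a large drive set.

def draid_layout(num_drives=60, stripe_width=10, spares_per_row=2, rows=5):
    """Return one dict per stripe row, mapping drive index -> strip role."""
    layout = []
    strips_per_row = stripe_width + spares_per_row
    for row in range(rows):
        # Rotate the starting drive each row so roles spread over the set.
        offset = (row * strips_per_row) % num_drives
        assignment = {}
        for i in range(strips_per_row):
            drive = (offset + i) % num_drives
            if i < stripe_width - 2:
                assignment[drive] = "data"
            elif i < stripe_width:
                assignment[drive] = "parity"   # the P and Q strips
            else:
                assignment[drive] = "spare"    # distributed spare capacity
        layout.append(assignment)
    return layout

for row, assignment in enumerate(draid_layout(rows=3)):
    print(f"stripe row {row}: {assignment}")
```

Each row still has the familiar 8+P+Q geometry; what changes is that no single physical drive is "the spare".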
With distributed spares come the major advantages. Because the spare capacity is spread over the whole set (rather than sitting on traditional single-drive spares), every drive contributes to rebuild operations when needed. That is, every drive in the set is read from, and written to, during the rebuild, allowing typical rebuild times of 2-4 hours on average, rather than the 36+ hours seen in worst-case examples with traditional single spare drives.
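A back-of-envelope model shows why this helps (the per-drive bandwidth figure below is an assumption chosen for illustration, not a measured number): with a single dedicated spare, that one drive's write bandwidth bounds the rebuild, while with distributed spares every surviving drive absorbs a share of the writes.

```python
# Rough rebuild-time model. Illustrative assumptions only, not IBM
# measurements: real rebuild times also depend on host I/O load,
# reconstruction reads and throttling.

def rebuild_hours(capacity_tb, per_drive_mb_s, writers):
    """Hours to reconstruct one failed drive's data, limited by the
    aggregate write bandwidth of the drives absorbing the rebuild."""
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / (per_drive_mb_s * writers) / 3600

# A 6TB drive, assuming ~50 MB/s of usable rebuild bandwidth per drive:
traditional = rebuild_hours(6, 50, writers=1)    # one dedicated spare drive
distributed = rebuild_hours(6, 50, writers=59)   # 59 survivors share the writes
print(f"traditional: {traditional:.1f} h, distributed: {distributed:.2f} h")
```

Even this crude model lands the traditional case in the same "30+ hours" territory the post describes, and shows the aggregate-bandwidth effect that makes distributed rebuilds dramatically faster.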
The other advantage is that you no longer have spare drives sitting idle most of the time. Since the spare drive is now spare capacity within the set, you gain the performance available from the spares. So in a system with, say, 4 spare drives today, by using these drives within the set you gain 4x single-drive performance for the system to use in everyday operations!
This is particularly advantageous with SSDs or flash drives, where today you may have only a small number of SSDs in the system, configured, say, as 2+P with a spare. Now you can create a 2+P+S distributed array instead, which means you gain the performance of the spare SSD in daily operations, potentially adding 33% more performance to the array!
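The 33% figure follows from simple drive counting: going from three active drives (2+P, spare idle) to four (2+P+S) adds one extra drive's worth of throughput.

```python
# Fractional throughput gain from folding dedicated spares into the array.
def extra_performance(active_drives, spares_absorbed):
    return spares_absorbed / active_drives

# 2+P with one idle spare -> 2+P+S: three working drives become four.
gain = extra_performance(active_drives=3, spares_absorbed=1)
print(f"{gain:.0%}")   # prints "33%": one extra drive over three
```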
Since these arrays use a completely different layout, and we don't currently support the expansion of a distributed array, you do need to create brand-new arrays with candidate drives as the building blocks. That is, there is no conversion from traditional RAID to distributed RAID. So new capacity, or free drives, is required to start using DRAID, followed by a volume-level migration into the new pool.
Best practice would be to maintain the usual rules with storage pools and only mix arrays of the same capability in a single pool. So create a new pool for your DRAID and only add the same type of DRAID to a single pool, i.e. 60 drives, with 8+P+Q and 2 spares, etc. You can have up to 4 spares per DRAID, and up to 128 drives are supported in a single DRAID. Each IO Group can support up to 12 DRAIDs.
HyperSwap GUI Integration
HyperSwap is one of the most talked-about features we released back in June. Every customer wants 24x7x365 availability for their systems, and HyperSwap is a great way to provide site-failure tolerance for HA environments. At 7.5 the configuration and setup of HyperSwap was somewhat manual, and required you to build the constituent objects before connecting them in a HyperSwap relationship. With 7.6 we have added a new 'mkvolume' command, which will gradually replace the 'mkvdisk' command (all new functions and features will only be added to 'mkvolume', while 'mkvdisk' will not go away, for legacy support of existing scripts etc). 7.6 also provides full GUI integration for HyperSwap, with the multi-site setup being depicted in the GUI system panels, as well as simplified configuration of HyperSwap volumes.
IP Quorum Support
When deploying Stretched Cluster, or new HyperSwap solutions, you are splitting the cluster across two sites, and the least reliable part of the whole infrastructure becomes the long-distance links between the sites. SVC clusters have always made use of quorum devices in order to make the tie-break decision in an even split of the cluster nodes. Up until now, these quorum devices needed to be a Fibre Channel attached mdisk, or a SAS attached disk. In order to survive site failures with these HA solutions, the requirement was to deploy a storage controller at a third site, and have the quorum tie-break device reside at that third site.
Not many people have a third site that is Fibre Channel connected, so this was always an issue, or a compromise that had to be made.
7.6 adds the ability to deploy the tie-break device as a Java application running on a physical or virtual machine somewhere that can act as a third site. That is, somewhere independent of the two main HA sites where the SVC or Storwize systems reside. There are some requirements on the IP connectivity to this quorum device: for example, an 80ms maximum round-trip latency, and the application must be able to ping all of the node service IPs.
Up to 5 quorum applications can be deployed for a given cluster, with only one of those being the active quorum tie break device at any given time. The cluster will re-elect a new active quorum from the applications should the active device be lost.
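The re-election behaviour can be sketched as a simple model (my own simplification for illustration, not the real cluster protocol; the application names are invented):

```python
# Simplified model of IP quorum application re-election.
# Illustrative only: the real cluster protocol is more involved.

class QuorumPool:
    MAX_APPS = 5   # up to five quorum applications per cluster

    def __init__(self, apps):
        if len(apps) > self.MAX_APPS:
            raise ValueError("at most 5 quorum applications per cluster")
        self.apps = list(apps)                    # candidate applications
        self.active = self.apps[0] if self.apps else None

    def lose(self, app):
        """An application disappears; re-elect a new active tie-break
        device if the lost one was the active quorum."""
        self.apps.remove(app)
        if app == self.active:
            self.active = self.apps[0] if self.apps else None
        return self.active

# Hypothetical deployment: a third-site VM plus two standbys.
pool = QuorumPool(["site3-vm", "lab-server", "admin-laptop"])
assert pool.active == "site3-vm"
pool.lose("site3-vm")             # third-site VM goes away...
print(pool.active)                # ...a standby is promoted to active
```

Only one application is ever the active tie-break device; the rest are standbys waiting to be elected.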
Quad Port 16Gbit Fibre Channel HBA
You can now order a quad-port 16Gbit HBA for use with the latest SVC DH8 and Storwize V7000 Gen2 platforms. This allows for up to 16x 16Gbit ports per DH8 node (or 128 in a single cluster!) and 16x 16Gbit ports per Storwize V7000 control enclosure (or 64 in a single cluster).
At the same time, we now allow up to 4x 8Gbit HBAs per SVC DH8, so 16x 8Gbit ports, or any combination of 8Gbit and 16Gbit cards up to the maximum of 4 per node.
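The port counts quoted above are straightforward multiplication; here is a quick sanity check (the per-canister HBA split for the V7000 control enclosure is my inference from the quoted totals, not a stated configuration):

```python
# Sanity-check the quoted Fibre Channel port counts.
def total_ports(hbas_per_node, ports_per_hba, nodes):
    return hbas_per_node * ports_per_hba * nodes

assert total_ports(4, 4, 1) == 16     # 4 quad-port HBAs in one DH8 node
assert total_ports(4, 4, 8) == 128    # an 8-node SVC cluster
# Inferred split: 2 quad-port HBAs per V7000 node canister, 2 canisters
# per control enclosure, 4 control enclosures (8 canisters) per cluster.
assert total_ports(2, 4, 2) == 16     # one V7000 Gen2 control enclosure
assert total_ports(2, 4, 8) == 64     # a 4-enclosure V7000 cluster
print("all port counts match the post")
```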
Note this is a completely different HBA card from the current dual-port 16Gbit card. So you can't just enable two empty ports on the current card; you would need to MES-replace the HBAs to migrate up to the new quad-port cards.
As with any release, we roll up a large number of minor changes and enhancements. Some of the more notable ones are:
- Installation of your authorised SSL certificates, with up to 2048-bit RSA or 521-bit ECDSA keys (giving an effective strength of up to 260 bits)
- The ability to set your own ‘Message of the Day’ (MotD) shown on all CLI and GUI logins
- Integrated Compresstimator – Part 1 – CLI only. Ability to run Compresstimator on the node hardware itself via a 'one shot' command for the whole system, or a specific volume. A view will be populated approximately five minutes later showing the results.
- Increase to 512 iSCSI IQN hosts per IO Group
- Service command to reboot an MIA canister in a Storwize (instead of requiring a physical reseat)
- Up to 23% reduction in individual node software upgrade time
- Up to 10% improvement in overall system throughput
So in summary, another bumper release with a couple of key new features, as well as rolling up a lot of the commonly requested features from our existing users. It's worth noting that at least 3 of the enhancements mentioned here were raised by customers, and reiterated by lots of you at various User Group meetings worldwide this time last year. So if you haven't already attended a User Group near you, then what's stopping you? Check out the posts on here for details of those I know about, and if you either arrange a User Group, or want to attend one, be sure to add a comment here and I will be in touch, or put you in touch with a local contact for your area.