Hopefully by now you will have all seen the new updates across pretty much the whole Spectrum Virtualize Family.
The new 8.3.1 software GA’d a week early on the 29th February 2020 and is available for download.
At the same time, as I discussed in part 5 of my potted history partwork, the Storwize products are being replaced by FlashSystem products from the 5000 up to the 9000 series. We’ve also introduced two new SVC hardware platforms, as well as refreshing the 7000 and 9000 product lines.
In this post I will concentrate on the software updates included in 8.3.1 and cover the new hardware over the subsequent few posts.
Spectrum Virtualize 8.3.1
At a high level, version 8.3.1 includes quite a few frequently requested major feature enhancements:
- DRAID Expansion – the ability to dynamically add more drives to an existing DRAID array
- Externally co-ordinated 3-site replication – the ability to have a near and far copy of your production data for added Disaster Recovery
- Support for a new top tier made out of Storage Class Memory (SCM) drives
- Statistical reporting for EasyTier embedded in the GUI
- The next round of Data Reduction Pool performance enhancements
- System level audit enhancements, with things like login history (failures as well as success) and exporting to external audit repositories
- SNMPv3 support – the system now supports both the v2 and v3 protocols
- HyperSwap volume expansion – when the volume is using any form of thin provisioning (including data reduction)
Dynamic DRAID Expansion
This has probably been the most requested product enhancement of the last few years – it has been discussed at most user groups and events I attend – and it is now available after upgrading to 8.3.1.
You can now add between 1 and 12 drives in a single operation. This is important for SMB clients who only need to grow their existing capacity footprint by a small amount each time. Previously a whole new DRAID array would be required, which may have resulted in differing performance capabilities to the existing installation. Now you can simply expand an array and grow the existing capacity and performance (and purchase fewer than 6 drives at a time).
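As a rough sketch of what this looks like on the CLI – the expandarray command is the 8.3.1 mechanism, but treat the exact flag names here as from memory rather than the formal reference, and check the command help on your system:

```shell
# List the current arrays and their drive counts
lsarray

# Expand an existing DRAID array (here mdisk0) to a new total of
# 10 drives, adding drives from the same drive class as the array
expandarray -driveclass 0 -totaldrivecount 10 mdisk0
```

The expansion and rebalance run in the background, so the array stays online while the new capacity and performance become available.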
I will cover the ins and outs in another post, as it’s worthy of a dedicated one.
Externally Co-ordinated 3-site Replication
The 3-site capability provides for co-ordinated disaster recovery across 3 sites, designated primary, auxiliary-near and auxiliary-far, with the standard “star” and “cascade” topologies both supported. Metro Mirror (synchronous) replication runs between the primary and auxiliary-near sites, with a periodic asynchronous copy between either the primary (giving a star topology) or the auxiliary-near (giving a cascaded topology) and the auxiliary-far.
The operation of the 3 sites is co-ordinated by an external ‘orchestrator’ application that is provided and runs on a Red Hat system, most likely a virtual machine. Again, I will provide more information in a subsequent post.
Storage Class Memory Support

The FlashSystem 9100 and Storwize V7000 Gen3 hardware was released with NVMe drive support and quoted as being “SCM ready”. Now that SCM drives themselves are production ready, we have introduced support for the two leading technologies, from Intel and Samsung.
Given that SCM provides ultra-low latency, even compared to today’s SSDs and FCMs, EasyTier treats SCM as a new top tier. Given the cost delta, and the relatively low capacity per drive of SCM when compared to SSD and Flash, you can see we are back to where we were 10 years ago with Flash. That is, a small (<5%) investment in SCM can accelerate even an all-Flash array, meaning the next epoch in storage latency reduction is possible for a small investment in SCM drives. Those vendors that didn’t implement tiering, and have been telling you tiering isn’t needed with all-Flash arrays, may need to re-think their strategies!
At the same time, we have embedded some of the data you could previously extract from the system and run through the Storage Tier Advisory Tool (STAT) directly into the end user GUI. This allows you to view tier compositions, tier movements and the overall workload skew per pool – a great way of checking whether you have the correct division of capacity by tier, and a tool to help justify the investment in SCM or more Flash in your environment.
You can access these new reports once you have upgraded to 8.3.1.
A few notable enhancements follow (I look forward to seeing one of my long-term clients, who has been after the audit log features for years, when I’m next back in the UK!)
Audit Logging Enhancements
The auditing of all svctask and satask commands can now be exported to your syslog server, by adding the new “-audit on” flag when creating or modifying your syslog server definitions (mk/chsyslogserver).
Similarly, adding the “-login on” option will send login attempt information to the same syslog servers.
Traditionally, syslog information is sent via the stateless UDP protocol – a fire-and-forget method. You can now change this to use TCP and a specific port; these are again new options on the mksyslogserver and chsyslogserver commands.
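A minimal sketch of how these options fit together – the -audit and -login flags are as described above, but the protocol and port option names are my assumption, so verify them against the 8.3.1 command reference:

```shell
# Send command auditing and login attempts to an existing syslog server
chsyslogserver -audit on -login on syslog0

# Define a new syslog server using TCP on a specific port
# (-protocol and -port flag names assumed; check the CLI help)
mksyslogserver -ip 192.168.1.50 -protocol tcp -port 6514 -audit on -login on
```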
Finally, more from a support and debug perspective, the full audit information is also recorded locally in a file for IBM use, should support require it. This is in addition to the binary tamper-proof auditing the systems have always done – it just makes the information more human-readable for support purposes.
Volume Protection Enhancements

Volume protection stops any user from deleting a volume that has seen any I/O in the last X minutes, where X can be from 5 minutes to 24 hours. The -force flag does not change this behavior. The idea is to stop someone accidentally deleting a volume that is still in use.
Spectrum Virtualize has provided volume delete protection for some years now, but the default behavior has been for it to be disabled – primarily because it was a system-level setting, applied to all volumes, which had a tendency to break things like Spectrum Copy Data Management, which needs to be able to delete old or temporary snapshot volumes.
NOTE: From 8.3.1 the setting is still enabled and disabled at a system level, but is also now a pool attribute, and so you can have some pools with protection enabled, and others disabled.
NOTE: From 8.3.1, when you install and initialise a new system it will have volume protection enabled by default.
NOTE: Upgrading to 8.3.1 from an older level will not change the current system setting. That is, if you had enabled volume protection, all pools will now be enabled, and vice versa.
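As an illustration, assuming the flag names carry over from the existing system-level setting (the pool-level flag name in particular is my assumption – verify against the 8.3.1 command reference):

```shell
# System-wide: enable protection, blocking deletion of any volume
# that has done I/O in the last 60 minutes
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60

# Pool-level (new in 8.3.1): disable protection for just one pool,
# e.g. a pool holding temporary snapshot volumes
chmdiskgrp -vdiskprotectionenabled no Pool_Snapshots
```

This split is what makes the feature safe to leave on by default: production pools stay protected while automation tooling keeps its own pool unprotected.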
mkrcrelationship has been enhanced to check the target volume protection, i.e. it will fail if the target volume has volume protection enabled and has done I/O within the protection time period.
SCSI LUN ID across IO Groups
When creating a volume mapping across multiple access I/O groups (most commonly for volume migration or in HyperSwap configurations), the command would previously fail if it could not map the volume to the same SCSI LUN ID across all I/O groups.
The applicable commands that create host mappings now have a new flag -allowmismatchedscsiids to override the failure and allow the command to complete.
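Sketching this with the host-mapping command – the flag name comes from the release notes above, but its exact placement in the syntax is an assumption, so check the mkvdiskhostmap help:

```shell
# Map a volume to a host, accepting that the SCSI LUN ID may differ
# between access I/O groups rather than failing the command
mkvdiskhostmap -host myhost -allowmismatchedscsiids myvolume
```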
One final minor tweak when changing or creating access to a volume: the commands will now police the access list and ensure that the caching I/O group is always included in the list of access I/O groups.
Quite a lot there, which is why I didn’t want to cover all the new hardware and some of the features in depth here… subscribe for notifications, and look out for more details in the coming days.