NVMe Flash-Core-Module value for all. Introducing the next gen IBM Storwize V5000 and Spectrum Virtualize 8.3.0

ORIGINALLY POSTED 3rd April 2019

It’s almost scary that we are already 1/4 of the way through 2019… where does the time go. I’ve been even more remiss than usual on posting… I guess you’ve come to expect that now, but I have excuses, with multiple trips in Australia, Singapore, Malaysia and Switzerland in the last few weeks, as well as a much needed break over the summer months to recharge.

Over the next few weeks the travel doesn’t relent, I am over in the UK for the Virtualize User Group sessions running next week, the TechU in Atlanta at the end of April, an internal storage education week in Washington in May followed by the Nordic User Groups in Denmark, Sweden and Finland. If you can attend any of these events I can highly recommend them. I have provided some registration links/contact information here:

Bookmark this page – all the upcoming User Groups around the world.

IBM TechU Atlanta – 29th April to 3rd May

Introducing the Storwize V5000E and V5100

Today we have announced the next generation Storwize V5000 platforms, which will most likely become known colloquially as Gen3. The Gen3 range contains 8 new MTMs (Machine Type Models) based on 3 hardware models: the V5010E, V5030E and V5100. The V5100 comes in two variants, a hybrid (HDD and Flash) model and an all-flash (AFA) V5100F. Each of these 4 types is available with a 1 year or 3 year warranty.

The V5000E models are based on the Gen2 hardware, with various enhancements, including more memory options on the V5010E.

The V5100 models are all new hardware and bring the same NVMe Flash Core Modules that are available on the V7000 and FlashSystem 9100 products, completing the transition of the Storwize family to all-NVMe arrays. If you haven’t seen or heard about our unique FCM technology, check back to my posts from last year discussing the unmatched capacity, efficiency and performance that only IBM offers, with inbuilt hardware compression and encryption capability inside each NVMe FCM. The products also support industry standard NVMe Flash drives in standard capacity increments.

The all-flash (F) variants of the V5000 can also attach SAS expansions to extend capacity using SAS-based Flash drives, allowing expansion up to 1520 drives. The E variants also allow attachment of SAS 2.5” and 3.5” HDD drives, with the V5010E expandable to 392 drives and the others up to 1520.

Inbuilt host attachment comes in the form of 10GbE ports for iSCSI workloads, with optional 16Gb Fibre Channel (SCSI or FC-NVMe) as well as additional 10GbE or 25GbE iSCSI. The V5100 models can also use the iSER protocol over the 25GbE ports for clustering capability, and a statement of direction has been issued for NVMe over Fabrics (Ethernet) support on all 25GbE port types across the whole Spectrum Virtualize family of products.

Cache memory-wise, the V5000E products are expandable up to 64GB per controller (IO Group) and the V5100 can support up to 576GB per controller.

Check out Lloyd’s post over on Tony’s blog for further information, and of course the full spec sheets are available on the product pages.

Tony also covers all the other Storage announcements from today.

Spectrum Virtualize Software V8.3.0

The new V5000 hardware comes supplied with V8.3 software, which will also be available in June across the rest of the Virtualize family. So what does that bring…

IP Quorum Preferred Site

When running a clustered solution, particularly an HA solution, we support either disk based quorum devices (at a 3rd site) or an application that runs as an IP attached quorum device. This has been available for a few years now, reducing the need for Fibre Channel attached storage at a 3rd, or tie-break, site. Last year we enhanced the application to include all of the quorum device details – that is, the T3 recovery data. With V8.3 we also add the much requested ‘preferred site’ capability.
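
For anyone new to it, the IP quorum application is generated on the system itself and then run on a server at the third site. Here is a minimal sketch of that flow, assuming the usual mkquorumapp output location and a Java runtime on the quorum host – treat the paths as placeholders and check the command reference for your code level:

# On the system: generate the IP quorum application (written to the config node's /dumps directory)
mkquorumapp

# Copy ip_quorum.jar to the third-site server (paths are placeholders)
scp superuser@<cluster_ip>:/dumps/ip_quorum.jar /opt/ipquorum/

# On the third-site server: run the quorum application
java -jar /opt/ipquorum/ip_quorum.jar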

When running an HA environment, we find clients often have a preferred site, that is, they run more in an active-passive mode, where applications primarily run at one site, with the second site being used in the event of application, storage or site failures. In this case, when an even split of the cluster occurs (i.e. the loss of inter-site link communication) but both sites are still ‘online’, it is preferable for what was the active site to continue and for the passive site to be halted until communication is restored. This is now possible with the V8.3 software.

The preferred site can be configured via a chsystem command, for example

chsystem -quorummode preferred -quorumsite 2

In such a configuration, the nodes at site2 will be given preference when a tie-break is requested: essentially they will have a 3 second head start over the other site in requesting the tie-break lock on the IP quorum.
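
This assumes the nodes have already been assigned to their sites. A minimal sketch of the surrounding steps is below – the -site parameter on chnodecanister is shown from memory, so do check the command reference for your platform; lsquorum then lists the quorum devices, including the IP quorum application, and their status:

# Assign the node canisters to their sites (parameter shown as an assumption)
chnodecanister -site site1 node1
chnodecanister -site site2 node2

# After setting the preferred quorum mode (see above), list the quorum devices and their state
lsquorum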

Winner Site

One further enhancement to general clustering support is the addition of the concept of a ‘winner site’. This can be used when there is no configured 3rd site quorum device. Again this is simple to configure:

chsystem -quorummode winner -quorumsite 2

Here, if you have two controllers and define site2 as the ‘winner’, then in a split, where the 3rd site quorum would usually be contacted, the nodes defined as site2 will continue. By default the site with the lowest node id would have continued, which may not always be what you wanted.

Enhanced FC-NVMe Support with ANA

Last year we introduced NVMe over Fibre Channel support on the existing 16Gb HBAs across the Virtualize family. There were some restrictions, such as a host HBA port could only be configured as either a SCSI port or an NVMe port. With V8.3 a single host initiator port can now run both SCSI and NVMe connections to the storage.

ANA – or Asymmetric Namespace Access – is a part of the FC-NVMe protocol standard that has recently been ratified, and essentially provides the same ‘preferred path’ nature that ALUA did for SCSI. This is now supported and has allowed us to qualify and support stretched cluster (HA) solutions using FC-NVMe.
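
On the storage side, SCSI and NVMe initiators are still defined as separate host objects, so a server talking both protocols from the same physical port would typically have two host definitions. A minimal sketch, assuming the mkhost -protocol and -nqn parameters behave as on recent code levels; the WWPN and NQN below are placeholders:

# Host object for the SCSI personality of the initiator (placeholder WWPN)
mkhost -name appserver1_scsi -fcwwpn 2100000E1E30A0B1 -protocol scsi

# Host object for the NVMe personality of the same server (placeholder NQN)
mkhost -name appserver1_nvme -nqn nqn.2014-08.com.example:appserver1 -protocol nvme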

HyperSwap Local Read

Until now, if a host accessed a volume through the secondary site/volume then I/O would be forwarded to the primary site for completion. This could add extra latency to the I/O with an additional round trip across the inter-site link, particularly for read I/O. (With writes, data has to be mirrored to both sites anyway.)

With V8.3, when the local volume copy is up-to-date – i.e. in-sync, then even if the volume copy resides at the secondary site, and the host has requested the read via the secondary site, the read request will be processed locally by the secondary site. This will help to reduce latency in situations where either an application has moved to the secondary site or your application is a true clustered HA capable application.

Host Status Improvements

‘Degraded’ – just what does that mean? Well, often we have to rely on the behavior of host HBA ports to understand if everything is healthy. In some cases, host ports will logout after some period of inactivity, which we then see as a path no longer logged in – and may report the host status as degraded.

While this is ‘complete’ and ‘precise’, it may actually be perfectly normal, and can cause confusion. With V8.3 we have added a new status_policy setting. This can either be set to the old “complete” reporting style, which behaves as before, reporting the current ‘degraded’ state when ports logout, or to the new policy “redundant”. In this mode, as long as a sufficient number of host ports are logged in to a sufficient number of node ports, the status will be ‘online’. Only if there is a loss in redundancy, i.e. a potential single point of failure is spotted in terms of host port paths, will the status be reported as ‘degraded’.

NOTE: Upon upgrade to the new V8.3 code, the policy will be set to the new “redundant” setting – i.e. the new behavior – something you need to be aware of.
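
If you prefer to keep the original reporting style after upgrading, the policy can be changed back per host. The parameter name below is my assumption, based on the -statussite syntax shown further down, so verify it against the V8.3 command reference:

# Revert a host to the original 'complete' status reporting (parameter name assumed)
chhost -statuspolicy complete <host_id/name>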

A further new policy has been added: status_site. The two settings here can be “all” or “local”. If set to “local”, then only the nodes that are defined to be in the same site as the host will be used to determine the host status. This policy is only used when the system topology is set to ‘stretched’ or ‘hyperswap’.

NOTE: Upon upgrade to the new V8.3 code, the policy will be set to “all” – so the behavior is the same as before the upgrade.

To make use of the new “local” setting you must run the following command for all hosts you wish to change:

chhost -statussite local <host_id/name>
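
To check what a host is currently set to, the detailed host view shows its status alongside the policy settings – a minimal sketch, assuming the new fields are exposed in the lshost detailed output at this code level:

# Show the detailed view for one host, including its status and the new policy settings
lshost <host_id/name>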

The final change in this area that you may spot is in the lsfabric output, where you can see a ‘blocked’ status. This indicates that the given path is masked out – that is, the storage port is not masked to allow host access, or it is blocked by virtue of the port’s fctargetportmode.
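
If you want to see which logins are affected for a particular host, lsfabric can be filtered per host – a minimal sketch, assuming the -host filter behaves as on earlier code levels:

# List the fabric logins for a single host, including any paths in the new 'blocked' state
lsfabric -host <host_id/name>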

Spectrum Virtualize for Public Cloud – AWS Support

This has already been a long post, I know – it makes up for a few months of no posts – but rather than double the length here, let’s just note that we can now run SVC software on Amazon Cloud against their local storage, and Chris will be along soon with a detailed post on Tony’s blog. [Link will be live in the next day or so]

Let me know what you think about the updates – hopefully there is something for everyone here, and I hope to see you at one of the upcoming events soon.

One response to “NVMe Flash-Core-Module value for all. Introducing the next gen IBM Storwize V5000 and Spectrum Virtualize 8.3.0”

  1. Hello Barry

    Could you consider raising the FlashCopy bitmap memory limit (2GB)? We lost a job because of that. The customer wants 7 days of retention on an hourly basis, across 32 x 32TB volumes, so the FlashCopy memory isn’t enough for that. They also don’t want to use copy data management – they want all snapshots on the same storage system, because they want high speed instant recovery.

    Memory was low before, but now the memory limits are huge – for example 576 GB on the V5100 and 1.5TB on the FlashSystem.

    Thank you.

