ORIGINALLY POSTED 6th November 2007
7,872 views on developerWorks
With the GA of IBM SAN Volume Controller (SVC) v4.2.1 only a couple of weeks away, it's been the usual last-minute rush to get the final few code changes benchmarked on my performance stand. Luckily the set of scripts and tools I inherited, and have modified over the years, allows 24×7 benchmarking, including code upgrades and downgrades, so today was spent analysing the output of the weekend's runs. However, a few weeks back I promised a deeper dive into the details of the feature enhancements in v4.2.1, as this release covers some very useful FlashCopy and usability enhancements, along with a couple of performance-related changes to the cache algorithms that have proved very fruitful when measuring real-life workloads across varying tiers of storage.
FlashCopy and Replication Enhancements
The bulk of the work in this release has revolved around enhancements to the FlashCopy feature set. While basic FlashCopy functions have been available since the initial v1.1.0 (June 2003) release of SVC, v4.2.0 (June 2007) added multiple targets (up to 16) for a single source virtual disk. This was a prerequisite for both the Incremental and Cascaded FlashCopy features being introduced in v4.2.1. These features are fairly self-explanatory, but I've included some pictures here to help discuss how they work and how they can be used.
Incremental FlashCopy
With a vanilla FlashCopy mapping, in order to create an independent copy on the target virtual disk, all data present on the source virtual disk has to be copied to the target virtual disk (using the background copy function). Subsequent triggering of the FlashCopy mapping meant all the data had to be copied again in order to create an independent copy. With an Incremental FlashCopy mapping, any subsequent triggering of the mapping only has to copy the data that has changed on the source or the target since the previous copy completed.
This enhancement is designed to allow point-in-time online backups to complete much more quickly, thus :
- The impact of using FlashCopy is reduced.
- The time required to recreate an independent image can be substantially reduced.
- More frequent backups become possible, improving RPO – for example, nightly incremental backups.
- More frequent backups could be used as a form of "near-CDP" – for example every hour, assuming the back-end storage can cope with the additional workload generated during normal working hours.
The quantity of data that has to be copied also depends on the 'grain' size. See Configurable FlashCopy Grain Size for more details.
The detailed stuff :
- Up to 16 incremental and non-incremental targets can exist for the same source.
- Consistency Groups can contain both incremental and non-incremental mappings.
- Specified via a new -incremental option on the svctask mkfcmap command and via the GUI Create Mapping panel.
- Mapping views now contain a 'difference' attribute that denotes the proportion of data that has changed since the last copy was taken.
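To make the mechanism concrete, here is a toy model of the 'difference' tracking described above. This is purely my own illustrative sketch – the class name, the use of a Python set as the bitmap, and the fixed 256KB grain are simplifying assumptions, not the actual SVC implementation:

```python
GRAIN = 256 * 1024  # bytes per grain (the pre-4.2.1 fixed size; an assumption for this sketch)

class IncrementalMapping:
    """Toy model of an incremental FlashCopy mapping's difference tracking."""

    def __init__(self, num_grains):
        self.num_grains = num_grains
        self.difference = set()  # grains changed since the last completed copy

    def write(self, grain_index):
        # A host write to the source (or target) marks that grain as different.
        self.difference.add(grain_index)

    def trigger(self):
        # Re-triggering copies only the changed grains, then clears the record.
        bytes_to_copy = len(self.difference) * GRAIN
        self.difference.clear()
        return bytes_to_copy

# A 16GiB vdisk holds 65,536 grains; a non-incremental mapping would copy
# all of it on every trigger, but here only the changed grains move.
m = IncrementalMapping(num_grains=16 * 1024**3 // GRAIN)
for g in (0, 1, 4000):        # three grains written since the last copy
    m.write(g)
print(m.trigger())            # 786432 bytes (3 grains), rather than 16GiB
```

The point of the model is the ratio: three dirtied grains cost three grain copies, regardless of how big the vdisk is.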
Configurable FlashCopy Grain Size
FlashCopy operates on units known as grains. Until now, grains were a fixed 256KB in size. When a write is destaged from the cache to an as-yet un-split grain (one not yet copied to the target), the grain is read, written to the target, and the write data is merged into the grain on the source. At this point the grain is classed as 'split' and the bitmap controlling the mapping is updated accordingly. SVC v4.2.1 now allows you to specify a 64KB grain size for each mapping definition.
The grain size affects the quantity of data that needs copying – the smaller the grain size, the less data there may be to copy – further reducing the potential time taken to complete an Incremental copy.
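A rough illustration of why (my own sketch, not SVC code): each write dirties the whole grain that contains it, so small scattered writes cost less to recopy with 64KB grains than with 256KB grains. The 4KB write size and offsets below are arbitrary assumptions for the example:

```python
def bytes_to_copy(write_offsets, grain_size):
    """Each small write at the given byte offsets dirties one whole grain."""
    dirty_grains = {offset // grain_size for offset in write_offsets}
    return len(dirty_grains) * grain_size

# 100 scattered 4KB writes, one every 10MiB across a 1GiB vdisk
offsets = [i * 10 * 1024 * 1024 for i in range(100)]

print(bytes_to_copy(offsets, 256 * 1024) // 1024)  # 25600 KB with 256KB grains
print(bytes_to_copy(offsets, 64 * 1024) // 1024)   # 6400 KB with 64KB grains
```

With writes this scattered, every dirtied grain is recopied in full, so the copy cost scales directly with grain size; the flip side is that smaller grains need more bitmap memory per TB, as the table in the next section shows.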
The detailed stuff :
- This smaller grain size is not restricted to incremental mappings.
- The “default” setting for the grain size can vary depending on the type of mapping being created. If a mapping is being created which will be added to a tree of cascaded and/or multiple target mappings, the mapping will use the grain size used by the other mappings in the tree.
- The cascading of mappings makes it possible to join two separate trees of mappings together. If the two trees do not use the same grain size, the mapping that would join them cannot be created.
User Configurable Copy Services Space
Prior to v4.2.1, SVC limited the FlashCopy capacity to 40TB of source data and the total Replication (Metro and Global Mirror, source and destination) capacity to 40TB, per I/O group. These limits have been increased to cater for increases in disk capacity and cluster addressability, and to give users the flexibility to dynamically configure the amount of storage supported by each type of copy service to suit their needs. This enables a greater amount of customer data to participate in SVC copy service activities.
The detailed stuff :
- SVC v4.2.1 provides the capability to configure up to 256TB of copy services capacity per I/O Group.
- Copy service space is allocated between Replication (Metro/Global Mirror) and FlashCopy in a user-defined ratio.
- Users can configure up to 256TB for FlashCopy or Replication.
- Allocation is dynamically configurable, allowing users to redirect portions of the SVC cache for use as copy services space.
- The default is as in previous releases; that is, up to 40TB each for FlashCopy and Replication, with no reduction in cache space.
- The table below shows how much copy service capacity is provided per 1MB of cache space :

| Copy Service               | Grain size | 1MB of memory for the specified I/O group gives…   |
|----------------------------|------------|----------------------------------------------------|
| Metro Mirror/Global Mirror | 256KB      | 2TB of total MM/GM vdisk capacity                  |
| FlashCopy                  | 256KB      | 2TB of total source FlashCopy vdisk capacity       |
| FlashCopy                  | 64KB       | 512GB of total source FlashCopy vdisk capacity     |
| Incremental FlashCopy      | 256KB      | 1TB of total source FlashCopy vdisk capacity       |
| Incremental FlashCopy      | 64KB       | 256GB of total source FlashCopy vdisk capacity     |
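As a sanity check on those ratios, they are consistent with a simple model of one bitmap bit per grain, and two bits per grain for incremental mappings (which must track both copy progress and differences). That model is my own back-of-the-envelope assumption, not a statement of the actual internal layout, but the arithmetic reproduces every row of the table:

```python
def capacity_per_mib(grain_kb, bits_per_grain):
    """Vdisk capacity (bytes) addressable by 1MiB of copy-services memory,
    assuming bits_per_grain bitmap bits are needed for each grain."""
    bits = 1024 * 1024 * 8              # bits available in 1MiB of memory
    grains = bits // bits_per_grain     # grains those bits can track
    return grains * grain_kb * 1024     # bytes of vdisk those grains cover

TB, GB = 1024**4, 1024**3

assert capacity_per_mib(256, 1) == 2 * TB     # MM/GM and plain FlashCopy, 256KB grain
assert capacity_per_mib(64, 1) == 512 * GB    # plain FlashCopy, 64KB grain
assert capacity_per_mib(256, 2) == 1 * TB     # incremental FlashCopy, 256KB grain
assert capacity_per_mib(64, 2) == 256 * GB    # incremental FlashCopy, 64KB grain
```

It also makes the trade-off from the grain-size section explicit: dropping from 256KB to 64KB grains quadruples the bitmap memory needed per TB of source capacity.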
Over the next day or two I will continue the discussion, focusing on the cache-related enhancements. Comments and questions welcome, as always.