So to round off the week, the final thing we announced last month was a refresh of the SVC nodes themselves.
SVC has been providing our clients with true heterogeneous storage virtualization in the form of an appliance for over 16 years now. Depending on how you count them, these are the 9th generation, and the 10th/11th physical node engines, we have released. And yes, we still have some clients that have non-disruptively upgraded their clusters from the original 4F2 hardware in 2003 all the way to today – the same ‘logical’ system but a whole new ‘physical’ system – something Andy and I refer to as the “Trigger’s Broom” principle…
While things have definitely moved on in the storage landscape, the key value propositions of SVC remain: the ability to manage a large number of heterogeneous storage controllers under one common umbrella interface and set of advanced functions, and of course the killer app, non-disruptive storage migration – moving data from one controller to another without impact to ongoing I/O.
The new SV2 and SA2 nodes can be swapped into a running cluster in exchange for older nodes, just as has always been possible. This makes SVC one of the few truly non-disruptive data migration tools available at the storage layer. Clearly, OS-level cloning and virtual machine storage movement can be achieved with some operating systems, but in large enterprise clients, where there are distinct server and storage teams, such projects can become complex and require careful project management and co-operation between all involved.
SV2 and SA2?
Why two node models, you may be asking? Well, this is similar to what we did back in around 2007 with an entry and high-end node structure (8G4 and 8A4 back then).
The SV2 is the natural follow-on to the current SV1 hardware. SV1 is still an extremely capable node engine, and we do not plan to withdraw the SV1 hardware during the lifetime of the new nodes – so in fact you have the choice of three different nodes.
When SV1 was introduced, we determined that the Real-time Compression functions required a fair amount of CPU horsepower, and (unlike most storage functions) the CPU clock speed actually had a close relationship to the metadata throughput capabilities. We were working on supporting many more cores in the boxes, but at initial GA the clock speed was king. For this reason we selected one of the high-end Intel Xeon SKUs – with a >3GHz clock, one of the fastest clock speeds we’d ever put in an SVC. This, however, comes at a price, literally: the jump in list price between the outgoing DH8 hardware and the incoming SV1 was substantial. The feedback from end users was clear, and so we have re-introduced a lower entry-cost node in the form of the SA2 this time round.
Hardware Spec Comparison
| | SV1 | SA2 | SV2 |
|---|---|---|---|
| CPU | 8 core 3.2GHz (x2) Intel Broadwell EP | 8 core 2.3GHz (x2) Intel Cascade Lake | 16 core 2.1GHz (x2) Intel Cascade Lake |
| Memory (min/max) | 64/256 GiB | 128/768 GiB | 128/768 GiB |
| Boot drives | 2x External | 2x Internal | 2x Internal |
| Battery | 2x External | 1x Internal | 1x Internal |
| Max Fibre Channel | 16x 16Gbit | 12x 32Gbit | 12x 32Gbit |
| Compression Assist | 2x optional cards | 1x Internal (equivalent to SV1) | 1x Internal (2.5x SV1) |
With each generation of Intel core design we typically see an uplift of around 15% in capability. Having gone from Broadwell EP to Cascade Lake (skipping Skylake), this means the SA2 is an approximate replacement for the SV1 in capability terms, with the SV2 being a significant leap, with double the core count.
The cache sizes have dramatically increased, to a maximum of 1.5TiB per I/O Group (two nodes at 768 GiB each). The increase in internal memory bandwidth is also substantial.
Compression / Data Reduction Capabilities
It’s worth commenting on the compression assist hardware. With the SV1 nodes you could add up to two cards per node. These were the first generation of hardware assist, based on a chip provided by Intel. The new nodes have the next generation chip, which itself comes in different SKUs – with the SV2 having a higher SKU (more throughput) than the SA2. For SA2 and SV2 these are integrated internal to the node hardware and do not take up any PCIe slots.
Roughly: 2x SV1 cards == SA2 == (SV2 / 2.5)
In other words the SA2 is equivalent to 2x SV1 cards, and the SV2 is equivalent to 5x SV1 cards – in terms of maximum bandwidth capability.
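The ratio above can be sanity-checked with a quick back-of-the-envelope calculation. Since the post gives no absolute throughput figures, this sketch just uses one SV1 accelerator card as the unit of measurement:

```python
# Relative compression-assist bandwidth, using one SV1 card as the unit.
# (Absolute throughput figures are not stated, so these are ratios only.)
SV1_CARD = 1.0

sv1_max = 2 * SV1_CARD   # SV1: up to two optional cards per node
sa2 = 2 * SV1_CARD       # SA2: one internal engine ~= a fully carded SV1
sv2 = 2.5 * sa2          # SV2: higher SKU, 2.5x the SA2 engine

print(sv1_max, sa2, sv2)  # -> 2.0 2.0 5.0 (in SV1-card units)
```

This confirms the stated equivalences: SA2 matches a two-card SV1, and SV2 comes out at five SV1 cards’ worth of maximum bandwidth.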
Software-wise, to drive the compression assist hardware, the new nodes only support Data Reduction Pools. That is, the older Real-time Compression based on the RACE algorithm is not supported on the SA2/SV2 hardware. This is one of the main reasons we have not withdrawn the SV1 hardware: for those of you wishing to continue using RTC volumes, SV1 is the answer.
NOTE: That being said, the future is Data Reduction Pools, and SV1 will likely be the last SVC platform to support RTC volumes, so you may want to think about a migration project over the next year or so to allow you to upgrade to whatever comes next!
NOTE: Thanks to Lee McEvoy for also pointing out that DRP is included in the base license costs, so once you move from RTC to DRP you can drop the next renewal of the RTC license.
32 Gbit Fibre Channel
We discussed supporting 32Gbit on the SV1 hardware and decided against it. SV1 has seven PCIe slots; however, these are all laid out as 8-lane Gen3 slots. That is fine for 16Gbit, where we can have full bi-directional bandwidth for up to 4 ports, but for 32Gbit it would have meant only supporting a 2-port card with the same overall bandwidth. More ports are better, and so SV1 will not be supporting 32Gbit Fibre Channel.
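The slot-bandwidth reasoning is easy to check with rough numbers. A minimal sketch, assuming ~985 MB/s of usable bandwidth per PCIe Gen3 lane and the nominal Fibre Channel data rates of 1600 MB/s (16GFC) and 3200 MB/s (32GFC) per port, per direction – these figures come from the respective specs, not from the post:

```python
# Can an 8-lane PCIe Gen3 slot feed an N-port Fibre Channel card at line rate?
GEN3_LANE_MBPS = 985                 # ~usable MB/s per Gen3 lane, per direction
slot_mbps = 8 * GEN3_LANE_MBPS       # x8 slot: ~7880 MB/s each direction

fc16_4port = 4 * 1600                # 6400 MB/s  -> fits within the slot
fc32_4port = 4 * 3200                # 12800 MB/s -> exceeds the slot
fc32_2port = 2 * 3200                # 6400 MB/s  -> fits, but only 2 ports

print(slot_mbps, fc16_4port, fc32_4port, fc32_2port)
```

So a 4-port 16Gbit card can run all ports flat out in an x8 Gen3 slot, while a 32Gbit card in the same slot would be limited to 2 ports for the same aggregate bandwidth – exactly the trade-off described above.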
If you require 32Gbit, then SA2/SV2 are your answer. However, as the new nodes only have 3 PCIe HIC slots, the total number of ports each node can support is reduced to 12. (The SV1 can support up to 16 ports.)
This gives us another key reason to continue to sell the SV1 hardware: if you have already made use of all 16 ports, dedicating some of them to inter-node traffic, replication and so on, we did not want to force you back to only 12.