ORIGINALLY POSTED 7th May 2014
46,980 views on developerworks
I guess a few folks will comment that I’m raising my head again at announcement time, and true, but that’s part of the problem of working on new and exciting stuff: I can’t talk about it until days like today, when we have announced two major new engines in the SVC and Storwize family. I even had a mug shot taken with the new boxes – at short notice, in “typical” lab dress code, unshaven and needing a haircut!
SVC DH8 Node
SVC has been transforming our clients’ infrastructures for almost 11 years now, and we’ve taken the next step forward today with the announcement of our next-generation DH8 SVC node engines. This new node brings the biggest hardware change SVC has seen in those 11 years. One of the key changes is the move from the 1U “pizza box” server base to a 2U base. At the same time, however, we have removed the need for the 1U rack-mount UPS that came with each node and replaced it with two internal, dual-redundant batteries – so there is no increase in rack space per node.
It was always the intention that SVC would have an internal battery; we even designed one 12-plus years ago, but after a few of the prototypes exhibited a tendency to spontaneously combust, we looked for an alternative solution. Since the solution at the time required an entire rack U of battery, it’s not surprising that the ones we’d designed at the size of a 3.5″ HDD were pushing the technology that little bit too far. In the intervening years, however, battery technology has come on in leaps and bounds, and we can now cram enough cells into the space of four 2.5″ drive bays. These batteries act as a third input supply to the node: if both redundant power supplies fail and the box signals EPOW, the batteries kick in to hold the server up long enough for the needed memory dump to disk. Most customers I’ve discussed the removal of the UPS with have almost jumped for joy; one even told me that although we are claiming the same rack U space per node, it will actually allow them to reclaim more rack space, as they always kept nodes to a minimum to make sure the SVC-to-UPS cabling couldn’t get tangled up or confused.
The other key thing the 2U server gives us is real estate – not only on the motherboard, but more importantly in PCIe slots, 6 in total. The server uses the latest-generation dual-socket 8-core Xeon CPU, based on the Ivy Bridge core design, with PCIe gen3 throughout. At present you can have up to 3 SAN or network I/O cards, each providing 4 FC ports or 4 FCoE/IP ports. In addition, up to two Intel Quick Assist based accelerator cards can be installed, which we use to offload the Real Time Compression workload. Finally for now, a brand new high-throughput 12Gbit SAS disk interface card can be added to directly attach up to 48 SSDs per node pair (instead of the 8 you used to be able to attach). These SSDs are now housed in external enclosures, based on the new Storwize 12G SAS expansion enclosure, so every SSD is dual-ported and attached to both nodes, which allows RAID-5 or RAID-6 to be used on SVC nodes too. Nodes come with 32GB cache, plus an optional 32GB upgrade for use with RTC, with plenty of room for future expansion.
I’ll cover many more of the details in the coming days – and be sure to come and say hi, as I’m presenting this in full detail at IBM Edge, session number sTU02, repeated on Tuesday and Wednesday.
Final comment for now on SVC, with my performance hat on – the DH8 provides around 2x the IOPS and up to 3x the GB/s. Watch out for some new SPC benchmarks to prove this soon.
Not content with one major new engine, how about a second? The Storwize V7000 was released in late 2010 and has been a huge success for IBM. It was the first organic modular storage controller – of course running the same enterprise SVC software stack – and has spawned the V3700 and V5000 products in the last couple of years. One thing today’s V7000 was limited by was its non-customer-upgradable control canisters. We’ve fixed that, and today we also announce the next-generation Storwize V7000 engines. These follow the same ethos as the new SVC engines, where flexibility, scalability and performance are key.
(To differentiate I will refer to the new V7000 engine as Gen2)
Compared to Gen1, the Gen2 control enclosure can attach more than 2x the disks, with up to 504 SFF drives attached to each control enclosure – that’s 21 enclosures full of SFF drives, or an entire 42U rack’s worth. The system also clusters up to 4 control enclosures, and using LFF drives that means up to 84 enclosures and 1056 drives – not to mention just shy of 4PB of raw capacity in a single system!
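If you want to sanity-check that headline capacity figure yourself, here’s the back-of-the-envelope arithmetic (a sketch only – the 4TB LFF NL-SAS drive size is my assumption for illustration, not a statement from the announcement):

```python
# Rough raw-capacity check for a 4-way clustered V7000 Gen2.
# Assumption: 4 TB LFF NL-SAS drives (illustrative, not from the announcement).
DRIVES = 1056        # max drives quoted for a clustered system
DRIVE_TB = 4         # decimal terabytes per drive (assumed)

raw_tb = DRIVES * DRIVE_TB                # 4224 decimal TB
raw_pib = raw_tb * 10**12 / 2**50         # the same capacity in binary PiB

print(f"{raw_tb} TB raw, ~{raw_pib:.2f} PiB")  # ~3.75 PiB: "just shy of 4PB"
```

Counted in decimal terabytes it is a little over 4PB; counted in binary units it lands just under, which matches the “just shy of 4PB” wording.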
The scalability continues with up to 16 FC ports, 8 FCoE/10G ports and 6 1G ports per control enclosure. The base cache is also 32GB, with an additional 32GB available for RTC. Speaking of RTC, those same Intel Quick Assist cards are available here too, but only one as an option per canister – every control canister already has one built into the motherboard! The canisters are full height within the 2U enclosure, so we could fit the 3 half-height PCIe3 slots, as well as the same generation of Intel Xeon 8-core CPU that can be found in the new SVC node. All upgrade options – I/O cards, compression assist cards and memory – can be customer-upgraded without the need for a canister replacement.
Funnily enough, as with the new SVC nodes, V7000 Gen2 delivers approximately twice the performance and more than 2.5x the GB/s, pushing up to 40GB/s of throughput from a clustered system – direct from disk, not cache hits! The IOPS are driven mainly by the 2x the disks that can be attached, but there’s headroom left over.
SSDs are pushing ever-higher bandwidths, especially at 12G SAS – oh yeah, all the expansion enclosures and drive slots are 12G capable – and the new 12G SSDs we support deliver up to 800MB/s per drive. So to maximise SSD performance, the control enclosure’s internal drive slots have the same bandwidth and IOPS capability as the two expansion ports – essentially we’ve optimised the control enclosure for SSD usage – so the general recommendation will be to place SSDs in the control enclosure.
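To see why placement matters, consider how quickly a handful of such SSDs can fill a single SAS expansion chain. A quick sketch, under my own assumptions (a 4-lane 12Gb/s SAS port, with 8b/10b encoding giving roughly 1200MB/s of payload per lane – neither figure is from the announcement):

```python
# How many 800 MB/s SSDs saturate one SAS expansion chain?
# Assumptions (illustrative): 4-lane 12Gb/s SAS port, 8b/10b encoding,
# so ~1200 MB/s of usable payload bandwidth per lane.
LANE_MB_S = 1200
LANES = 4
SSD_MB_S = 800

chain_mb_s = LANE_MB_S * LANES            # ~4800 MB/s per 4-lane port
ssds_to_saturate = chain_mb_s // SSD_MB_S # 6 SSDs can fill one chain

print(chain_mb_s, ssds_to_saturate)
```

With only a few fast SSDs able to saturate an expansion chain, keeping them on the control enclosure’s full-bandwidth internal slots is the sensible default.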
Other changes to the system design include moving the batteries into the control canisters themselves (instead of the power supplies, as in Gen1) and a separate fan assembly (instead of being inside the power supply). Guess what – I’ll be covering all this at IBM Edge too, in session number sTU01.
We have been busy over the last year or so, all while still delivering the software enhancements we talked about back in June and November last year!
Speaking of software we’ve been busy there too… version 7.3 will be available for existing SVC and Storwize products at GA in June.
Software Version 7.3.0
Easy Tier 3 now includes “any 2 or any 3 tier” support, so you can mix SSD, Enterprise and Nearline storage in your hybrid pool, with cold data being demoted to Nearline and hot data moving up to SSD as it always did. You can also mix SSD and one class of HDD, or Enterprise and Nearline HDD, and Easy Tier will take care of the data movement as normal. No change or complexity is added; it’s still “Easy” and is simply turned on. Note that since 7.2 we’ve always enabled logging mode, so by default your system is logging the hot/cold data – but with ET3 you can now upload this data and get much more useful information: what benefit enabling data movement would give you, how much SSD/HDD you need and, most importantly, the skew of your data, so you can size and model any new or replacement systems accordingly, as well as tweaking your existing system with ease. Again, I’ll be covering this in detail at IBM Edge the week of the 19th, in my session number sDS02.
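The basic idea of the three-tier placement can be pictured as ranking extents by heat and assigning the hottest to the fastest tier. This is a toy sketch only – the real Easy Tier algorithm works from I/O statistics gathered over time, with migration rate limits and per-tier performance models; the function name and numbers here are made up:

```python
# Toy three-tier placement: hottest extents to SSD, coldest to Nearline.
# Illustrative only -- not the real Easy Tier algorithm.

def place_extents(extent_heat, ssd_slots, enterprise_slots):
    """extent_heat: {extent_id: access_count}; returns {extent_id: tier}."""
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    placement = {}
    for i, ext in enumerate(ranked):
        if i < ssd_slots:
            placement[ext] = "ssd"            # hottest data promoted
        elif i < ssd_slots + enterprise_slots:
            placement[ext] = "enterprise"     # warm data in the middle tier
        else:
            placement[ext] = "nearline"       # cold data demoted
    return placement

heat = {"e1": 900, "e2": 40, "e3": 500, "e4": 3}
print(place_extents(heat, ssd_slots=1, enterprise_slots=2))
# e1 -> ssd; e3, e2 -> enterprise; e4 -> nearline
```

The “any 2 tier” configurations are just this picture with one of the three tiers absent.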
Storage Pool Balancing provides an “always on” feature that makes sure all storage within a given pool is balanced – that is, every managed disk in the pool is giving you the same performance. This feature is always enabled and is included in the base product licenses. The tools we had available to balance pools before were based on a simple count of extents and capacity. Now the same Easy Tier algorithms are used to monitor and balance the pool from a performance perspective. This can give dramatic improvements: we’ve seen test data where a balanced pool provides varying degrees of improvement, some as much as 3x, with 1/3 the latency, just by letting the function learn and move the extents.
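The difference from extent counting is that the balancer looks at the load each managed disk is carrying. A toy sketch of one performance-based rebalance step, under made-up names and loads (the real feature rate-limits migrations and models what each device can actually deliver):

```python
# Toy performance-based rebalance: move one hot extent from the busiest
# managed disk to the least busy one, rather than evening out extent counts.
# Illustrative only.

def rebalance_step(mdisk_load):
    """mdisk_load: {mdisk: [per-extent load, ...]}; mutates it in place."""
    busiest = max(mdisk_load, key=lambda m: sum(mdisk_load[m]))
    idlest = min(mdisk_load, key=lambda m: sum(mdisk_load[m]))
    if busiest == idlest:
        return None                       # already balanced
    ext = max(mdisk_load[busiest])        # hottest extent on the busiest mdisk
    mdisk_load[busiest].remove(ext)
    mdisk_load[idlest].append(ext)
    return (busiest, idlest, ext)

pool = {"mdisk0": [50, 40, 30], "mdisk1": [5, 5]}
print(rebalance_step(pool))               # ('mdisk0', 'mdisk1', 50)
```

Note that a pool can be perfectly balanced by extent count and still wildly unbalanced by load – which is exactly the case the new approach fixes.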
The final major feature change in 7.3 is a new cache design, which streamlines cache processing and enables dramatic latency improvements on Storwize platforms when using some of the advanced functions that, by their very nature, generate more backend I/O. This is worthy of a series of posts in itself, so watch this space.
All in all, I hope you will excuse my lack of posting on here, but as you can clearly see, we haven’t been twiddling our thumbs or sitting still. None of this would be possible without the amazing core SVC and Storwize development team I work with in the IBM Hursley and Manchester labs in the UK, and around the world in IBM China, India, Hungary, Germany and of course the US.