Configuring IBM Storwize and SVC for Optimal Performance – Part 2

ORIGINALLY POSTED 17th August 2012

TOP 5 POST – 77,440 views on developerWorks

Thanks to everyone who responded to my Part 1 post. Judging by the number of you that have either spoken to me in person or emailed, I’m amazed at how many people read what I have to say! I’m glad to be of some use 🙂

Anyway, part one seems to have thrown up quite a discussion. At the end of the day, RAID and of course the cache layers can do their best to make the I/O that hits the disks as optimal as possible. However, in most cases the ultimate performance is going to be dictated by the disks themselves.

I’m glad to say that FC-AL (Fibre Channel Arbitrated Loop) disks are pretty much dead. If any vendor tries to sell you a disk system using them, you need to think about what they are after – your money, or your money. I’ve discussed disk attach technology at length with many customers over the last few years, and I find it almost amusing, if it weren’t for the “I told you so” feeling. Back in the late 90s when I first started at IBM, we were working on a technology called SSA – Serial Storage Architecture. SSA was a serial disk technology with the concept of spatial reuse. You have to realise this was in the days before SANs, when direct-attached disks were king. SSA RAID gave you the technology to attach 2 to 8 servers to the same set of disks. It was a loop technology, but as soon as an I/O had been sent from one link to the next (a link being a connection between server and disk, or between one disk and the next in the loop), you could send another. No need for loop arbitration. Spatial reuse.

It may seem like I’m reminiscing here, but the reality is that we’ve gone through a few years of FC-AL, where a single disk has to arbitrate for (get control of) the loop to perform I/O. What happens if that disk gets control, then goes AWOL? How long do you wait? How do you recover? I’ve discussed in the past how we spent many hours working out how we could make a disk system using FC-AL that was as reliable as what we already had with SSA.

Some ten years later, here we are with SAS (Serial Attached SCSI) – back to a serial disk technology – and that’s great. SAS is a tree topology, with inbuilt spatial reuse. Best of all, the disks themselves are leaves on the tree. So the controller is the trunk, expansion enclosures and cabling are branches, while the unreliable disk entities are the leaves on those branches. No longer can a single disk take down the whole network; you simply lose that leaf.

So how does this affect you, you ask? Let’s look at where we are now with SAS and Nearline SAS.

Nearline SAS

Nearline as a term was originally used to describe tape, or in its true name “near-online”. The term has been abused recently to describe the slower spinning, higher capacity HDD (hard-disk-drive) technologies. You can think of NL-SAS as essentially SATA drives, but dual ported, running server grade firmware and connecting to a SAS infrastructure.

Consumer-grade SATA drives are the lowest grade of commodity storage: in some cases still running at 5.4K RPM, but high capacity (3TB+). However, slower rotation means higher access latency. When it comes to a disk head reaching the data you need, two things matter:

  1. Rotational latency. The time taken for the platter to rotate to the sector needed.
  2. Seek latency. The time taken for the head to seek to the correct track across the radius of the platter.

The drive’s rotational speed directly contributes to the time taken (latency) to reach the data.
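To put rough numbers on the two latency components above: average rotational latency is half a revolution, i.e. (60 / RPM) / 2. The seek times below are typical published averages for each drive class – my assumptions, not figures from this post.

```python
# Average rotational latency is half a platter revolution: (60 / RPM) / 2 seconds.
# Seek times are assumed class-typical averages, not vendor-confirmed figures.

def avg_rotational_latency_ms(rpm):
    """Milliseconds for half a platter revolution at the given spindle speed."""
    return (60.0 / rpm) / 2.0 * 1000.0

for rpm, seek_ms in [(5400, 12.0), (7200, 8.5), (10000, 4.5), (15000, 3.5)]:
    rot = avg_rotational_latency_ms(rpm)
    print(f"{rpm:>6} RPM: {rot:.2f} ms rotational + {seek_ms:.1f} ms seek "
          f"= {rot + seek_ms:.2f} ms average access")
```

Note how the 5.4K consumer drive spends roughly three times as long as a 15K drive just waiting for the platter.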

The other issue with consumer-grade SATA disks is the interface options. A SATA drive has only one interface, or port, so you can only access it through one attached controller. NL-SAS drives combine the low cost, slower rotational speed and high density of SATA, but add a dual-ported SAS interface. That is, you have two interfaces to the drive, allowing two control units (servers, storage heads, etc.) to talk to it at once. You can now lose one control unit and still have access to the device, while making use of the high capacity the disk provides.

Performance wise, we’d normally expect about 100-150 IOPS per 7.2K NL-SAS disk, and about 80-100MB/s.


10K SAS

These are the middle-of-the-road drives when it comes to enterprise disks. However, we have recently tested 1.2TB 10K RPM 2.5″ drives. That’s a hell of a lot of reasonably fast spinning disk capacity. In a 2U enclosure, that’s almost 30TB spinning at 10K RPM. Or in a 42U rack, 600TB and 151,000 IOPS…
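The enclosure and rack figures can be sanity-checked with a little arithmetic, assuming 24 x 2.5″ drives per 2U enclosure (a common SFF density) and roughly 300 IOPS per 10K drive – both my assumptions, not figures confirmed by the post.

```python
# Sanity check on the "almost 30TB per 2U" and "600TB per rack" figures.
# Assumption: 24 x 1.2TB SFF drives per 2U enclosure.
drives_per_enclosure = 24
tb_per_drive = 1.2

enclosure_tb = drives_per_enclosure * tb_per_drive       # ~28.8 TB ("almost 30TB")

enclosures_per_rack = 42 // 2                            # 21 x 2U enclosures in 42U
rack_drives = drives_per_enclosure * enclosures_per_rack # 504 drives
rack_tb = rack_drives * tb_per_drive                     # ~605 TB ("600TB")
rack_iops = rack_drives * 300                            # ~151,200 at 300 IOPS/drive

print(enclosure_tb, rack_drives, rack_tb, rack_iops)
```

With those assumptions the numbers line up almost exactly with the quoted 600TB and ~151,000 IOPS.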

The main difference is that these are dual-ported drives by definition, and the platters and logic are aimed at enterprise systems. Firmware can make a big difference to reliability and performance.

Performance wise, we’d normally expect about 200-300 IOPS per 10K SAS disk, and about 100-150MB/s.


15K SAS

Top of the range spinning rust: 15K RPM. Think about your car engine – if you have a diesel, you’ll top out at 5-6,000 RPM; a high-performance petrol engine may make 7-8,000 RPM; and of course the Wankel rotary will keep going to close to 10,000 RPM. But for a spinning platter, with the read and write heads managing to get and store your data as the medium spins past at 15,000 RPM – that’s motorbike or F1 car territory!

Today’s SFF (Small Form Factor) disks are reaching 600GB for a 15K RPM disk, but the performance is stunning (for HDDs).

Performance wise, we’d normally expect about 350-500 IOPS per 15K SAS disk, and about 125-175MB/s.
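A rough single-queue model ties these per-class IOPS ranges back to latency: with one outstanding request, a drive completes one random I/O per average access time, so IOPS ≈ 1000 / (seek ms + rotational ms). The seek figures below are my assumptions; real drives with command queueing (TCQ/NCQ) reorder requests and beat this naive ceiling, which is why the quoted ranges sit higher.

```python
# Naive queue-depth-1 model: one random I/O per average access time.
# Seek times are assumed class-typical values (not from the post); command
# queueing lets real drives exceed this ceiling by reordering requests.

def est_iops(rpm, seek_ms):
    rotational_ms = (60.0 / rpm) / 2.0 * 1000.0
    return 1000.0 / (seek_ms + rotational_ms)

print(f"7.2K NL-SAS ~{est_iops(7200, 8.5):.0f} IOPS (queue depth 1)")
print(f"10K SAS     ~{est_iops(10000, 4.5):.0f} IOPS (queue depth 1)")
print(f"15K SAS     ~{est_iops(15000, 3.5):.0f} IOPS (queue depth 1)")
```

The model lands near the bottom of each quoted range, with queueing accounting for the rest.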


Enterprise SSD

Now throw everything I’ve said out the window. Solid State Drives are a new breed:

  • No rotation.
  • Truly random access.
  • Built-in LSA (log-structured array) and write relocation.

The latest disks we’ve been testing, using eMLC and efficient algorithms, are producing 50,000 IOPS – yes, 100x a 15K RPM disk – and close to 500MB/s (2-3x). The bandwidth is close to saturating the 6Gbps drive interfaces, but 12Gbps SAS isn’t far away.
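The “close to saturating” claim follows from the link encoding: 6Gbps SAS uses 8b/10b, so ten line bits carry one data byte.

```python
# Usable bandwidth of a 6Gbps SAS lane: 8b/10b encoding means 10 line bits
# are transmitted for every data byte delivered.
line_rate_bps = 6e9
usable_bytes_per_sec = line_rate_bps / 10        # 600 MB/s per lane
print(usable_bytes_per_sec / 1e6)                # 600.0
```

So an SSD pushing ~500MB/s is already using most of a 6Gbps lane, before protocol overheads are even counted.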

Consumer SSD

It’s worth noting that there is a big difference between the enterprise-class SSDs we offer in SVC, DS8000 and V7000 and those ~$100 SSD drives you can buy for your laptop/desktop. It really is a case of you get what you pay for. Most consumer SSD drives have little or no over-provisioning and much reduced wear-levelling / endurance capabilities. Performance is reduced too.

Performance wise, we’d normally expect a consumer SSD to provide ~5,000 IOPS, but still about 300-400MB/s.

Performance Comparison

| Drive Type     | Speed (RPM) | Capacities  | IOPS          | MB/s    | Comments                                                             |
|----------------|-------------|-------------|---------------|---------|----------------------------------------------------------------------|
| Nearline SAS   | 7.2K        | 1TB-3TB     | 100-150       | 80-100  | High density – lowest performance                                    |
| SAS            | 10K         | 300GB-1.2TB | 200-300       | 100-150 | Average density – midrange performance                               |
| SAS            | 15K         | 146GB-600GB | 300-500       | 125-175 | Lowest density – highest performance                                 |
| Consumer SSD   | n/a         | 32GB-1TB    | 1,000-5,000   | 100-500 | All densities – good performance – reliability? 1 year+ warranty?    |
| Enterprise SSD | n/a         | 146GB-400GB | 10,000-50,000 | 100-500 | All densities – high performance – guaranteed reliability, 3-5 years |

The catch 22…

High capacity, low performance, high protection = even lower performance

One last point to consider: the lowest-reliability drives – 7.2K NL-SAS, high capacity – need the highest level of protection, RAID-6, which in itself has the lowest performance (6 back-end operations per write, as per my Part 1 discussion)… and of course the longest rebuild times… but maybe soon we can do something about dual-parity RAID rebuild times. See also Configuring IBM Storwize V7000 and SVC for Optimal Performance Part 1.
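To put numbers on the catch-22, here is the write-penalty arithmetic applied to a small array. The drive counts and per-drive IOPS are illustrative assumptions; the 6 back-end operations per RAID-6 write is the figure from Part 1, while 2 for RAID-10 and 4 for RAID-5 are the standard penalties.

```python
# Back-end I/Os per host random write: RAID-10 = 2, RAID-5 = 4,
# RAID-6 = 6 (the "6 ops per write" from Part 1).

def array_write_iops(drives, per_drive_iops, write_penalty):
    """Approximate sustained random-write IOPS for the whole array."""
    return drives * per_drive_iops // write_penalty

# 8 x 7.2K NL-SAS at an assumed 125 IOPS each, in RAID-6:
print(array_write_iops(8, 125, 6))    # 166 -> high capacity, lowest write rate
# The same 8 slots filled with 15K SAS at an assumed 400 IOPS each, RAID-10:
print(array_write_iops(8, 400, 2))    # 1600 -> roughly 10x the writes
```

The same shelf of drives delivers an order of magnitude fewer host writes once you stack slow spindles with dual-parity protection.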


