DISCLAIMER – Please note that the RAID discussion below, in terms of the number of arrays matching cores, is no longer relevant if you are using Distributed RAID (DRAID) – where a single DRAID array can use all the available cores. BW Dec 2019
For more updated DRAID information see ‘Configuring RAID/DRAID Best Practice’
ORIGINALLY POSTED 20th April 2012
TOP 5 POST 103,444 views on developerworks
I’ve been involved in quite a few pre-sales, proof of concept and after-sales accounts where people have set up their V7000 and then wondered about the performance they are achieving: is it optimal, can they do better with what they purchased, or is it under-achieving?
I was once credited in an IBM Redbook with the comment: many thanks to Barry “it depends” Whyte. [2019 note – and it’s stuck]
That’s because when it comes to all things related to storage performance, more often than not, “it depends”.
There are so many factors that contribute to getting the best performance out of a storage system that it really does depend on what you are trying to achieve. Do you need every I/O to have the most minimal latency? Do you need the maximum I/Os per second? Do you need the most MB or GB per second? What I/O size is your application sending? Is the workload random or sequential? Is the host optimally configured for the storage system? What striping is happening where? And so on.
In this series of posts, I plan to get back to my blogging roots and cover some of the key things to consider when it comes to storage, disk, and SSD performance. This set of posts discusses configuring IBM’s Storwize V7000 and SAN Volume Controller systems, and how best to set up your environment.
Storwize V7000 Array Configuration
At the heart of each V7000 controller canister is an Intel Jasper Forest (Nehalem-based) quad-core CPU. The most optimal way to maintain low latency when processing multi-threaded concurrent I/O is to avoid context switching as often as possible. You also want to ensure that coalescing of I/O processing and control code is achieved as often as possible. Within the SVC software that runs on the V7000, we have worked for many years (over 12 now) on making the code path and per-core processing as consistent as possible. When we added the tried and trusted DS8000 (originally SSA) RAID functionality in 2010 (6.1.0), we therefore assigned RAID processing on a per-mdisk basis to a single core. That means you need at least 4 arrays per V7000 to get maximal CPU core performance.
In other words, if you only run one array, you will get 1/4 of the potential RAID performance of the system. Optimal configurations require at least 4 arrays, to ensure all cores are used. Since arrays can be accessed through both nodes in the V7000 system, 4 is enough: on each node, each array will be assigned to, and processed through, one core. The sketch below illustrates the idea.
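To make that concrete, here is a minimal Python sketch – purely illustrative, not the actual SVC scheduler code – of a round-robin assignment of arrays (mdisks) to a quad-core CPU. It simply shows that with fewer than 4 arrays, some cores never do any RAID work:

```python
# Illustrative sketch only -- not the actual SVC/Storwize scheduler.
# Round-robin arrays (mdisks) onto cores; with fewer than 4 arrays,
# some cores are never given any RAID work.

NUM_CORES = 4

def assign_arrays_to_cores(num_arrays):
    """Assign each array to one core, round-robin."""
    assignment = {core: [] for core in range(NUM_CORES)}
    for array in range(num_arrays):
        assignment[array % NUM_CORES].append(f"mdisk{array}")
    return assignment

for n in (1, 2, 4, 8):
    busy = sum(1 for mdisks in assign_arrays_to_cores(n).values() if mdisks)
    print(f"{n} array(s): {busy}/{NUM_CORES} cores doing RAID work")
```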
At the same time, you need to think about your application I/O requirements. Let’s assume your application is performing truly random small block (<32KB) reads and writes. Here each read will come from just one disk in the array. Writes will result in one or more I/Os to the disks within the array, depending on the RAID type chosen; see Table 1.
The other thing to consider is the RAID strip size; by default the V7000 will use 256KB RAID strips. If you have a large sequential workload, then you may want to look at your host I/O size. This will be covered later, but for now, let’s consider <256KB random I/O.
| | Read < Strip Size | Read > Strip Size | Write < Strip Size | Write > Strip Size |
|---|---|---|---|---|
| RAID-0 | 1 read | 2+ read | 1 write | 2+ write |
| RAID-1 or 10 | 1 read | 2+ read | 2 write | 4+ write |
| RAID-5 | 1 read | 2+ read | 2 read, 2 write | 2+ read, 2+ write |
| RAID-6 | 1 read | 2+ read | 3 read, 3 write | 3+ read, 3+ write |

Table 1. Number of disk I/Os needed for a single host I/O
RAID-5 and RAID-6 will give the best capacity usage, with parity protection, but you can see the write penalty is 4x for RAID-5 and 6x for RAID-6 – so you need to consider this when creating your arrays.
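To see what that penalty means for sizing, here is a rough back-of-envelope sketch. The per-drive IOPS figure is an assumed, illustrative number rather than a measured V7000 value, and cache effects are ignored:

```python
# Back-of-envelope sizing sketch. The 175 IOPS/drive figure is an
# illustrative assumption, not a measured number; cache is ignored.

WRITE_PENALTY = {"RAID-0": 1, "RAID-1/10": 2, "RAID-5": 4, "RAID-6": 6}

def max_host_write_iops(drives, drive_iops, raid):
    """Small-block random host writes the array can absorb."""
    return drives * drive_iops / WRITE_PENALTY[raid]

# e.g. an 8-drive array of 15K SAS drives at ~175 IOPS each (assumed):
for raid in ("RAID-1/10", "RAID-5", "RAID-6"):
    print(raid, round(max_host_write_iops(8, 175, raid)), "host write IOPS")
```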
Sequential Processing
When it comes to sequential reads and writes, you can get the benefits of both capacity and bandwidth.
The additional I/Os on RAID-5 and RAID-6 are due to the parity updates. When a RAID-5 array has to write to a strip, it needs to read the existing strip, read the existing parity strip, calculate the new parity, and then write both the new strip and the new parity (a read-modify-write). If you can remove the need to read any existing data, you can halve the disk I/O and/or data bandwidth requirements, i.e. you can calculate the new parity strip using the new data strips without any reading from disk.
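Since parity is just XOR arithmetic, a short sketch can show why this works: the read-modify-write update (old parity XOR old data XOR new data) produces exactly the same parity as recomputing it from all the strips – and those two extra reads are the ones Table 1 charges against a RAID-5 small write:

```python
import os

def xor(a, b):
    """Bytewise XOR of two equal-length strips."""
    return bytes(x ^ y for x, y in zip(a, b))

STRIP = 8                                     # tiny strips for the demo
data = [os.urandom(STRIP) for _ in range(4)]  # 4 data strips

parity = data[0]
for s in data[1:]:
    parity = xor(parity, s)                   # parity = XOR of all strips

new = os.urandom(STRIP)                       # host overwrites strip 2
# Read-modify-write: read old data strip and old parity, XOR in new data.
rmw_parity = xor(xor(parity, data[2]), new)

data[2] = new
full_parity = data[0]
for s in data[1:]:
    full_parity = xor(full_parity, s)         # recompute from scratch

assert rmw_parity == full_parity              # both methods agree
```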
This is commonly referred to as “full stride writes” or “full strip writes” – essentially you are writing all new data across all the disks that make up the array, and calculating the new parity data in memory.
RAID-5 arrays are commonly called x+P – i.e. 8+P – this means 8 data disks and one parity disk. You lose one disk’s worth of capacity to parity data. An array has a defined strip size; let’s say the strip is 256KB. To write a full stride, we need to write to all the data disks. In this case, 8 data disks, so 8x 256KB (2MB). If the host then issues a single 2MB write, we can write across all 8 strips, calculate the parity in memory, and write that too. Now we only have to issue 9x 256KB writes, and no reads. This massively changes (roughly a 50% reduction) the work required from the disks. See Table 2.
| | Write Random 256KB x8 | Write Seq 256KB x8 |
|---|---|---|
| RAID-5 | 16 read, 16 write | 9 write |

Table 2. Number of disk I/Os needed for a 2MB host I/O (assumes an 8+P RAID-5 array)
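The Table 2 numbers fall straight out of the arithmetic above; a minimal sketch, assuming the 8+P, 256KB-strip layout already described:

```python
# Reproduces the Table 2 arithmetic for an 8+P RAID-5 array with
# 256KB strips handling a 2MB host write.

DATA_DISKS = 8
STRIP_KB = 256
host_write_kb = DATA_DISKS * STRIP_KB    # 2048KB = one full stride

strips = host_write_kb // STRIP_KB       # 8 data strips touched

# Random: each strip is a read-modify-write (read data + parity,
# write data + parity) -> 16 reads, 16 writes.
random_ios = {"reads": 2 * strips, "writes": 2 * strips}

# Full stride: parity is calculated in memory, so no reads are
# needed -> 8 data writes + 1 parity write = 9 writes.
full_stride_ios = {"reads": 0, "writes": strips + 1}

print("random:", random_ios, "full stride:", full_stride_ios)
```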
Part 1 – Summary
The RAID type does make a big difference to the number of I/Os you can achieve. While NL-SAS or SATA disks may well show a cost saving when it comes to raw $/GB – remember NL-SAS gives roughly 1/3 the I/Os per second per drive, but also requires RAID-5 or RAID-6…
With V7000, always configure at least 4 arrays, and if possible, multiples of 4. Use a strip size and array width that matches your maximum host I/O size (especially if you are running batch sequential workloads).
In Part 2, I will discuss the performance differences between different drive types, NL-SAS, SAS 10K, 15K and SSD.
In Part 3, I will discuss storage pool configurations (mdisk groups, in old SVC parlance).
And finally in Part 4, I will discuss volume (vdisk) and host configurations.
Any questions, feel free to ask.