ORIGINALLY POSTED 27th February 2008
9,308 views on developerworks
Been struggling to find the time over the last few weeks to pay you all as much attention as I should 😉 Two weeks ago I had a flying visit to Austin, Texas for an internal Summit. In attendance were Distinguished Engineers, senior technical folks and other specialists in the topic at hand, covering IBM’s Systems and Technology Group (STG – which storage is a part of), Software Group (SWG) and Research. It was good to see and hear everyone’s perspectives on the topic, and I guess that’s one of the major advantages of being part of a ‘supermarket’ IT company (as Tony would call us) as against being a ‘speciality shop’ that deals in just one of the cornerstones of today’s IT infrastructure. I was there to present a new project proposal we are working on and some general storage strategy work we’ve been looking at. It was great to get the overall perspective from the whole corporation, and it made me think about a couple of things we hadn’t previously considered.
Meanwhile back at the office we are busy getting the next set of SVC features, functions and enhancements ready for System Test which should be out in the field later this year. As always there’s more we could be doing than we can do, and everyone is flat out keeping the cogs turning.
I just upgraded my home PC and have been tweaking, boosting and tuning to get the absolute stable maximum out of it – maybe that’s why I’ve not been on here for a while… that new Radeon is too good to put Half-Life 2 down… yes, I know Nvidia are generally faster, but the price of the HD3850 was too good to be true, and there is much to be said for brand loyalty (AMD and ATI here, I’m afraid!)
Anyway, this one is over to you all, fire away with whatever questions you may have and would like to discuss and I’ll see what I can do. Here is a ‘starter for 10’ that came in from a reader that wished to remain anonymous.
“If I go with a full virtual environment, servers and storage, are there any storage pitfalls I need to think about?”
So this is one of those ‘it depends’ answers. One major consideration is the actual requirements of all the host applications. Although products like VMware are great for Windows applications, where each application only uses a small fraction of the CPU power available, every one of those applications still needs a certain level of storage performance to run optimally. There is no substitute for spindle count, or at least raw I/O performance. You still need to scale your storage environment to cater for the peaks in your application workload. While storage virtualization can let you squeeze that bit more out of what you have, the physical devices still have a limit, and if you go over that limit you risk impacting every host that shares the same devices. The number one gotcha is trying to squeeze a bucketful of I/O from a thimbleful of hardware. Planning, planning, planning.
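To make that spindle-count point concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (per-spindle IOPS, the RAID 5 write penalty, the read/write mix, the host peaks) is an illustrative assumption of mine, not a spec for any particular drive or array; the point is simply that the combined peak of all the hosts sharing a set of disks has to fit within what those disks can physically deliver.

```python
import math

def spindles_needed(host_peak_iops, write_fraction=0.3,
                    raid_write_penalty=4, iops_per_spindle=180):
    """Rough estimate of spindles needed to absorb the combined peak
    workload of all hosts sharing the same back-end disks.

    host_peak_iops:    list of per-host peak IOPS (assumed figures)
    write_fraction:    assumed share of I/Os that are writes
    raid_write_penalty: back-end I/Os per host write (e.g. 4 for RAID 5)
    iops_per_spindle:  rough figure for one 15k drive (assumption)
    """
    total = sum(host_peak_iops)                # combined host peak
    reads = total * (1 - write_fraction)
    # each host write costs several back-end I/Os under RAID 5
    writes = total * write_fraction * raid_write_penalty
    return math.ceil((reads + writes) / iops_per_spindle)

# Ten virtualized hosts each peaking at a (hypothetical) 500 IOPS:
print(spindles_needed([500] * 10))
```

It is crude, but it shows why consolidating lots of lightly-loaded servers doesn’t shrink the disk back end the way it shrinks the server count: the peaks still add up.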
Hope that helped a little. It’s a topic that probably deserves an entire Redbook!