Virtual TechU 2020 – Follow up Questions

Hi Everyone,

The Virtual TechU is nearly here, and once again, if you watch any of my sessions and want to ask follow-up questions, please feel free to add a comment to this post so I can get back to you.

My main session is about how IBM has been enhancing the support experience – so I hope you find it interesting if you watch it.

Andrew

2 responses to “Virtual TechU 2020 – Follow up Questions”

  1. Hello Andrew,
    One of the sessions was also about best practices with FlashCore Modules, compression (DRP) and capacity planning going down the drain.
    I didn’t quite get that. Could you please explain it again?

    Thanks


    1. Hi, this probably deserves its own blog post… but I’ll try to do it quickly here.

      DRP deliberately leaves as much reclaimable capacity in the pool as it can, because the longer you leave garbage collection, the more efficient it is (for this post, you’ll just have to trust me on that).

      Reclaimable capacity is data that you’ve deleted but DRP hasn’t cleaned up yet. For the rest of this post I’ll call it garbage.

      Let’s do the rest of this with a worked example:
      You are storing 2 TB of data in the DRP
      There is an additional 4 TB of garbage

      So if you are doing compression in the FCMs, they are compressing 6 TB of data down to 5 TB, which means the FCMs report a compression ratio of 6:5, or 1.2:1.
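
      For anyone who wants to see that arithmetic written down, here is a minimal Python sketch of the worked example (the 2 TB / 4 TB / 5 TB figures are just the illustrative numbers above, not measurements from a real system):

```python
# Worked example from above - all figures are illustrative, not real measurements.
data_tb = 2.0      # data you are actually storing in the DRP
garbage_tb = 4.0   # reclaimable capacity that DRP hasn't collected yet
physical_tb = 5.0  # what the FCMs end up writing to flash after compression

written_tb = data_tb + garbage_tb        # the FCMs see all 6 TB
fcm_ratio = written_tb / physical_tb     # 6 / 5 = 1.2

print(f"FCM-reported compression ratio: {fcm_ratio:.1f}:1")
```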

      The problem is that this 1.2:1 compression ratio doesn’t tell you anything at all about how compressible your actual data is, so you can’t use it for any capacity planning.

      Why, you ask (at least you did in my head)…

      Because that 1.2:1 could be:
      * 1:1 on your data and 1.33:1 on the garbage,
      * 2:1 on your data and 1:1 on the garbage,
      * or anything in between (there’s a quick sketch of this below).
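
      To make that concrete, here is a small sketch showing that both of those extremes (and anything in between) land on exactly the same 5 TB of physical capacity, so the FCM-reported ratio can’t tell them apart. The ratios are just example values:

```python
# Two very different "real" compressibilities that look identical from the FCM side.
data_tb, garbage_tb = 2.0, 4.0   # same worked example as above

scenarios = {
    "data 1:1, garbage 1.33:1": (1.0, 4.0 / 3.0),
    "data 2:1, garbage 1:1":    (2.0, 1.0),
}

for name, (data_ratio, garbage_ratio) in scenarios.items():
    physical_tb = data_tb / data_ratio + garbage_tb / garbage_ratio
    fcm_ratio = (data_tb + garbage_tb) / physical_tb
    print(f"{name}: physical = {physical_tb:.1f} TB, FCM ratio = {fcm_ratio:.1f}:1")
```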

      Since you need to understand how compressible your actual data is in order to do capacity planning, this leaves you with a problem.

      So the technology will work, but you won’t easily be able to tell how much more data you can store in the pool before you run out of space.
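
      To show why that matters for planning, here is a rough sketch of how different the answer to “how much more data can I store?” is at those two extremes. The 20 TB of usable FCM capacity is a made-up figure, and the calculation ignores the garbage that will eventually be reclaimed; it is only meant to illustrate the spread:

```python
# Hypothetical pool: 20 TB of usable FCM capacity, 5 TB already written (made-up figures).
usable_tb = 20.0
physical_used_tb = 5.0
free_physical_tb = usable_tb - physical_used_tb   # 15 TB of flash still free

# How much new data fits depends entirely on how compressible the data really is,
# which the FCM-reported 1.2:1 cannot tell you.
for data_ratio in (1.0, 2.0):
    print(f"If the data really compresses {data_ratio:.0f}:1, "
          f"roughly {free_physical_tb * data_ratio:.0f} TB more data fits")
```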

      As a side note (which is actually more important than the main point here): the performance “cost” of DRP is not the compression (unless you are using a 5030). Our hardware compression chip does this very fast. The cost comes from all of the metadata management that DRP needs, and you have to pay that cost even for thin-provisioned DRP. Most people asking this question assume that DRP thin provisioning on top of FCM compression would run faster than DRP compression, but that’s basically not the case.

