
Data Storage for VDI – Part 4 – The impact of RAID on performance

As I said at the end of my previous blog post:

The read and write caches in traditional modular arrays are too small to make any significant difference to the read and write efficiency of the underlying RAID configuration in VDI deployments.

The good thing is that this makes calculating the Overall I/O Efficiency Factor (IEF) for traditional RAID configurations pretty straightforward. The overall IEF depends on the kind of RAID and the mixture of reads and writes, using the following formula:

Overall IEF = (Read% * read IEF) + (Write% * write IEF).
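As a quick sketch of that formula (my own illustration in Python, not anything vendor-specific), with the percentages expressed as fractions:

```python
def overall_ief(read_pct: float, write_pct: float,
                read_ief: float, write_ief: float) -> float:
    """Blend per-operation efficiency factors by the workload's read:write mix."""
    assert abs(read_pct + write_pct - 1.0) < 1e-9, "mix must sum to 100%"
    return read_pct * read_ief + write_pct * write_ief

# RAID-5 at a 30:70 read:write mix: reads are unaffected (100%),
# writes cost 4 back-end IOPs each (25%)
print(round(overall_ief(0.30, 0.70, 1.00, 0.25), 3))  # 0.475
```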

To start with RAID-5: a single front-end write IOP requires 4 back-end IOPs, giving a write IEF of 25%. If you had 28 * 15K spindles in a RAID-5 configuration, this means you can only sustain 235 * 28 * 25% = 1,645 write IOPS at 20ms.
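The spindle arithmetic can be laid out explicitly (a sketch; the 235 IOPS-per-15K-spindle figure at 20ms comes from earlier in this series):

```python
spindles = 28
iops_per_spindle = 235   # 15K RPM spindle at ~20ms response time
write_ief = 0.25         # RAID-5: 4 back-end IOPs per front-end write

raw_backend_iops = spindles * iops_per_spindle
sustainable_write_iops = raw_backend_iops * write_ief
print(raw_backend_iops, sustainable_write_iops)  # 6580 1645.0
```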

Using Ruben's numbers for a 30:70 VDI steady-state read:write workload, the Overall IEF for RAID-5 would be

(30 * 100%) + (70 * 25%) = 47.5%.

For a 50:50 workload, the Overall IEF would be

(50 * 100%) + (50 * 25%) = 62.5%

With RAID-10 you sacrifice half of your capacity, but instead of 4 back-end IOPs for every front-end write there are 2, for a write IEF of 50%. The write-coalescing caching tricks also benefit RAID-10, but again, not enough to have any significant effect.

So how about RAID-6? With RAID-6, every front-end write I/O requires 6 IOPs at the back end, for an uncached write IEF of about 17% and a cached write IEF of about 27%. Reads for non-NetApp RAID-6 implementations based on Reed-Solomon algorithms are, yet again, unaffected.
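Putting the uncached write penalties side by side (my own summary of the figures above):

```python
# Back-end write IOPs generated by one front-end write, per RAID scheme
write_penalty = {"RAID-5": 4, "RAID-10": 2, "RAID-6": 6}

for scheme, backend_iops in write_penalty.items():
    print(f"{scheme}: write IEF = {1 / backend_iops:.0%}")
# RAID-5: write IEF = 25%
# RAID-10: write IEF = 50%
# RAID-6: write IEF = 17%
```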

So, what about RAID-DP? Well, much as I hate to say it, even though it is a form of RAID-6, by itself it has the worst performance of all the RAID schemes (and yes, I do still work for NetApp).

Why? Because RAID-DP, like RAID-4, uses dedicated parity disks. Given that, by default, one disk in every 8 is dedicated to parity and can't be used for data reads, both RAID-4 and RAID-DP immediately take a 13% hit on reads. In addition, just like RAID-6, every front-end random write IOP can require up to 6 write IOPs at the back end. This would mean that NetApp has the same write performance as RAID-6 and 13% worse read performance.

This gives the following result for the overall IEF in the 30:70 read:write use case:

(30 * 87%) + (70 * 17%) ≈ 38%   (!!)
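For completeness, here is that naive, uncached RAID-DP calculation as a sketch (the ~13% read hit is one dedicated parity disk in every 8; the write IEF is the rounded 1-in-6 figure):

```python
read_ief = 0.87    # one dedicated parity disk per 8 disks => ~13% read hit
write_ief = 0.17   # up to 6 back-end IOPs per front-end write, uncached

overall = 0.30 * read_ief + 0.70 * write_ief
print(round(overall * 100, 1))  # 38.0
```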

This is exactly the kind of reasoning our competitors use when explaining our technology to others.

So why would NetApp be insane enough to make RAID-DP the default configuration? How have we succeeded so well in the marketplace? Shouldn't there be a tidal wave of unhappy NetApp customers demanding their money back?

Well, there are a few reasons we use RAID-DP as the default configuration for all NetApp arrays. The first is that dedicated parity drives make RAID reconstruction fast, with minimal performance impact. It also makes it trivially easy to add disks to RAID groups non-disruptively. "This might be great for availability, but what about performance?" I hear you ask. Well, I've been told that you can mathematically prove that the RAID-DP algorithms are the most efficient possible way of doing dual-parity RAID. Frankly, the math is beyond me, but the CPU consumption by the RAID layer really is minimal. The real magic, however, happens because RAID-DP is always combined with WAFL.

This isn't a good place to explain everything I know about WAFL, and others have already done it better than I probably can (cf. Kostadis' blog), but I'll outline the salient benefits from a performance point of view in the next post: Data Storage for VDI – Part 5 – RAID-DP + WAFL, the ultimate write accelerator.

  1. Tom Millar
    July 22, 2010 at 12:05 am | #1

    This is a very misleading post. RAID 4 and RAID-DP on NetApp only have a 1-to-2 write penalty because the parity calculation is done in memory. (Read existing data into memory => add any changes => calculate parity + checksums => write stripe). According to NetApp's white paper on RAID-DP, RAID-DP only degrades performance by 2 to 3 percent (see WP_3298), rather than 50% like many other RAID 6 implementations. Look up tetrises and consistency points.

    RAID-DP / RAID 4 on NetApp have the same performance characteristics as RAID 0 / 1 but much better storage utilisation and slightly better fault tolerance, hence NetApp uses DP by default, because there are many advantages and almost no negatives.

    I suggest in future you RTFM

    Tom (NetApp Instructor)

    • July 22, 2010 at 10:56 am | #2

      I’m pretty familiar with the contents of http://media.netapp.com/documents/wp_3298.pdf, and I’m sorry you feel this is misleading. Originally the first 7 parts of this blog post were one single post of over 6000 words, and I was advised to partition it to improve readability. I decided to split the post here to add a bit of controversy and give some motivation to read the next post in the series.

      The point I tried to make in this post is that if you eliminate the effect of write caching when performing a single write to a block within the RAID group, then RAID-DP looks worse than Reed-Solomon based RAID-6 simply because of needing two dedicated parity drives, which, as I mentioned in the post, “is exactly the kind of reasoning our competitors use when explaining our technology to others”.

      RAID-DP is a wonderful implementation of RAID-6; however, it is the added benefit of WAFL’s Tetris I/O that makes all the difference. Saying that RAID-DP in and of itself is more efficient than RAID-10 from a performance point of view is a simple message, but not completely correct, and lacks a certain amount of believability. It is really RAID-DP + WAFL that allows better (much better – check part 7) performance than RAID-10. This is why I go into fairly deep detail in the next post. I don’t use the term Tetris I/O in that post, though that is what I’m describing, but write combining gets a fair amount of coverage. I’d be interested in your critical feedback there.


  2. July 19, 2011 at 6:56 am | #3

    Hi John,

    Drawn back into this post by the latest comment. The issue I have with this post is that, just as you say, competitors use these kinds of arguments, and now they can link to this post, written by an employee.

    The only implementation of RAID-DP is by NetApp in WAFL. It is therefore impossible to do RAID-DP without WAFL, so the argument is invalid and just confuses the issue.

    The fact is RAID-DP with WAFL generates 2 IOPs per write (like RAID 0 / 1), but doesn’t use 50% of the disks for redundant data, and will only lose data if three disks fail in the same RAID group, unlike RAID 1, which can lose data if both disks in a single mirrored pair fail.

    This is something I bring up in all my training courses: RAID-DP rocks; it is the best implementation of RAID you can buy. Obviously, it is WAFL that makes it rock, but as the two are effectively the same thing, it doesn’t need to be overstated.


  3. Tony
    May 16, 2013 at 7:38 pm | #4

    Hey guys.
    As a techie who is familiar with both EMC and NetApp, frankly I’m thrilled with this post and its subsequent posts. John – yes, it did make me read on, and yes, I did understand it all (thanks for not dumbing it down).
    Tom – I take your point, and you are obviously very pro RAID-DP (and as well you should be), but frankly I prefer John’s honesty and ‘introduce a bit of controversy’ approach. There is so much FUD, marketing and misleading info these days, with people saying ‘my stuff’s great’ and so little honest and deeply technical explanation as to WHY it’s great. Frankly John, the article is a blinder – now all I need is an EMC guy who will be equally as honest and I’ll have the complete picture :).
