
A very long reply


This blog post is essentially a very long comment reply to Darius Zheng at Oracle on his blog:

https://blogs.oracle.com/si/entry/7420_spec_sfs_torches_netapp#comments

I suspect it is so long that it probably needs more formatting to be readable, so I'm posting it here too.

-----BEGIN REPLY-----

Thanks for posting the reply. Again, I think you're missing my point.

DZ .. "Oracle has 2.5x Performance for 1/2 the cost of a Netapp"

A more accurate statement is: "The Oracle 7420 attained a benchmark result that was 2.5x better for 1/2 the _List Price_ of a NetApp 3270 array benchmarked in 2011 with significantly less hardware."

What this says is that Oracle's List Price is significantly lower than NetApp's List Price. You could say the same thing about the difference in price between a Hyundai i30 and an Audi A3.

Secondly, as I pointed out in previous comments, the rest of the results also show that the Oracle solution makes relatively inefficient use of CPU and memory when compared to a NetApp system that achieves similar performance.

Yes, the list price of the NetApp system is significantly higher than that of an equivalently performing 7420, but this is a marketing and pricing issue, not a technical one. In general I like to stick to technical merits, because pricing is a fickle thing that can be adjusted at the stroke of a pen; technology requires a lot more work to get right.

In the end, how this list price differentiation translates into what these solutions will actually cost a customer is highly debatable. I do a LOT of research into street prices as part of my job, and in general storage is increasingly purchased as part of an overall upgrade. This is where the issues get murky very quickly, as margins are moved around various components within the infrastructure to subsidise discounting in other areas.

Having said that, I will let you in on something. Based on the data I have, for many quarters, in terms of average $/RAW TB paid by customers in my market, Oracle customers paid about 25% MORE for 7000-series storage than NetApp customers paid for storage on FAS32xx, and only recently did Oracle begin to reach pricing parity with NetApp. We could argue about the way the analyst arrived at those figures, but from my analysis the trend is clear across almost all vendors and array families: the $/TB customers actually pay correlates strongly with the implied manufacturing costs, and very poorly with the vendor list prices. The main exceptions are new product introductions where there is a compelling new and unique value proposition (e.g. DataDomain), or when vendors buy business at very low or even negative margin in order to seed the market (e.g. XIV in the early days).
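To make that last point concrete, the check I'm describing boils down to two correlations. Below is a minimal sketch; the figures are placeholders I've made up purely for illustration, not the analyst data I referred to.

    # Minimal sketch of the street-price correlation check described above.
    # All figures are made-up placeholders, NOT real analyst or vendor data.
    from statistics import correlation  # Python 3.10+

    # Hypothetical array families: (street $/TB paid, implied manufacturing $/TB, list $/TB)
    samples = [
        (3.0, 1.1, 12.0),
        (2.4, 0.9, 18.0),
        (4.1, 1.6, 14.0),
        (2.9, 1.0, 6.0),
        (3.6, 1.4, 15.0),
    ]

    street = [s[0] for s in samples]
    mfg_cost = [s[1] for s in samples]
    list_price = [s[2] for s in samples]

    # For these illustrative numbers the first figure comes out near 1.0
    # and the second near 0.0, which is the shape of the trend I'm describing.
    print("street $/TB vs implied manufacturing cost:", round(correlation(street, mfg_cost), 2))
    print("street $/TB vs vendor list price:", round(correlation(street, list_price), 2))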

Now personally, I disagree with NetApp's list pricing policy; however, there are reasons why that list price is so much higher than the actual street price most people pay. Many of those reasons have to do with boring things like long-term pricing contracts. If you'd like to turn this into a marketing discussion around pricing strategies, I'm cool with that, but I don't think the people who read either of our blogs are overly interested. However, I will say this again: the price people pay in the end has more to do with the costs of manufacture, and a solution that gets more performance out of less hardware will generally cost the customer less, especially if the operational expenses are lower.

DZ .. “Why wouldn’t a customer want more CPU and Cache?”

Why would someone want less CPU or cache? Because it costs them less, either in street pricing terms or in the cost of powering and cooling it. And yes, I believe that a 7420 controller with more than eleven times as many CPU cores and more than one hundred and sixty times as much DRAM will chew a lot more power and cooling than a 3270 controller.

It's not just the cost of the electricity (carbon footprint and green ethics aside); it's also the opportunity cost of using that power for something else. Data centers have finite resources for power, and many (most) are very close to the point where you can't add more systems. In those environments, power-hungry systems that aren't running business-generating applications are not viewed kindly.

JM interpretation of DZ … "Happy to do a power consumption comparison, where is the NetApp information?"

I've answered a similar question put to me on my blog at storagewithoutborders.com – see the blog-post URL in a previous comment regarding access to power consumption figures.

DZ .. "You say the Netapp cache is SO efficient and you talk about an old non-relevant 3160 SPEC SFS post"

I referenced the "non-relevant 3160 SPEC SFS post" because it is relevant, being the place where NetApp tested the same controller with a combination of flash acceleration, no flash acceleration, and both SATA and FC/SAS spindles. The specific one I referenced was the most comparable configuration, which includes flash and 300GB 15K disks and which, as I pointed out, achieved 1080 IOPS per 15K spindle with a cache that was 7.6% of the fileset size.

If you prefer, I could have used the more recent (though still old) 6240 dual-node config, which uses 450GB 15K disks and achieved 662 IOPS per drive with a cache that was a mere 4.5% of the fileset size, or the 24-node 6240 config, which achieved 875 IOPS per drive with a cache that was 7.6% of the fileset size. As you can see, a modest amount of flash improves the IOPS/disk enormously, and there is a good correlation between more flash as a percentage of the working set and better results in terms of IOPS/disk. Before you ask: as far as I can tell, the main reason for the difference in IOPS/spindle between the 24-node 6240 and the old 3160 with a similar cache size as a percentage of the fileset is that NetApp's scale-out benchmark used worst-case paths from the client to the data to provide a squeaky-clean implementation of SPEC's uniform access rule.
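For clarity, the two ratios I keep quoting are derived the same way every time: benchmark ops/sec divided by the number of data spindles, and flash cache as a percentage of the benchmark fileset. Here is a minimal sketch; the spindle counts, cache and fileset sizes below are illustrative placeholders, not the exact figures from the published disclosures.

    # Sketch of how the per-spindle and cache-ratio figures above are derived.
    # The inputs below are illustrative placeholders, not the exact numbers
    # from the published SPEC SFS 2008 disclosures.

    def ops_per_spindle(throughput_ops: float, data_spindles: int) -> float:
        """Benchmark ops/sec divided by the number of data spindles."""
        return throughput_ops / data_spindles

    def cache_pct_of_fileset(flash_cache_gb: float, fileset_gb: float) -> float:
        """Flash cache expressed as a percentage of the benchmark fileset."""
        return 100.0 * flash_cache_gb / fileset_gb

    # A hypothetical flash-accelerated config: 60,000 ops/sec on 56 x 15K spindles,
    # with 512GB of flash cache against a ~6.7TB fileset.
    print(round(ops_per_spindle(60_000, 56)))           # ~1071 ops per spindle
    print(round(cache_pct_of_fileset(512, 6_700), 1))   # ~7.6% of the fileset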

DZ .. “You fail to mention that the 3270 gets a MEASLY 281 IOPS per drive and that the 3250 gets a whopping 300 IOPS per drive. So your point is that the 3250 was done to compare with the 3270? What was the 3270 done for?”

Neither the 3270 nor the 3250 benchmark used Flash Cache, so the IOPS/spindle are going to be good, but not stellar. I don't know exactly why we didn't use flash in the old 3270 benchmark; maybe it's because SPEC SFS is a better indication of CPU and metadata handling than it is of reads and writes to disk, and, like I said, we'd already proved the effectiveness of our flash-based caching with the series of 3160 benchmarks.

Going forward, I doubt NetApp will do another primary benchmark without flash, but it's worth saying again that the 3250 benchmark was done to show performance equivalency with the 3270, so the configuration was kept as close to identical as NetApp could make it, and that meant neither the 3270 nor the 3250 benchmark used flash to improve the IOPS/disk. If NetApp had used it, I have every reason to believe that the results would have been in line with the 3160 and 6240 benchmarks referenced above.

DZ .. “I thought the purpose of a benchmark was to compare many vendors systems against each other with the workload remaining consistent?”

NetApp tends to use benchmarks as a way of demonstrating how much their technology has improved against a previous NetApp baseline, to help their customers make good purchasing decisions. Proving they're better than someone else is not a primary consideration, though often that is a secondary effect. Oracle is free to use its benchmarks in any way it chooses. Personally, I'd love to see a range of configurations from each technology benchmarked rather than just sweet spots; maybe opensfs and netmist will bring this about, but the fact is, running open, verifiable, and fairly comparable benchmarks is expensive and time-consuming, and I will probably never see enough good engineering data published. If you've got some ideas to simplify this, I'd love to work with you on it (seriously, we might compete against each other, but we both clearly care about this stuff, and not many do).

DZ .. "With that in mind the Oracle 7420 still crushes the netapp in price, efficiency and performance. I am guessing we are also still better or comparable in power usage as well."

You’ll see from the above that I respectfully disagree with pretty much everything in that last statement, and I’m looking forward to that controller power usage comparison :-)

-----END REPLY-----

  1. April 12, 2013 at 5:26 am | #1

    Hi John,
    I took the liberty of doing some googling myself to get the power comparison done.

    The Oracle 7420 ZFS Storage Appliance as configured in the SPEC SFS 2008 test uses 9952 Watts at a 100% workload (meaning the CPUs are pegged at 100%).
    This is publicly available at http://www.oracle.com/us/products/servers-storage/sun-power-calculators/calc/s7420-power-calculator-180618.html

    The NetApp 3250 (less power usage than the 3270) uses 17866 Watts. This number was calculated from the NetApp site requirements guide located here: http://support.netapp.com/NOW/public/knowledge/docs/hardware/NetApp/site/pdf/site.pdf
    Calculations used:
    2 x 3250 power supplies @ 533 Watts each equals 1066 Watts for the controllers (page 75 of the NetApp doc)
    14 x DS4243 disk shelves, each with 4 power supplies @ 1200 Watts (600 for 2 of them max), equals 16800 Watts!!! (page 117 of the NetApp doc)

    So the Oracle 7420 is not only 2.5 times faster, it also consumes a little more than HALF the power of the NetApp 3250.

    • April 21, 2013 at 11:50 am | #2

      I must apologise for the delay on my side; I've been busy with other really high-priority stuff (18 months of unfiled expenses … my beloved had some stern words for me about that …), so my soc-med output has mostly been posting things I'd already written.

      As to your assertions, it seems like you're comparing a system accelerated by flash against a system composed of pure disk, which you should know is a misleading comparison when comparisons against flash-enabled systems are available.

      I'll get the stuff I promised up soon, but given the way you're using the figures, I'll also need to put up a meta-analysis of power efficiency based on published disk efficiencies with and without flash acceleration.

      To make things a little easier, can we separate out the power draw of the controllers, on the assumption that the state of the art is such that we'll both get similar disk efficiencies from flash enablement? (Trust me, I'm being generous to Oracle with that assumption.)
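      Here is a minimal sketch of the split I have in mind, using only the wattages already quoted in this thread. Caveats: the NetApp shelf and PSU figures are nameplate maximums from the site requirements guide, the Oracle figure comes from a power calculator, and the shelf count reflects a disk-only configuration, so treat the output as illustrative only.

          # Sketch of separating controller power draw from shelf power draw.
          # Wattages are the ones quoted earlier in this thread (not independently
          # verified); the Oracle calculator output is not broken out by component.

          def split_power(controller_watts, shelf_watts_each, shelf_count):
              shelf_total = shelf_watts_each * shelf_count
              return controller_watts, shelf_total, controller_watts + shelf_total

          netapp_ctrl, netapp_shelves, netapp_total = split_power(1066, 1200, 14)
          oracle_total = 9952  # power-calculator figure for the whole 7420 config

          print(f"NetApp 3250 controllers: {netapp_ctrl} W, shelves: {netapp_shelves} W, total: {netapp_total} W")
          print(f"Oracle 7420 (calculator, whole config): {oracle_total} W")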

      Regards
      John

      • April 23, 2013 at 2:51 am | #3

        John,
        This is really quite comical. You keep changing your story, or SPIN… Now you're saying it's not fair because the ZFSSA has flash? Seriously? The hybrid storage pool is a key piece of the Oracle ZFS Storage Appliance's performance, and not just the flash but the huge amounts of DRAM cache, which is magnitudes faster than any flash technology out there. The point is that even if NetApp adds a few flash cache drives to your 3250 or 3270 SPEC benchmarks, NetApp will still be less than half as fast. The ZFSSA config in the benchmark has a measly 4TB of read SSD and a meager 292GB of write SSD. Let's see NetApp do something similar and get more than double the performance.

        The bottom line is that to run the same workload on NetApp as on the ZFSSA you would need 2.5x the NetApp disks and controllers. If you can add a few PAM cards to a FAS3250 and then get 267,000 OPS on SPEC SFS, then let's see it. Why hasn't NetApp run a benchmark like that since 2009, if NetApp's flash-as-cache technology (PAM) is so great and so superior to Oracle's? Seems to me they may not want to post the results, as there may be something to hide. Maybe not; just maybe NetApp is as fast or faster, and maybe they can do it with less power. But as of today and the current SPEC SFS benchmarks, they are pathetically slower per IOPS, use more power, and cost significantly more.

        In regards to Oracle using NetApp for storage internally: most of it was migrated long ago, and the consolidation Oracle IT gets with ZFSSA controllers is a giant win. They are saving on all of the above. They save power and space and get more work done per controller than with NetApp. See these links.

        http://medianetwork.oracle.com/video/player/1862840156001

        There are other advantages to running Oracle DBs on the ZFSSA. There are features in the Database today, such as Hybrid Columnar Compression, which ONLY work with Oracle storage, and they can make a tremendous difference in database performance. http://www.oracle.com/technetwork/articles/servers-storage-admin/perf-hybrid-columnar-compression-1689701.html

        This is only one feature. There is much more to come. Stay tuned.

  2. Nikolay
    April 12, 2013 at 6:30 pm | #4

    John,

    It's really interesting to read the discussion around the Oracle 7420 system vs NetApp comparison. I'd like to thank you for leaving such sharp posts in the public area. And while I'm fully on your side, since I'm a NetApp follower and work for a NetApp partner, one aspect really bothers me. Leaving the upcoming power consumption debate between you and Darius aside, I have really wondered (since I first got familiar with the NetApp world) why NetApp systems have such a relatively small amount of RAM. For example, the low mid-range FAS3210 system has only 8GB. I'm absolutely sure this is done not in an effort to cut the overall system cost or due to any kind of technical misconception, but, as you wrote in one of the replies to Darius's post, NetApp systems usually achieve impressive IOPS per drive values with a relatively small cache capacity with respect to the capacity of the fileset. It would be extremely interesting for me to get some points which explain why an increase in system RAM would not greatly improve overall system response time and IOPS numbers, and why NetApp has chosen the way of flash cache instead of RAM cache, especially taking into account modern low prices for RAM.

    And one more thing. As you wrote, "Finally, If you're really interested in how ONTAP's CSMP CPU allocation algorithm works, and how that lets us use multiple CPU cores so efficiently, let me know and I'll write up a blog" – yes, it would be really interesting to read such a post, or any related TRs around this topic (if any are publicly available) (:

    Thanks a lot!
    Nikolay

    • April 21, 2013 at 11:38 am | #5

      The decisions around exactly what goes into a NetApp array in terms of hardware are based on a number of factors.

      The first is making sure there are enough resources to hit a particular performance expectation across a variety of workloads, for both current and future versions of ONTAP.

      Another is making sure we fit within certain sets of environmental considerations, including rackspace consumption, power draw, heat generation, noise generation, and also whether there is sufficient power and cooling available after failure events such as fan/PSU failures. More RAM and more CPUs need more cooling and power, which means bigger power supplies, more cooling fans, etc. All of these factor into subsequent costs and physical package sizes in ways that may not be obvious. People outside of the military rarely ask "how long can a system continue to operate in a datacenter running at 60 degrees Celsius when only half the power rails are working?", but that is the kind of thing we design for, and that means being frugal with things like CPU and memory.

      Sometimes simply adding more RAM will result in a significantly better result, especially in workloads with large working sets like SPC-1, but it won't have much, if any, effect in other workloads where CPU tends to be the bottleneck, such as SPEC SFS. The design trade-offs are discussed extensively by experts with decades of experience, and the configurations often change during the design process as a result of internal benchmarking. Deciding exactly what goes into a controller is a balancing act across CPU, NVRAM, cache sizes for metadata, cache sizes for uncommitted data, and a whole bunch of other stuff like cooling capacity and power draw. It's also worth noting that in almost every case in the "real world", the number of spindles dictates the actual performance threshold, not CPU, memory or interconnects. Sometimes I wish the industry would benchmark with what we think customers are likely to buy/deploy instead of theoretical maximums. NetApp is pretty good with that compared to other vendors, but even so, the spindle counts on the benchmarks we publish are usually higher than I see on that class of controller at the customers I talk to.

      To summarise, FAS systems are built to a standard, using only those resources needed to hit that standard with reasonable headroom. Other systems seem to be built with a "let's throw in as much CPU and memory as we can and see what we can pull out of it" approach. As to "why don't we add more memory?", a trite answer might be "because we don't have to", though it's actually a bit more complex than that :-)

      I'll try to tease out the stuff I can disclose around our CSMP (Coarse Symmetric Multi Processing) architecture and put up a blog for you as soon as I can.

      Regards
      John Martin

