How Multicore Everything (MCx) Makes FAST Suite Even Better

My colleague, Jason Nash, wrote a really excellent blog post on the recently announced VNX2 line from EMC. I heartily recommend you go read that post first (if you haven’t already) and then come back here. My post is going to have a narrower focus than Jason’s, in part because it’s what I had always intended to do and (now) in part because there’s simply no point in doing a general overview now that Jason nailed his.

What I want to discuss in this post is how the new Block OE (MCx) and the features it presents and enables make the FAST Suite even more of a game-changer than it already is.

If I’ve had the privilege of presenting/whiteboarding with you around the VNX product line, you know that one of the first things I talk about at length is the FAST Suite. It is, in my opinion, the key differentiator between EMC and everyone else in the mid-tier array market. Product release schedules aside, disks are disks, DAEs are DAEs, and CPUs are CPUs, no matter what badge is on the name plate. The difference is software. The FAST Suite is the absolute best array-based software package I’ve seen, and MCx kicks it up a significant notch.

MC What?

Before diving into how MCx kicks the FAST Suite into a new gear, let’s be sure we’re straight on what it is. MCx stands for “Multicore Everything” and it is a from-the-ground-up re-write of the EMC Block Operating Environment on the VNX. Here’s a pretty picture that shows what the former OE looked like, and what MCx’s introduction does to it.

FLARE vs. MCx

Notice just two things for now. First, FAST Cache is now a part of the OE in MCx, where it sat above the OE in FLARE. Second, the OE is no longer monolithic. This, as Jason pointed out (you did go read his post, right?), will allow for, among other things, better scaling.

MCx Features and the FAST Suite

FAST VP

By now, most of us understand what FAST VP is. From a business value perspective, it is the “Lower your TCO” feature of the FAST Suite (FAST Cache being the “Go Faster!” component). FAST VP is EMC’s implementation of Fully Automated Storage Tiering (FAST), which has been around for years now. This feature was created to address the lifecycle/access pattern of data that exists in an organization (and thus on a shared storage platform).

The simple truth that many still miss is that LUNs are not monolithic from the perspective of the data. Some data/blocks/slices within the LUN will be “hot” (accessed frequently/regularly) and some data/blocks/slices will be “cold” (stale, accessed infrequently) and this temperature reading changes over time. FAST VP accounts for this by viewing LUNs not as a whole, but as individual slices making up the whole.

An example will make this plainer. Take a 100GB LUN that contains (for our purposes) a 100GB SQL database. Without any storage tiering, we must decide up front which tier of storage to land this LUN on. FAST VP eliminates the risk of aiming low (and killing performance) or aiming high (and killing your TCO).

On the original VNX with FLARE, FAST VP would view our SQL LUN as 100 slices of 1GB each. That’s pretty good (ask any of our customers and they’ll tell you). If we have a three-tier pool (EFD/SAS/NL-SAS), FAST VP can apply the policy we select to spread those 100 slices across the tiers, and then at defined intervals (8 hours by default) relocate those slices to the appropriate tier (if necessary) based on observed access patterns. This ensures the most valuable data to your business (that is, the most frequently accessed data) lives on the fastest tier of storage available to your LUN, and the least valuable data (the least frequently accessed) lives on the lowest tier of storage.

On the new VNX with MCx, FAST VP views this same 100GB LUN as 400 slices of 256MB each. We see marketing details run amok all the time (no one hates “Marchitecture” more than me), but this is just basic math: 256MB slices give 4X finer granularity than the previous 1GB slices, all because the underlying OE can track four times as many slices thanks to the processing power of multiple cores. So now, with MCx’s FAST VP granularity, we can hand our customers far greater confidence that each slice of their data is sitting on the appropriate tier of storage they’ve invested in. This is a very, very good thing.
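For those who like to see the math run, here’s a minimal sketch in Python. It is illustrative only, not EMC’s implementation: the 1GB and 256MB slice sizes and the three-tier pool come straight from the discussion above, while the function names, tier capacities, and the way I count slice “temperature” are all hypothetical.

```python
GB_MB = 1024  # work in MB for convenience

def slice_count(lun_size_mb, slice_size_mb):
    """How many slices FAST VP tracks for a LUN at a given granularity."""
    return lun_size_mb // slice_size_mb

print(slice_count(100 * GB_MB, 1 * GB_MB))  # FLARE, 1GB slices   -> 100
print(slice_count(100 * GB_MB, 256))        # MCx, 256MB slices   -> 400 (4X)

def relocate(slice_temps, tiers):
    """Greedy relocation pass: hottest slices land on the fastest tier.

    slice_temps: {slice_id: access count observed since the last pass}
    tiers: [(name, capacity_in_slices)] ordered fastest-first,
           e.g. [("EFD", 40), ("SAS", 160), ("NL-SAS", 200)]
    """
    hottest_first = sorted(slice_temps, key=slice_temps.get, reverse=True)
    placement, start = {}, 0
    for name, capacity in tiers:
        for slice_id in hottest_first[start:start + capacity]:
            placement[slice_id] = name
        start += capacity
    return placement
```

A pass like this, run at each relocation interval (8 hours by default), is the general shape of what keeps hot slices on EFD and cold slices on NL-SAS; the real policy engine is, of course, considerably more sophisticated.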

FAST Cache

FAST Cache may not be the greatest thing since sliced bread, but in the mid-tier storage space at least, it’s pretty darn close. Applications thrive on one thing: the lowest possible transaction response time. The quicker you get me that read or write IO, the quicker I can move on to the next one, and so on. To do this, we want to service as many reads and writes out of our array cache as we can (or, if we want to get really fancy, do it in cache on the compute side; but that’s a topic for another day). The problem here is obvious once stated: there is a finite amount of cache (DRAM) that can be placed on an SP (Storage Processor), and it’s dead expensive. So you typically get a very conservative amount in the mid-tier, and then you’re done. This might be, say, 8-32GB of DRAM (or higher) depending on the size of the array.

FAST Cache solves this problem by acting as an extension of the array’s DRAM. And, uniquely in the mid-tier, it does that for both reads and writes. The importance of that distinction cannot be overstated. Reads accelerated in cache are wonderful, but writes accelerated in cache? Game-changer.

In FLARE, FAST Cache worked (and works) very, very well. Blocks (64KB in size) are eligible for promotion to the FAST Cache tier (backed by EFDs) upon the third request for that block. Once promoted, a block stays active in FAST Cache until it ages out (the eviction algorithm is Least Recently Used).

In MCx, FAST Cache introduces a temporary suspension of the “3 hits to promote” rule during the cache-warming phase. On initial creation of the FAST Cache space, blocks are eligible for promotion immediately (on the first request) until FAST Cache utilization reaches 80%. At that point, MCF (Multicore FAST Cache) returns to the “3 hits to promote” rule. This is valuable because customers see the benefits of FAST Cache immediately. No more waiting for the warming period to complete before you get the benefits of FAST Cache. Enable it, use it. All thanks to the increased power of MCx. Here’s another pretty picture describing this new feature.

FAST Cache Warming
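If you prefer code to pictures, here’s a small Python sketch of the promotion policy just described: promote on first touch while utilization is below 80%, then fall back to the 3-hits rule, with LRU eviction. Only the 64KB block granularity, the 80% threshold, and the hit counts come from the behavior described above; the class, its names, and its data structures are my own invention for illustration.

```python
from collections import OrderedDict

class FastCacheSketch:
    """Toy model of MCx FAST Cache promotion (64KB blocks assumed)."""

    def __init__(self, capacity_blocks, warm_threshold=0.8, promote_hits=3):
        self.capacity = capacity_blocks
        self.warm_threshold = warm_threshold
        self.promote_hits = promote_hits
        self.cache = OrderedDict()  # block_id -> True, most recent at the end
        self.pending = {}           # block_id -> requests seen while uncached

    def _warming(self):
        # Warming phase lasts until the cache is 80% utilized
        return len(self.cache) / self.capacity < self.warm_threshold

    def access(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # refresh LRU recency
            return "hit"
        self.pending[block_id] = self.pending.get(block_id, 0) + 1
        if self._warming() or self.pending[block_id] >= self.promote_hits:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[block_id] = True
            self.pending.pop(block_id, None)
            return "promoted"
        return "miss"

# While warming, the very first touch promotes:
cache = FastCacheSketch(capacity_blocks=1000)
print(cache.access("block-42"))  # "promoted" on request #1
```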

In addition to the warmup functionality, the biggest improvement I see in FAST Cache is the re-ordering of how a Write IO comes into the array. Before I explain the differences, let’s look at another picture.

FLARE vs. MCx - Host Write IO

In FLARE, to commit an incoming write IO from a host, the array first had to land the IO in the FAST Cache memory map, then send it down to cache. Only after these two steps would an acknowledgment be sent back to the host that the IO had been committed.

In MCx, we eliminate the extra step (and, thus, the extra latency) by receiving the host IO directly into MCC (Multicore Cache) and immediately acknowledging the IO back to the host. This in and of itself may not sound like a huge improvement (there is much more here, and I may detail it in a subsequent post), but when applications thrive on low latency, and an internal change removes one of the two steps between receiving an IO and acknowledging it, that qualifies as a welcome and significant improvement in performance (and, again, scaling).
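A pseudo-code sketch makes the difference in the acknowledgment path plain. The step names loosely mirror the diagram above; the sleep-based step costs are invented placeholders, there purely to show that removing a step removes its latency from the host’s round trip.

```python
import time

# Hypothetical per-step costs, purely illustrative:
def land_in_fast_cache_memory_map(io):
    time.sleep(0.0002)  # pretend: update FAST Cache tracking structures

def land_in_dram_cache(io):
    time.sleep(0.0003)  # pretend: commit the write to SP cache (MCC)

def ack():
    return "committed"

def flare_write(io):
    land_in_fast_cache_memory_map(io)  # step 1
    land_in_dram_cache(io)             # step 2
    return ack()                       # host waits for BOTH steps

def mcx_write(io):
    land_in_dram_cache(io)             # single step, straight into MCC
    return ack()                       # acknowledged immediately; FAST Cache
                                       # tracking happens off the ack path
```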

FAST Cache, FAST VP, and Block Deduplication

Lastly (or “finally!”, depending on your opinion of 1,800+ word posts), MCx introduces to the VNX family the much-anticipated, long-awaited (and much-joked-about) block deduplication.

While I could spend quite a lot of time discussing the implications of this (and I might later), I want to again restrict my comments on block deduplication to its impact on the FAST Suite.

Block Deduplication in MCx is “fixed block” deduplication. In MCx’s case, the fixed block size is 8KB. Here’s a picture of how MCx will see LUNs from the perspective of deduplication.

Block Dedupe - Before

In the above case, we have three LUNs (the deduplication container is pool-level), each with a given number of 8KB blocks. As a background process, MCx kicks off an analysis of these blocks to detect commonality. Once it does, it maps those common blocks, determines which copy will remain in place, and creates pointers for the places that common block appears in the other LUNs. Once done, the array reclaims that space, hands it back to the pool, and our scenario looks like this:

Block Dedupe - After

What we immediately notice is space savings, and this is often the first (but hopefully not only) benefit that is mentioned with regard to deduplication.
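To make those mechanics concrete, here’s a minimal sketch of fixed-block deduplication at the 8KB granularity described above. Hashing with SHA-256, the dictionary-based block store, and the toy LUN contents are all my own illustrative choices, not the array’s actual data structures or hash algorithm.

```python
import hashlib

BLOCK_SIZE = 8 * 1024  # 8KB fixed blocks, per the post

def dedupe(luns):
    """luns: {lun_name: bytes}. Returns (block_store, pointer_maps)."""
    block_store = {}    # digest -> the single retained copy of the block
    pointer_maps = {}   # lun_name -> list of digests (one per 8KB block)
    for name, data in luns.items():
        digests = []
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            block_store.setdefault(digest, block)  # first copy stays in place
            digests.append(digest)                 # duplicates become pointers
        pointer_maps[name] = digests
    return block_store, pointer_maps

# Three LUNs sharing a common block: the pool keeps one physical copy.
common = b"A" * BLOCK_SIZE
luns = {"LUN1": common + b"B" * BLOCK_SIZE,
        "LUN2": common + b"C" * BLOCK_SIZE,
        "LUN3": common}
store, maps = dedupe(luns)
print(len(store))  # 3 unique blocks retained instead of 5 -> space reclaimed
```

Five logical blocks across the three LUNs end up retained as three physical copies; the two duplicates become pointers, and their space goes back to the pool.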

However, some careful thinking begins to expose the significant benefit block deduplication can have for the FAST Suite.

For FAST VP, MCx slices at 256MB. It does not, however, take any account whatsoever of which blocks make up a slice from the perspective of commonality. Any number of slices may contain common blocks, and based on the temperature of those slices, the same common block could be spread all over the pool, potentially taking up precious space in your EFD tier as a stowaway. By leveraging block deduplication with FAST VP, we can eliminate that situation entirely. This has the effect of driving an even better TCO for your storage solution.

For FAST Cache, the benefit is of course similar. FAST Cache is precious real estate. If we can help it, we want to avoid having common blocks appear more than once in FAST Cache. Within a pool, block deduplication in MCx enables us to eliminate that inefficiency and maximize that FAST Cache space.

Conclusion

By leveraging these software features in conjunction with one another – for the appropriate workloads – EMC and its partners can maximize the benefit of the FAST Suite to our customers.

To some, the VNX2 will be dismissed as a “speeds and feeds” upgrade. If that were all it was, it would still be welcome. But the introduction of MCx as a replacement for FLARE is much more than that, especially with regard to the performance and efficiency gains it enables for the FAST Suite. The VNX2 with MCx represents a substantial leap forward for EMC and its partners, and I for one am very excited to share it with our current and future customers as we seek to provide technical solutions to their evolving business requirements.
