For those of you who bother having my site in an RSS feed for tech stuff, I reviewed vFRC over at my new blog.
I’m in the process of transitioning blogs, but while that’s underway I’ll still crosspost relevant technical posts here for those that grab the RSS feed. In the below post, I introduce my latest home lab addition: QNAP’s TS-421.
A brief post that likely falls under the “already knew that” category for most of you, but as I found this setting while poking around in the vSphere Web Client (trying to force myself to learn it), I thought I’d share in the hopes it helps someone along the way.
In my home lab – which recently got an upgrade (blog subject forthcoming) – I’m doing some host-based cache solution testing. At the moment, I’ve got vFlash Read Cache (vFRC) running on my two hosts. I’ll definitely have more to say about vFRC later, but for now my post is about one requirement of vFRC and how you deal with it.
In order to leverage vFRC, your VMs must be at VM Version 10 (ESXi 5.5 or later). To upgrade existing VMs (whether ones you created or OVF deployments), just follow these instructions.
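To make the requirement concrete, here’s a small illustrative helper (the function name and error handling are my own, not part of vSphere) that checks whether a VM’s virtual hardware version string – vSphere formats these as “vmx-NN” – meets the vFRC bar:

```python
# Illustrative only: check that a hardware version string like "vmx-10"
# meets the VM Version 10 requirement for vFlash Read Cache.

VFRC_MIN_VERSION = 10

def vfrc_eligible(hardware_version: str) -> bool:
    """Return True if a 'vmx-NN' hardware version satisfies vFRC's
    VM Version 10 (ESXi 5.5) requirement."""
    prefix, _, number = hardware_version.partition("-")
    if prefix != "vmx" or not number.isdigit():
        raise ValueError(f"unexpected hardware version: {hardware_version!r}")
    return int(number) >= VFRC_MIN_VERSION

# Example: a vmx-9 VM (ESXi 5.1 era) needs an upgrade; vmx-10 is ready.
print(vfrc_eligible("vmx-9"), vfrc_eligible("vmx-10"))  # False True
```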
However, for new VMs that you’ll create, you can remove the need to follow these post-creation steps by setting the default VM compatibility level in your environment. You do this at the Datacenter level, and it looks like this.
After completing these two steps, you’ll automatically create VMs that are eligible for vFRC (and any other features that require Version 10).
(Note: As you know, during the creation of a VM you are asked what VM version you want – you can of course choose whatever you want depending on your needs. The above steps simply default the choice to VM Version 10.)
My colleague, Jason Nash, wrote a really excellent blog post on the recently announced VNX2 line from EMC. I heartily recommend you go read that post first (if you haven’t already) and then come back here. My post is going to have a narrower focus than Jason’s, in part because it’s what I had always intended to do and (now) in part because there’s simply no point in doing a general overview now that Jason nailed his.
What I want to discuss in this post is how the new Block OE (MCx) and the features it presents and enables makes the FAST Suite even more of a game-changer than it already is.
If I’ve had the privilege of presenting/whiteboarding with you around the VNX product line, you know that one of the first things I talk about at length is the FAST Suite. It is, in my opinion, the key differentiator between EMC and everyone else in the mid-tier array market. Product release schedules aside, disks are disks, DAEs are DAEs, and CPUs are CPUs no matter what badge is on the name plate. The difference is software, and the FAST Suite is the absolute best array-based software package I’ve seen – and MCx kicks it up a significant notch.
Before diving into how MCx kicks the FAST Suite into a new gear, let’s be sure we’re straight on what it is. MCx stands for “Multicore Everything” and it is a from-the-ground-up re-write of the EMC Block Operating Environment on the VNX. Here’s a pretty picture that shows what the former OE looked like, and what MCx’s introduction does to it.
Notice here just two things for now. First, note that FAST Cache is now a part of the OE in MCx where it sat above the OE in FLARE, and also that the OE is no longer monolithic. This, as Jason pointed out (you did go read his post, right?), will allow for among other things better scaling.
MCx Features and the FAST Suite
By now, most of us understand what FAST VP is. From a business value perspective, it is the “Lower your TCO” feature of the FAST Suite (FAST Cache being the “Go Faster!” component). FAST VP is EMC’s implementation of Fully Automated Storage Tiering (FAST), which has been around for years now. This feature was created to address the lifecycle/access pattern of data that exists in an organization (and thus on a shared storage platform).
The simple truth that many still miss is that LUNs are not monolithic from the perspective of the data. Some data/blocks/slices within the LUN will be “hot” (accessed frequently/regularly) and some data/blocks/slices will be “cold” (stale, accessed infrequently) and this temperature reading changes over time. FAST VP accounts for this by viewing LUNs not as a whole, but as individual slices making up the whole.
An example will make this plainer. Take a 100GB LUN that contains (for our purposes) a 100GB SQL Database. Without any storage tiering, we must make the determination of what tier of storage to land this LUN on. FAST VP eliminates the risk of aiming low (and killing performance) or aiming high (and killing your TCO).
On the original VNX with FLARE, FAST VP would view our SQL LUN as 100 slices of 1GB each. That’s pretty good (ask any of our customers and they’ll tell you). If we have a three-tier pool (EFD/SAS/NL-SAS), FAST VP can then apply the policy we select to spread those 100 slices between the tiers, and then at defined intervals (8 hours by default), relocate those slices to the appropriate tier (if necessary) based on observed access patterns. This ensures the most valuable data to your business (that is, the most frequently accessed data) lives on the fastest tier of storage available to your LUN, and the least valuable data (that is, the least frequently accessed data) lives on the lowest tier of storage.
On the new VNX with MCx, FAST VP would view this same 100GB LUN as 400 slices at 256MB per slice. We see marketing details run amok all the time (no one hates “Marchitecture” more than me), but this is just basic math. That’s a 4X improvement over the previous FAST VP granularity all because the underlying OE has the ability to track more slices due to the processing power of multiple cores. So now, with MCx’s FAST VP granularity, we are able to hand our customers a 4X improvement in their confidence that their data is sitting on the appropriate tier of storage that they’ve invested in. This is a very, very good thing.
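The granularity arithmetic above is easy to sanity-check yourself:

```python
# The slice math from the text: a 100 GB LUN carved at FLARE's 1 GB
# granularity versus MCx's 256 MB granularity.

LUN_GB = 100
LUN_MB = LUN_GB * 1024

flare_slices = LUN_MB // 1024   # 1 GB slices under FLARE
mcx_slices = LUN_MB // 256      # 256 MB slices under MCx

print(flare_slices, mcx_slices, mcx_slices // flare_slices)  # 100 400 4
```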
FAST Cache may not be the greatest thing since sliced bread, but in the mid-tier storage space at least, it’s pretty darn close. Applications thrive on one thing: the lowest possible transaction response time. The quicker you get me that read or write IO, the quicker I can move on to the next one, and so on. To do this, we want to service as many reads and writes out of our array cache as we can (or if we want to get really fancy, do it in cache at the compute side; but that’s a topic for another day). The problem here is obvious once stated: there is a finite amount of cache (DRAM) that can be placed on an SP (Storage/Service Processor), and it’s dead expensive. So you typically get a very conservative amount in the mid-tier, and then you’re done. This might be, say, 8-32GB of DRAM, depending on the size of the array.
FAST Cache solves this problem by acting as an extension of the array’s DRAM. And, unique in the mid-tier, it does that for both reads and writes. The importance of those italics cannot be overstated. Reads accelerated in cache are wonderful, but writes accelerated in cache? Game-changer.
In FLARE, FAST Cache worked (and works) very, very well. Blocks (64KB in size) are eligible for promotion to the FAST Cache tier (backed by EFDs) upon the 3rd request for that block. Once there, a block remains active in FAST Cache until it ages out (via a Least Recently Used algorithm).
In MCx, FAST Cache introduces a temporary suspension of the “3 hits to promote” rule during the Cache Warming phase. On initial creation of the FAST Cache space, blocks are eligible for promotion immediately (and on the first request) until such time as the FAST Cache utilization reaches 80%. Once there, MCF (Multicore FAST Cache) returns to the “3 hits to promote” rule. This is valuable to the customer because they will see the benefits of FAST Cache immediately. No more waiting for that warming period to complete before you get the benefits of FAST Cache. Enable it, use it. All thanks to the increased power of MCx. Here’s another pretty picture describing this new feature.
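The promotion rule and its warm-up exception can be sketched as a toy decision function (this is my own simplified model of the behavior described above, not EMC’s implementation):

```python
# Toy model of MCx FAST Cache promotion: below 80% utilization (the
# warm-up phase), a block is promoted on its first request; at or above
# 80%, the usual three-hits-to-promote rule applies.

WARMUP_THRESHOLD = 0.80
PROMOTE_HITS = 3

def should_promote(hits_for_block: int, used_slots: int, total_slots: int) -> bool:
    """Decide whether a 64KB block gets promoted to FAST Cache."""
    utilization = used_slots / total_slots
    if utilization < WARMUP_THRESHOLD:
        return hits_for_block >= 1          # warming: promote immediately
    return hits_for_block >= PROMOTE_HITS   # steady state: 3 hits required

# A freshly created FAST Cache promotes on the first hit; a warm one
# waits for the third.
print(should_promote(1, 10, 100), should_promote(1, 90, 100))  # True False
```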
In addition to the warmup functionality, the biggest improvement I see in FAST Cache is the re-ordering of how a Write IO comes into the array. Before I explain the differences, let’s look at another picture.
In FLARE, to commit an incoming (write) IO from a host, we had to first land the IO in the FAST Cache Memory Map, then send it down to Cache. Only after these two steps would an acknowledgment be sent back to the host that the IO had been committed.
In MCx, we eliminate the extra step (and, thus, extra latency) by receiving the host IO directly into MCC (Multicore Cache), and then immediately acknowledging the IO back to the host. This in and of itself may not sound like a huge improvement (there is much more here, and I may detail more in a subsequent post), but when applications thrive on low latency, and an internal process can remove one of two steps in acknowledging an IO from an application/host, it qualifies as a welcome and significant improvement in performance (and, again, scaling).
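A back-of-the-envelope model makes the point. The per-step costs here are entirely made up for illustration; only the structure (two steps before the ack versus one) comes from the description above:

```python
# Hypothetical per-step latencies, purely for illustration.
MEMORY_MAP_US = 50   # FAST Cache memory-map step (FLARE path only)
CACHE_US = 100       # landing the IO in cache

# FLARE: memory map, then cache, then acknowledge the host.
flare_ack_us = MEMORY_MAP_US + CACHE_US

# MCx: IO lands directly in Multicore Cache (MCC), then the ack goes out.
mcx_ack_us = CACHE_US

print(flare_ack_us, mcx_ack_us)  # 150 100
```

Whatever the real numbers are, removing one of two serialized steps on the acknowledgment path lowers write latency for every host IO.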
FAST Cache, FAST VP, and Block Deduplication
Lastly (or finally, depending on your opinion of 1,800+ word posts), MCx introduces to the VNX family the much-anticipated, long-awaited (and much joked about) block deduplication.
While I could spend quite a lot of time discussing the implications of this (and I might later), I want to again restrict my comments on block deduplication to its impact on the FAST Suite.
Block Deduplication in MCx is “fixed block” deduplication. In MCx’s case, the fixed block size is 8KB. Here’s a picture of how MCx will see LUNs from the perspective of deduplication.
In the above case, we have 3 LUNs (the deduplication container is pool-level), each with a given number of 8KB blocks. As a background process, MCx will kick off an analysis of these blocks to detect/determine commonality. Once it does so, it will map those common blocks, determine which block will remain in place, and then create pointers for the places where that common block appears in the other LUNs. Once done, the array will reclaim that space, hand it back to the pool, and our scenario looks like this:
What we immediately notice is space savings, and this is often the first (but hopefully not only) benefit that is mentioned with regard to deduplication.
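The fixed-block idea itself is simple enough to sketch. This is just the general technique (carve into 8KB blocks, hash, keep one copy plus pointers), not the VNX’s actual on-array implementation:

```python
# A minimal sketch of fixed-block deduplication at an 8 KB block size.
import hashlib

BLOCK_SIZE = 8 * 1024  # MCx dedupes at a fixed 8 KB

def dedupe(data: bytes):
    """Split data into 8 KB blocks; return (unique-block store, pointers)."""
    store = {}      # digest -> block bytes (one copy per unique block)
    pointers = []   # per-block digests, standing in for pointer metadata
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        pointers.append(digest)
    return store, pointers

# Four blocks with heavy commonality collapse to two stored copies;
# the reclaimed space is what the array hands back to the pool.
data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
store, pointers = dedupe(data)
print(len(pointers), len(store))  # 4 2
```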
However, some careful thinking begins to expose the significant benefit block deduplication can have for the FAST Suite.
For FAST VP, MCx will slice at 256MB. It does not, however, take into account what blocks make up that slice from the perspective of commonality. Any number of slices may contain common blocks, and based on the temperature of those slices, the common blocks could be spread all over the pool – potentially taking up precious space in your EFD tier as a stowaway. By leveraging block deduplication with FAST VP, we can eliminate that situation entirely. This has the effect of driving an even better TCO for your storage solution.
For FAST Cache, the benefit is of course similar. FAST Cache is precious real estate. If we can help it, we want to avoid having common blocks appear more than once in FAST Cache. And from an intra-pool level, block deduplication in MCx enables us to eliminate that inefficiency and maximize that FAST Cache space.
By leveraging these software features in conjunction with one another – for the appropriate workloads – EMC and its partners can maximize the benefit of the FAST Suite to our customers.
To some, the VNX2 will be dismissed as a “speeds and feeds” upgrade. If that were truly what it is, it would still be welcome. But the introduction of MCx as a replacement for FLARE is much more than that, especially with regard to the performance and efficiency gains to the FAST Suite that it enables. The VNX2 with MCx represents a substantial leap forward for EMC and its partners, and I for one am very excited to share it with our current and future customers as we seek to provide technical solutions to their evolving business requirements.
I changed my Twitter handle today. What used to be @the_hhg, is now @linetracer. I thought I’d throw up a brief blog post as to why.
For the past two months, I’ve been thinking about what it is in life that I do at a basic, fundamental, level. At some training a few months back, I landed on ‘builder’. And I still think that gets very close. But something just felt like it was missing. In the end, I think for a one word answer ‘builder’ is about as good as I’ll do.
Last week, I had the privilege of meeting with a couple from my church who I am called to care for as members of my ‘flock’. This couple is struggling (well) through some really, really hard times and decisions. As I was preparing to meet with them, I wrote down three principles or truths that would, I prayed, help them as they navigate through this latest round of decisions. And as I was preparing to meet with them, the two months of thinking came together.
So if I’m allowed a fuller answer than the one word ‘builder’, I’d say this: I trace lines laid down by others. Sometimes, I lay down my “own” lines (I don’t think for a second they’re original to me). But with those lines I always do the same thing: I seek to make connections that add benefit to others and myself (hopefully in that order).
So for now, that’s my story of who I am in all areas of my life (faith, family, career, hobbies, etc.). And now my Twitter handle reflects it.
For those of you who have the displeasure of knowing me in non-professional settings (or professional, for that matter, but this is beside the current point), you’ll know I like to wear fun/clever/fictional/superhero type t-shirts. It’s kind of my thing (Currently Wearing: This.). I spend far too much money on them, and am a complete snob when it comes to the type I’ll wear (think: soft, tri-blends, etc.).
For some who have known me for a long time, this is somewhat unexpected. I come across (rightly, in most cases) as a pretty straight-laced, logical, non-whimsical, no-nonsense kind of guy. But get to know me, and you’ll find I’ve got a part of my personality that I’ve cultivated that is purposefully different than my more visible parts. And one of the pieces of that part of my personality is a love for superheroes and comics.
This is not a part of my conscious personality that has much age to it – about 3-4 years, in fact. Or whenever Iron Man was released in theaters (IMDB tells me that was 2008). And it’s a part that has grown slowly, but is now a regular and enjoyable part of my pleasure time (which is rare, and chiefly reading or watching movies/shows after the family is asleep).
I get my comics (known as a Pull List if you reserve them) from a famous, and local, comic shop here in Charlotte. Heroes Aren’t Hard to Find is a great shop, run by a great group of folks. My Pull List has grown over the last year (too much, in fact), but I thought I’d list it here. I was going to rank them, but I don’t think that’ll work. I pull each of these for specific reasons, and those reasons are not in competition with the others, so ranking them doesn’t seem to make a whole lot of sense. And all it takes is one good arc, or really one amazing issue, and I’d need to re-rank. But make no mistake: I do have my favorites. Just ask me. I’m not shy.
Current Pull List
Lastly, I’m currently reading The Sixth Gun in catch-up mode. I’m way behind (just finished Issue 6 and there are 30+), but I found out about the series from Jason Aaron and grabbed Volume 1. It is incredibly good. I think I’m going to write a blog post on it at some point – really creative work by Bunn, and Brian Hurtt’s art is incredible.
Now…what should I read tonight…
For Christmas, my brother-in-law bought me a 3-day Advanced Pass to HeroesCon here in Charlotte. I had a really good time, and thought I’d share a few pictures and comments.
On Friday, I only had about an hour to get registered and get the lay of the land, so my first stop was the Artist and Vendor Hall.
I had barely turned the corner when I ran into a couple of Empire employees. This is when I knew I was in the right place.
The first vendor I stopped by was Ian Leino’s booth. Ian is a T-Shirt Artist out of Asheville, NC, and you can visit his site here. Ian had a bunch of great designs, and I bought the above. He not only has great designs, but also uses American Apparel t-shirts. There’s nothing worse than a well-designed print sitting on a scratchy, thick shirt. Ian gets this, and does great work.
My last stop of the first day was at Will Pigg’s booth. Will, amongst other things, does Paper Craft. Being a Batman fanatic, this was too good to pass up. Will had lots of other great things, but since I’d been there about 45 minutes and already made two purchases, I stopped while I was ‘ahead’.
On Saturday, I was able to attend the Marvel Writer’s Panel. From right to left are Jason Aaron, Matt Fraction, Kelly Sue DeConnick, and Jonathan Hickman. The panel was excellent, and as I’m currently reading two of Jason Aaron’s titles (Thor and Thanos Rising), as well as having just finished reading two of Matt Fraction’s (Thor and AvX), I was really looking forward to it. It was great to hear their thoughts, plans, and writing/work styles.
Cosplay! Volstagg was epic.
On Saturday night, I was able to go to the Art Auction. Didn’t buy anything, but there were a couple of cool pieces that I liked. Above are two.
Today was a day filled with walking around the Convention Hall, talking with artists and writers (I talked to Jason Aaron twice), and buying a few things. One of the things I bought was for my son. He’s really into Silver Surfer (in that there’s a poster of him in my office, and he loves it), so I commissioned Buddy Prince to draw a likeness of my son as Silver Surfer as a keepsake for HeroesCon 2013. I think it’s hilarious and well done, not least the silver diaper!
See you next year, HeroesCon!