For those of you who bother having my site in an RSS feed for tech stuff, I reviewed vFRC over at my new blog.
I’m in the process of transitioning blogs, but while that’s underway I’ll still crosspost relevant technical posts here for those that grab the RSS feed. In the below post, I introduce my latest home lab addition: QNAP’s TS-421.
A brief post that likely falls under the “already knew that” category for most of you, but as I found this setting while poking around in the vSphere Web Client (trying to force myself to learn it), I thought I’d share in the hopes it helps someone along the way.
In my home lab – which recently got an upgrade (blog subject forthcoming) – I’m currently doing some host-based cache solution testing. Currently, I’ve got vFlash Read Cache (vFRC) running on my two hosts. I’ll definitely have more to say about vFRC later, but for now my post is about one requirement of vFRC and how you deal with it.
In order to leverage vFRC, your VMs must be at VM Version 10 (ESXi 5.5 or later). To upgrade existing VMs (whether ones you created, or OVF deployments), just follow these instructions.
However, for new VMs that you’ll create, you can remove the need to follow these post-creation steps by setting the default VM compatibility level in your environment. You do this at the Datacenter level, and it looks like this.
After completing these two steps, you’ll automatically create VMs that are eligible for vFRC (and any other features that require Version 10).
(Note: As you’ll know, during the creation of a VM you are asked what VM version you want the VM at – you can of course choose whatever you want depending on your needs. The above steps simply default the choice to VM Version 10.)
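To make the Version 10 requirement concrete, here’s a small sketch (hypothetical helper names, not a vSphere API) of the eligibility check, using the "vmx-NN" string format vSphere uses to report hardware versions:

```python
# Hypothetical sketch: checking whether a VM's hardware version meets the
# vFRC requirement (Version 10, i.e. "vmx-10", introduced with ESXi 5.5).

VFRC_MIN_VERSION = 10

def hardware_version(vmx_string: str) -> int:
    """Parse the numeric version out of a 'vmx-NN' string."""
    prefix, _, number = vmx_string.partition("-")
    if prefix != "vmx":
        raise ValueError(f"unexpected hardware version string: {vmx_string!r}")
    return int(number)

def vfrc_eligible(vmx_string: str) -> bool:
    """A VM can use vFlash Read Cache only at Version 10 or later."""
    return hardware_version(vmx_string) >= VFRC_MIN_VERSION

print(vfrc_eligible("vmx-9"))   # older VM: needs an upgrade first
print(vfrc_eligible("vmx-10"))  # eligible for vFRC
```

Setting the Datacenter-level default simply means every new VM comes out of the wizard already at "vmx-10", so this check passes without the post-creation upgrade.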
My colleague, Jason Nash, wrote a really excellent blog post on the recently announced VNX2 line from EMC. I heartily recommend you go read that post first (if you haven’t already) and then come back here. My post is going to have a narrower focus than Jason’s, in part because it’s what I had always intended to do and (now) in part because there’s simply no point in doing a general overview now that Jason nailed his.
What I want to discuss in this post is how the new Block OE (MCx) and the features it presents and enables make the FAST Suite even more of a game-changer than it already is.
If I’ve had the privilege of presenting/whiteboarding with you around the VNX product line, you know that one of the first things I talk about at length is the FAST Suite. It is, in my opinion, the key differentiator between EMC and everyone else in the mid-tier array market. Product release schedules aside, disks are disks, DAEs are DAEs, CPUs are CPUs no matter what badge is on the nameplate. The difference is software, and the FAST Suite is the absolute best array-based software package I’ve seen, and MCx kicks it up a significant notch.
Before diving into how MCx kicks the FAST Suite into a new gear, let’s be sure we’re straight on what it is. MCx stands for “Multicore Everything” and it is a from-the-ground-up re-write of the EMC Block Operating Environment on the VNX. Here’s a pretty picture that shows what the former OE looked like, and what MCx’s introduction does to it.
Notice just two things here for now. First, FAST Cache is now a part of the OE in MCx, where it sat above the OE in FLARE. Second, the OE is no longer monolithic. This, as Jason pointed out (you did go read his post, right?), will allow for, among other things, better scaling.
MCx Features and the FAST Suite
By now, most of us understand what FAST VP is. From a business value perspective, it is the “Lower your TCO” feature of the FAST Suite (FAST Cache being the “Go Faster!” component). FAST VP is EMC’s implementation of Fully Automated Storage Tiering (FAST), which has been around for years now. This feature was created to address the lifecycle/access pattern of data that exists in an organization (and thus on a shared storage platform).
The simple truth that many still miss is that LUNs are not monolithic from the perspective of the data. Some data/blocks/slices within the LUN will be “hot” (accessed frequently/regularly) and some data/blocks/slices will be “cold” (stale, accessed infrequently) and this temperature reading changes over time. FAST VP accounts for this by viewing LUNs not as a whole, but as individual slices making up the whole.
An example will make this plainer. Take a 100GB LUN that contains (for our purposes) a 100GB SQL Database. Without any storage tiering, we must make the determination of what tier of storage to land this LUN on. FAST VP eliminates the risk of aiming low (and killing performance) or aiming high (and killing your TCO).
On the original VNX with FLARE, FAST VP would view our SQL LUN as 100 slices of 1GB each. That’s pretty good (ask any of our customers and they’ll tell you). If we have a three-tier pool (EFD/SAS/NL-SAS), FAST VP can then apply the policy we select to spread those 100 slices between the tiers, and then at defined intervals (8 hours by default), relocate those slices to the appropriate tier (if necessary) based on observed access patterns. This ensures the most valuable data to your business (that is, the most frequently accessed data) lives on the fastest tier of storage available to your LUN, and the least valuable data (that is, the least frequently accessed data) lives on the lowest tier of storage.
On the new VNX with MCx, FAST VP would view this same 100GB LUN as 400 slices at 256MB per slice. We see marketing details run amok all the time (no one hates “Marchitecture” more than me), but this is just basic math. That’s a 4X improvement over the previous FAST VP granularity all because the underlying OE has the ability to track more slices due to the processing power of multiple cores. So now, with MCx’s FAST VP granularity, we are able to hand our customers a 4X improvement in their confidence that their data is sitting on the appropriate tier of storage that they’ve invested in. This is a very, very good thing.
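The “basic math” above is easy to sanity-check. A quick sketch of the slice-count arithmetic for the 100GB LUN from the example:

```python
# Back-of-the-envelope math from the post: how many slices FAST VP tracks
# for the same LUN under FLARE (1GB slices) vs MCx (256MB slices).

GB = 1024  # work in MB for clarity

def slice_count(lun_size_mb: int, slice_size_mb: int) -> int:
    return lun_size_mb // slice_size_mb

lun_mb = 100 * GB                      # the 100GB SQL LUN from the example
flare = slice_count(lun_mb, 1 * GB)    # FLARE: 1GB slices
mcx = slice_count(lun_mb, 256)         # MCx: 256MB slices

print(flare, mcx, mcx // flare)        # 100 slices, 400 slices, 4X finer
```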
FAST Cache may not be the greatest thing since sliced bread, but in the mid-tier storage space at least, it’s pretty darn close. Applications thrive on one thing: lowest possible transaction response time. The quicker you get me that read or write IO, the quicker I can move on to the next one, and so on. To do this, we want to service as many reads and writes out of our array cache as we can (or if we want to get really fancy, do it in cache at the compute side; but that’s a topic for another day). The problem here is obvious once stated: there is a finite amount of cache (DRAM) that can be placed on an SP (Storage/Service Processor), and it’s dead expensive. So you typically get a very conservative amount in the mid-tier, and then you’re done. This might be, say, 8-32GB of DRAM (or higher) depending on the size of the array.
FAST Cache solves this problem by acting as an extension of the array’s DRAM. And, unique in the mid-tier, it does that for both reads and writes. The importance of those italics cannot be overstated. Reads accelerated in cache are wonderful, but writes accelerated in cache? Game-changer.
In FLARE, FAST Cache worked (and works) very, very well. Blocks (64KB in size) are eligible for promotion to the FAST Cache tier (backed by EFDs) upon the third request for that block. Once promoted, a block remains active in FAST Cache until it ages out (Least Recently Used is the algorithm).
In MCx, FAST Cache introduces a temporary suspension of the “3 hits to promote” rule during the Cache Warming phase. On initial creation of the FAST Cache space, blocks are eligible for promotion immediately (and on the first request) until such time as the FAST Cache utilization reaches 80%. Once there, MCF (Multicore FAST Cache) returns to the “3 hits to promote” rule. This is valuable to the customer because they will see the benefits of FAST Cache immediately. No more waiting for that warming period to complete before you get the benefits of FAST Cache. Enable it, use it. All thanks to the increased power of MCx. Here’s another pretty picture describing this new feature.
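To illustrate the rule change (this is my own toy model, not EMC’s implementation – the class, thresholds, and structure here are purely illustrative), a block is promoted on its first touch while the cache is warming, and only on its third touch afterward:

```python
# Illustrative sketch of the MCx promotion rule: promote on first request
# while cache utilization is below 80% (the warming phase), then fall back
# to the classic "3 hits to promote" rule.

from collections import Counter

WARMING_THRESHOLD = 0.80
HITS_TO_PROMOTE = 3

class FastCacheSketch:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = set()       # block IDs currently promoted
        self.hits = Counter()    # access counts for not-yet-promoted blocks

    @property
    def warming(self) -> bool:
        return len(self.cache) / self.capacity < WARMING_THRESHOLD

    def access(self, block_id: int) -> bool:
        """Record an access; return True if the block is (now) in cache."""
        if block_id in self.cache:
            return True
        self.hits[block_id] += 1
        if self.warming or self.hits[block_id] >= HITS_TO_PROMOTE:
            if len(self.cache) < self.capacity:
                self.cache.add(block_id)
                return True
        return False

fc = FastCacheSketch(capacity_blocks=10)
print(fc.access(1))   # warming phase: promoted on the very first touch
```

Once 8 of the 10 block slots are occupied (80%), the same `access` call needs three hits before a new block lands in cache – exactly the behavior change described above.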
In addition to the warmup functionality, the biggest improvement I see in FAST Cache is the re-ordering of how a Write IO comes into the array. Before I explain the differences, let’s look at another picture.
In FLARE, to commit an incoming (write) IO from a host, we had to first land the IO in the FAST Cache Memory Map, then send it down to Cache. Only after these two steps would an acknowledgment be sent back to the host that the IO had been committed.
In MCx, we eliminate the extra step (and, thus, extra latency) by receiving the host IO directly into MCC (Multicore Cache), and then immediately acknowledging the IO back to the host. This in and of itself may not sound like a huge improvement (there is much more here, and I may detail more in a subsequent post), but when applications thrive on low latency, and an internal process can remove one of two steps in acknowledging an IO from an application/host, it qualifies as a welcome and significant improvement in performance (and, again, scaling).
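A trivial latency model makes the point (the numbers are made up; only the step count matters):

```python
# Rough model of the two write-acknowledgment paths described above.
# Latency figures are illustrative only; the takeaway is that removing
# a step from the path removes that step's latency from every write ack.

def flare_write_ack(map_latency_us: float, cache_latency_us: float) -> float:
    # FLARE: land the IO in the FAST Cache Memory Map, then in Cache,
    # and only then acknowledge back to the host.
    return map_latency_us + cache_latency_us

def mcx_write_ack(cache_latency_us: float) -> float:
    # MCx: the IO goes straight into Multicore Cache (MCC) and is
    # acknowledged immediately.
    return cache_latency_us

print(flare_write_ack(50.0, 100.0))  # two steps before the ack
print(mcx_write_ack(100.0))          # one step before the ack
```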
FAST Cache, FAST VP, and Block Deduplication
Lastly (or “finally!”, depending on your opinion of 1800+ word posts), MCx introduces to the VNX family the much-anticipated, long-awaited (and much joked about) block deduplication.
While I could spend quite a lot of time discussing the implications of this (and I might later), I want to again restrict my comments on block deduplication to its impact on the FAST Suite.
Block Deduplication in MCx is “fixed block” deduplication. In MCx’s case, the fixed block size is 8KB. Here’s a picture of how MCx will see LUNs from the perspective of deduplication.
In the above case, we have 3 LUNs (the deduplication container is pool-level), each with a given number of 8KB blocks. As a background process, MCx will kick off an analysis of these blocks to detect commonality. Once it does so, it will map those common blocks, determine which block will remain in place, and then create pointers for the places where that common block appears in the other LUNs. Once done, the array will reclaim that space, hand it back to the pool, and our scenario looks like this:
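The mechanics of fixed-block deduplication are easy to sketch. Here’s a toy version (my own illustration, not the MCx implementation, and with tiny 4-byte blocks standing in for MCx’s 8KB): carve the data into fixed-size blocks, keep one copy of each unique block, and replace duplicates with pointers into a shared store:

```python
# Toy fixed-block deduplication: one copy of each unique block is kept;
# every other occurrence becomes a pointer into the shared store.

BLOCK_SIZE = 4  # stand-in for MCx's 8KB fixed block

def dedupe(data: bytes):
    store = {}        # unique block -> index in the shared store
    pointers = []     # per-block pointers (indices into the store)
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        if block not in store:
            store[block] = len(store)    # first sighting: keep the block
        pointers.append(store[block])    # duplicates become pointers
    return store, pointers

store, pointers = dedupe(b"AAAABBBBAAAACCCC")
print(len(store), pointers)   # 3 unique blocks; pointers [0, 1, 0, 2]
```

Four blocks of data reduce to three stored blocks plus a pointer map – the reclaimed block is the space the array hands back to the pool.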
What we immediately notice is space savings, and this is often the first (but hopefully not only) benefit that is mentioned with regard to deduplication.
However, some careful thinking begins to expose the significant benefit block deduplication can have for the FAST Suite.
For FAST VP, MCx will slice at 256MB. It does not, however, take any account whatsoever of which blocks make up that slice from the perspective of commonality. Any number of slices may contain common blocks, and based on the temperature of those slices, common blocks could be spread all over the pool – potentially taking up precious space in your EFD tier as stowaways. By leveraging block deduplication with FAST VP, we can eliminate that situation entirely. This has the effect of driving an even better TCO for your storage solution.
For FAST Cache, the benefit is of course similar. FAST Cache is precious real estate. If we can help it, we want to avoid having common blocks appear more than once in FAST Cache. And from an intra-pool level, block deduplication in MCx enables us to eliminate that inefficiency and maximize that FAST Cache space.
By leveraging these software features in conjunction with one another – for the appropriate workloads – EMC and its partners can maximize the benefit of the FAST Suite to our customers.
To some, the VNX2 will be dismissed as a “speeds and feeds” upgrade. If that were truly what it is, it would still be welcome. But the introduction of MCx as a replacement for FLARE is much more than that, especially with regard to the performance and efficiency gains to the FAST Suite that it enables. The VNX2 with MCx represents a substantial leap forward for EMC and its partners, and I for one am very excited to share it with our current and future customers as we seek to provide technical solutions to their evolving business requirements.
I changed my Twitter handle today. What used to be @the_hhg, is now @linetracer. I thought I’d throw up a brief blog post as to why.
For the past two months, I’ve been thinking about what it is in life that I do at a basic, fundamental, level. At some training a few months back, I landed on ‘builder’. And I still think that gets very close. But something just felt like it was missing. In the end, I think for a one word answer ‘builder’ is about as good as I’ll do.
Last week, I had the privilege of meeting with a couple from my church who I am called to care for as members of my ‘flock’. This couple is struggling (well) through some really, really hard times and decisions. As I was preparing to meet with them, I wrote down three principles or truths that would, I prayed, help them as they navigate through this latest round of decisions. And as I was preparing to meet with them, the two months of thinking came together.
So if I’m allowed a fuller answer than the one word ‘builder’, I’d say this: I trace lines laid down by others. Sometimes, I lay down my “own” lines (I don’t think for a second they’re original to me). But with those lines I always do the same thing: I seek to make connections that add benefit to others and myself (hopefully in that order).
So for now, that’s my story of who I am in all areas of my life (faith, family, career, hobbies, etc.). And now my Twitter handle reflects it.
For those of you who have the displeasure of knowing me in non-professional settings (or professional, for that matter, but this is beside the current point), you’ll know I like to wear fun/clever/fictional/superhero type t-shirts. It’s kind of my thing (Currently Wearing: This.). I spend far too much money on them, and am a complete snob when it comes to the type I’ll wear (think: soft, tri-blends, etc.).
For some who have known me for a long time, this is somewhat unexpected. I come across (rightly, in most cases) as a pretty straight-laced, logical, non-whimsical, no-nonsense kind of guy. But get to know me, and you’ll find I’ve got a part of my personality that I’ve cultivated that is purposefully different than my more visible parts. And one of the pieces of that part of my personality is a love for superheroes and comics.
This is not a part of my conscious personality that has much age to it – about 3-4 years in fact. Or whenever Iron Man released in theaters (IMDb tells me that was 2008). And it’s a part that has grown slowly, but is now a regular and enjoyable part of my pleasure time (which is rare, and chiefly reading or watching movies/shows after the family is asleep).
I get my comics (known as a Pull List if you reserve them) from a famous, and local, comic shop here in Charlotte. Heroes Aren’t Hard to Find is a great shop, run by a great group of folks. My Pull List has grown over the last year (too much, in fact), but I thought I’d list it here. I was going to rank them, but I don’t think that’ll work. I pull each of these for specific reasons, and those reasons are not in competition with the others, so ranking them doesn’t seem to make a whole lot of sense. And all it takes is one good arc, or really one amazing issue, and I’d need to re-rank. But make no mistake: I do have my favorites. Just ask me. I’m not shy.
Current Pull List
Lastly, I’m currently reading The Sixth Gun in catch-up mode. I’m way behind (just finished Issue 6 and there are 30+), but I found out about the series from Jason Aaron and grabbed Volume 1. It is incredibly good. I think I’m going to write a blog post on it at some point – really creative work by Bunn, and Brian Hurtt’s art is incredible.
Now…what should I read tonight…
For Christmas, my brother-in-law bought me a 3-day Advanced Pass to HeroesCon here in Charlotte. I had a really good time, and thought I’d share a few pictures and comments.
On Friday, I only had about an hour to get registered and get the lay of the land, so my first stop was the Artist and Vendor Hall.
I had barely turned the corner when I ran into a couple of Empire employees. This is when I knew I was in the right place.
The first vendor I stopped by was Ian Leino’s booth. Ian is a T-Shirt Artist out of Asheville, NC and you can visit his site here. Ian had a bunch of great designs, and I bought the above. Ian not only has great designs, but also uses American Apparel t-shirts. There’s nothing worse than a well-designed print sitting on a scratchy, thick shirt. Ian gets this, and does great work.
My last stop of the first day was at Will Pigg’s booth. Will, amongst other things, does Paper Craft. Being a Batman fanatic, this was too good to pass up. Will had lots of other great things, but since I’d been there about 45 minutes and already made two purchases, I stopped while I was ‘ahead’.
On Saturday, I was able to attend the Marvel Writer’s Panel. From right to left are Jason Aaron, Matt Fraction, Kelly Sue DeConnick, and Jonathan Hickman. The panel was excellent, and as I’m currently reading two of Jason Aaron’s titles (Thor and Thanos Rising), as well as having just finished reading two of Matt Fraction’s (Thor and AvX), I was really looking forward to it. It was great to hear their thoughts, plans, and writing/work styles.
CosPlay! Volstagg was epic.
On Saturday night, I was able to go to the Art Auction. Didn’t buy anything, but there were a couple of cool pieces that I liked. Above are two.
Today was a day filled with walking around the Convention Hall, talking with artists and writers (I talked to Jason Aaron twice), and buying a few things. One of the things I bought was for my son. He’s really into Silver Surfer (in that there’s a poster of him in my office, and he loves it), so I commissioned Buddy Prince to draw a likeness of my son as Silver Surfer as a keepsake for HeroesCon 2013. I think it’s hilarious and well done – not least the silver diaper!
See you next year, HeroesCon!
You’ve heard of Sisyphus and his punishment at the hands of Zeus, right? That Sisyphus would, for all eternity, roll an enchanted boulder up a steep hill, only to have it prove too heavy to crest the top and roll back down? This part of Greek mythology is so well-known that we use the idea as a descriptor for tasks we feel are pointless or doomed to failure. We call those tasks Sisyphean.
Apropos of nothing I assure you, this brings me to the topic of my blog post: Microsoft Licensing for VDI.
Quite a bit more often than I might wish it to, I come across questions related to how Microsoft products are licensed in VDI. As a result, I thought I would set down some of my research here, in hopes that it can be helpful to someone out there who might need it.
(Note: It doesn’t matter who the vendor is; a VDI solution follows the same set of rules regardless of who it comes from – be it VMware or Citrix or Microsoft or Someone Else).
Licensing Windows for Use in VDI
Microsoft licenses their OS by device. This means you will need an appropriate license for accessing Windows on each end user device (be it PC/Laptop or Thin/Zero Client) that will access your virtual infrastructure. However, there are two models for obtaining these rights: one for physical PCs and Laptops, and one for Thin/Zero Clients.
Physical PCs and Laptops
For Physical PCs/Laptops, you must have a Windows license with Software Assurance. As of July 2010, the right to access a virtual desktop via Windows became a right obtained via Windows Client Software Assurance (SA). So, if you have a PC that you wish to connect to your VDI solution, and that PC’s Windows installation is covered by a license with SA, you may access a virtual desktop with no additional licensing requirements.
If however you find yourself in the same situation as the previous paragraph, and you do not have SA, you should contact your Microsoft Sales Representative to discuss how you can bring those PCs/Laptops under an active SA agreement.
For Thin Clients and Zero Clients, you cannot use SA as described above, for the simple reason that Thin/Zero Clients are not eligible for SA via Microsoft. As a result, you must license your endpoint devices in this scenario with a license called Virtual Desktop Access (VDA).
VDA is a subscription-based license. It grants the end point it is assigned to rights to access a virtual desktop with any supported Windows desktop OS. To purchase VDA, you have two options.
Option One is the annuity subscription. This, as it sounds, is a yearly subscription: you pay the subscription fee annually for as long as you need the VDA license (or as long as the VDA license exists).
Option Two is the full-pay subscription. This option allows you to pay for the VDA license for three years, upfront.
Microsoft, in my opinion, doesn’t do much in the way of multi-year discounting here, so the primary decision is around budget cycles and how your organization prefers to pay for on-going software costs.
Licensing Microsoft Office in VDI
Similar to Windows licensing, Microsoft Office is licensed by device. So, just as above, you must have a license of Microsoft Office for each device that will access one or more titles of the Office Suite.
This particular requirement can be a real frustration for VDI Administrators when they attempt to control who in their organization has rights to access certain applications.
There is a reasonable logic that would say “I have 100 copies of Microsoft Office. I have virtualized Microsoft Office, and will meter its use to 100 (or fewer) concurrent connections by assigning the application to a 100 VM pool. This will keep me compliant.”
However, if those 100 VMs can be accessed from more than 100 endpoint devices, then compliance will potentially be compromised. For those organizations with the same number of devices as users, all is well. But if, like so many organizations, the number of VDI endpoints is greater than the number of VDI users, and those users access those devices, you will be out of compliance once the number of devices used to access Office exceeds the number of licenses owned.
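The trap above boils down to counting the wrong thing. A one-line sanity check (my own illustration of the rule, not a Microsoft tool) makes it plain that compliance hinges on devices, not concurrent sessions:

```python
# Per-device Office licensing: every device that accesses Office needs
# a license, regardless of how many VDI sessions run concurrently.

def office_compliant(licenses_owned: int, devices_accessing: int) -> bool:
    return devices_accessing <= licenses_owned

# 100 licenses metered to a 100-VM pool looks safe...
print(office_compliant(licenses_owned=100, devices_accessing=100))  # True

# ...until 150 endpoints can reach that pool.
print(office_compliant(licenses_owned=100, devices_accessing=150))  # False
```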
In two words, the takeaway is that Microsoft licenses Windows and Office by device. Starting there, you can then decide which path to take for Windows depending on the type of device, and can work out your delivery scenarios for Office to those devices and remain in compliance. And, standard caveat, if you are still unsure and want a sanity check, hit up your Microsoft resource for assistance.
This guide is probably the most helpful one I’ve seen in detailing how to license properly for VDI. What I’ve written above around Windows is little more than a summary of this guide (and hopefully a little clearer due to brevity).
For information on Microsoft Office Volume Licensing, please see this PDF. Page 2 notes that Office is licensed per device, and gives further guidance on scenarios, etc.
As a way to collect my thoughts after finishing a book, I’ll post a review here. I’ll do two reviews: an ADD review, and a regular review.
Michael Pollan is well-known for his three basic rules of eating. Eat Food. Not Too Much. Mostly Plants. Those well-known rules come from his book In Defense of Food: An Eater’s Manifesto.
Pollan begins the book after a brief introduction by tracing the history of nutritionism. Nutritionism, according to Pollan, is “the widely shared but unexamined assumption…that the key to understanding food is…the nutrient. Put another way: Foods are essentially the sum of their nutrient parts.” (pg. 28)
From this presupposition, we get our modern Western Diet. We are, without question, the most health-conscious society that’s ever lived (just look at the Diet Industry revenues as Exhibit A). And yet, as time wears on, we are increasingly looking like one of the least healthy societies of all time (especially when viewed against the backdrop of the availability of nutritious food).
Adopters of the Western Diet (highly processed foods and refined grains; chemicals to raise plants and animals monoculturally; cheap calories of sugar and fat; the narrowing of our diet to staple crops like wheat, corn, and soy) end up looking remarkably similar. And that similarity is unhealthy. Put plainly, where the Western Diet goes, a class of diseases known as Western Diseases follow. Those diseases are: obesity, diabetes, cardiovascular diseases, and cancer.
So against the backdrop of nutritionism, its offspring the Western Diet, and the consequences that follow, Pollan makes a simple plea to return to common sense:
So on whose authority do I purport to speak? I speak mainly on the authority of tradition and common sense. Most of what we need to know about how to eat we already know, or once did until we allowed the nutrition experts and the advertisers to shake our confidence in common sense, tradition, the testimony of our senses, and the wisdom of our mothers and grandmothers. (pg. 13)
In my opinion, this is the strongest part of Pollan’s argument. To undo the pernicious effects of nutritionism and the Western Diet we do not need a counterpoint of equal complexity. Eating should not be complex; we do not need people to tell us what to eat. As humans, we’ve been eating for thousands and thousands of years. I do not subscribe to the theory of macroevolution, but I do subscribe to the reality of microevolution. We have adapted, as humans, to survive. We’ve eaten what makes us healthy, marked off what does not, and generally done a good job of determining what is what. And we did that culturally and societally and without any help from a lab. Until now. It’s not hard to see that as soon as we listened to manufacturers instead of our mothers on what to eat, we got in a bad place.
Pollan, in his book, and with his three simple rules for eating, is calling us back to our roots. And he does this in two ways. First, to go back to eating food our ancestors would recognize as food, and second, to go back to eating food the way our ancestors did – around the table, with friends and families, as an act of community.
For me, the first way means I will stop eating, and stop spending money on, things that aren’t really food. As much as I like (or my pride likes) to think I’m smarter than the advertisers, they’ve hooked me too. I won’t name what I’ve spent my money on (they aren’t bad, so don’t deserve to be named), but I have discontinued purchasing them, and have replaced them with real food. It’s less expensive, more varied, and feeds into number two.
Eating real food means I can share it with my family. My wife wouldn’t touch some of the nutritionistic ‘food’ I was eating, and it wasn’t really shareable anyway. But now, I can go to the farmers market with my son, we can buy new potatoes and peppers and Swiss chard, and I can come home, grab a pan and some olive oil, a few eggs, and make a nice meal of farm veggies with scrambled egg over the top. And I can reach down, hand a bite to my son, and watch him smile as we share a meal together (even if he does spit some of it out). And this, more than anything, may be the cure to viewing food not as a sum of its parts, but as a whole, to be enjoyed by the whole, for our mutual good.
How I Got to Juicing
As I mentioned in my last post, I’ve decided one way I’m going to attempt to improve my health is by juicing fruits and vegetables. This is, admittedly, a shortcut to where I want to be. But, while I work to create space in other aspects of my life, this allows me to introduce micronutrients at a larger scale into my diet without causing major disruption to my overall schedule (which needs a major disruption – but all things in their time).
My journey to this point started pretty innocently. For reasons that still aren’t clear, I began to find myself watching documentaries, and reading blogs/articles related to one of two things. Negatively, I was watching the Western Diet get eviscerated scientifically and anecdotally, and positively, I was watching/reading compelling stories of people who made simple (but not easy) changes and saw incredible results.
Eventually, these two things came together, and I became convinced it was time for some change. I’m not into diets (they are designed for failure), but I am into healthy, sustainable changes. And juicing is stage one.
How I’m Juicing
The juicer I’m using is a Breville Juice Fountain Multi-Speed. It has good ratings on Amazon.com and Consumer Reports, and reviewers consistently found clean-up pretty easy while being happy with the output of their juicing. After a full week of using the Breville, I can say that clean-up really is pretty simple. The juicer disassembles easily, there aren’t an excessive number of nooks and crannies that take careful attention, and even the filter (while the hardest part to clean) is easy enough to clean with the included brush.
As for its ability to juice, I have nothing to compare it to, but the unit seems to do a good job. I’ve juiced apples, pears, oranges, lemons, ginger, cucumbers, celery, kale, and spinach so far, and all the pulp they leave behind is damp, but not juicy. I’m spending money on organic produce (more on how much I’m spending on average per juice in a later post), so it is important to me that I be able to extract efficiently. At this point, I’m convinced it’s doing just that.
Reminding Myself Why I’m Doing This
I took J to Freedom Park today for an hour. While we were there we walked over to the three baseball fields. Each had a game going, and J enjoyed watching and yelling “bah” (that’s ‘ball’ to the uninitiated). I however looked around and saw all the parents watching their children, and was reminded again why I’m making these changes. I want to be around for J as he grows up, I want to grow old with my wife, and I want to be of use in my life wherever the Lord places me. It’s moments like today, when I saw a picture of what might be in a few years, that motivate me to keep going, and create space for my health.