Nutanix Offering Per Desktop VDI Pricing with Guaranteed Performance

As the first infrastructure vendor in the industry to offer something like this, Nutanix is blazing yet another trail! Introducing Nutanix Per Desktop VDI Pricing and Desktop Assurance.

The biggest problems that plague VDI deployments can be summed up by a few key things:

  • Price
  • Performance
  • Uncertainty

Let’s leave price alone for a moment; we will come back to it once we go through the per desktop VDI pricing model. Instead, let’s talk about performance and uncertainty, because even with a great price these two things can torpedo a well-intentioned VDI deployment.

Performance

Performance covers many aspects of a VDI deployment. It is not just what any given infrastructure can provide, but also what your users actually need in order to do their jobs efficiently. It should also include the speed at which the VDI administration team can react to user requirements, change, and the overall management of that environment.

Understanding what resources each desktop will need usually requires assessing the daily usage of a subset of the users who will be running the VDI desktops. If the customer has multiple different use cases, then a subset of users from each group should be assessed, selected to provide a solid average of that group’s usage profile. An assessment should run a minimum of 2 weeks, while 30 days is generally more than enough.

An IT department’s ability to meet changing demands is generally dictated by two factors: the policies and procedures to implement change, and how quickly those changes can actually be applied to the underlying infrastructure. It is the opinion of this author that traditional architecture, like 3-tier architecture, adds time, steps, and potentially very costly upgrades that take even longer to implement, compared to a web-scale solution like the Nutanix Virtual Computing Platform, where you can add resource capacity dynamically as needed.

Uncertainty

Like performance, uncertainty covers many aspects of a VDI deployment. There is uncertainty about what your users actually need, including the performance required on their VDI desktops, and about what infrastructure will best provide the resources for those requirements. There is also uncertainty about how large a VDI deployment will ultimately become. That depends largely on how well the deployment is received by the end-users, whether more users or departments request VDI, the cost tied to scaling out the infrastructure of your current VDI environment, what that means from a datacenter perspective (environmental variables like power, cooling, and space), and the management requirements (additional FTEs to keep up with management and deployment tasks, time to delivery, ease of management, time to value, and so on). The list goes on and on.

The most prepared customers perform desktop assessments before trying to deploy a VDI environment (as stated above). This provides details on how your users use their desktops, what applications are in use, and what resources they consume across CPU, memory, and disk IOPS. That data helps in appropriately sizing an environment. However, too often people size for the average and run into issues when the ebb and flow of their users’ activity trends up a little too far and performance starts to tank for everyone sharing those resources. The hardest thing to size correctly for performance is shared storage, and that has traditionally been because of the storage architectures in use.
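To see why sizing for the average backfires, here is a minimal sketch (not Nutanix tooling; the workload numbers are invented for illustration) comparing average-based sizing against percentile-based sizing over simulated assessment IOPS samples:

```python
# Illustrative sketch: per-interval IOPS samples from a hypothetical desktop
# assessment, with a mostly steady workload plus occasional spikes
# (logon storms, antivirus scans, etc.).
import random
import statistics

random.seed(42)

samples = [random.gauss(10, 2) for _ in range(1000)]   # steady-state intervals
samples += [random.gauss(40, 5) for _ in range(50)]    # spike intervals

avg = statistics.mean(samples)
p95 = statistics.quantiles(samples, n=100)[94]         # 95th percentile

desktops = 100
print(f"Sized for the average:        {avg * desktops:,.0f} IOPS")
print(f"Sized for the 95th percentile: {p95 * desktops:,.0f} IOPS")
# Sizing for the average leaves every spike interval starved for IOPS;
# sizing for a high percentile covers the ebb and flow the assessment
# actually observed.
```

The gap between the two numbers is exactly the headroom that an average-based sizing exercise quietly discards.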

Traditional shared storage is network based and has one or two storage processors (also called storage controllers) to answer and deliver on storage requests. You can have any number of hypervisor hosts attached to this storage array over the storage area network (SAN), and many VM workloads on each of those hosts. No matter how many hosts you add and how many VM workloads you run, the storage array has a finite amount of disk IOPS it can provide, as well as finite throughput and processing capability in the storage processors providing access to those disks. Even the newer “software-defined” storage arrays have this limitation, although most of these vendors have tried to “right-size” the storage performance, from disk IOPS to storage processor performance to network capabilities. But these solutions, like their larger SAN cousins, do not generally scale out and are like mini-islands of storage. When I mention scale out, I am mainly referring to clustered, distributed file systems that can grow both dynamically (zero downtime or disruption to running workloads) and exponentially (with no practical limit).

With either model the customer has to make sizing guesses that may need to be forecasted out 3-5 years in advance. Not only is that hardware CapEx spend depreciating before you can fully utilize it and reach your target ROI, it’s practically impossible to hit your goal with that forecast method.

Most of what I’m talking about isn’t unique to VDI either… it applies to virtualization in general.

Price

Finally, with those two described, we can understand the true price of VDI: the cost of the infrastructure, the cost of administering that infrastructure, and the cost of the agility needed to meet changing demands.

While I’m not going to talk about specific prices in this blog (that discussion should be had with a Nutanix Partner or Account Manager in your area), I am going to cover some things you need to consider when calculating the cost of your VDI desktops. With traditional architecture, as I mentioned earlier, you generally have to size for your end goal in advance. If you don’t, then you face either forklift upgrades as your environment grows, or a disjointed and extremely complex infrastructure to manage that is essentially a bunch of different environments. This may be a slight exaggeration, but I hope you get my point.

If you size for the future, you are spending a lot of money up front for infrastructure that won’t be used, which means your cost per desktop is insanely high until you start filling it up. For example, spending $500K on infrastructure and then deploying 100 VDI desktops to start gives you a cost of $5K per desktop while that is all you are running! As you deploy more desktops, the price per desktop starts to drop… but how fast will you reach the planned capacity you originally forecasted and used to figure out your per desktop cost? All the while, that infrastructure is depreciating and aging.

If you size for different deployments and buy different infrastructure for each deployment based on a certain performance profile and density required for each deployment, you will get a solid price per desktop but your administration of all of those environments will become a nightmare.

And what do you do when things take a left turn in either situation? It only exacerbates the problems!

What if I told you that you could start your infrastructure out small and grow capacity only as you need it, while maintaining the same administration and architecture methods at pretty much any size, knowing that your performance will not diminish as you grow? But wait… there’s more! 😉 It’s even better than that, because you can do all of this dynamically, with zero downtime or disruption, in small cost increments that provide a known, fixed price per desktop that never changes based on scale. And did I mention a single user interface to manage all of this infrastructure? How about the fact that you don’t need to overthink where to put the various parts of your VDI desktops (replicas, deltas, and user data), because data is tiered live based on actual usage at any given moment? Does installation and setup in about an hour for a brand new installation sound good? What about dynamically expanding your cluster in minutes when you need more resources?

Sound too good to be true? Well then you haven’t seriously checked out the Nutanix Virtual Computing Platform. The per desktop pricing with Desktop Assurance is simply icing on the cake. Sure, you could get all of these benefits, minus the guarantee, by just buying the Nutanix gear. But Desktop Assurance provides a guarantee that we will honor the promised performance or provide more hardware to make up the difference at no additional charge to the customer. You can’t get that anywhere else.

I have seen some pretty wild and far-fetched claims on density numbers from the competition that drive the forecasted price per desktop very low. However, I have never seen any of those promises delivered on, so the actual price per desktop isn’t realistic and always ends up higher than promised. Knowing that you can bank on a given price per desktop makes things very predictable, and finance and procurement people love that.

So, performance is something we’ve already worked out the math on and are willing to guarantee, or we’ll provide more hardware at no additional cost to meet the guarantee.

As for uncertainty: because you can add packs of desktops, which include the required hardware to run them, as you are ready to deploy more, you don’t have to forecast your end goal. Buy what you need when you need it, and pay as you’re ready to grow!

It really can be that uncompromisingly simple. #NutanixFTW
