Today one of my coworkers approached me and asked,
Why do you only cluster servers with the same amount of RAM together? Does it really matter?
That prompted me to write a little bit about it. I'm targeting this mostly at managers who like to buy servers one at a time and don't have a firm grip on VMware HA clusters.
Why does it matter to me? The easy answer is ease of management, and it's a best practice. When the servers clustered together are identical, you don't have to think as hard about the cluster design and failover. Before I dive into this, let's discuss what a cluster is in VMware and what you gain from one.
A cluster can be defined as a pooled set of resources. More simply put, all of your resources are "clustered" into one set that can be shared accordingly. That statement is both true and false: yes, you pool your resources together, but no, they aren't technically shared in the sense that a VM can only consume resources from the host it is running on. Meaning, a VM on Host A can't use a CPU on Host A and a CPU on Host B at the same time. Make sense? A better way to think of it is resource aggregation. VMware defines clustering as,
Now that we have an idea of what a cluster is, let's look at what clusters are used for. The most common reason to cluster in VMware is to take advantage of HA and DRS. To tie back to the original question, there are caveats to consider when adding hosts to a cluster. As a best practice, we should avoid putting hardware with different specifications in the same cluster. If you add hosts with different specs to the same cluster, you end up with an unbalanced cluster. What exactly is an unbalanced cluster? Duncan Epping and Frank Denneman's book, VMware vSphere 4.1: HA and DRS Technical Deepdive, defines it as,
When you have an unbalanced cluster, the Admission Control Policy calculations will not be what you might expect. When Admission Control is enabled, the worst-case scenario is taken into account: when available slots are calculated, the largest host is essentially left out of the equation. This assumes you have set the number of host failures to tolerate to one. Because of this, the best way to ensure a well-designed cluster is to deploy hosts with the same amount of both memory and CPU.
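To make that slot math concrete, here's a minimal sketch of a slot-based Admission Control calculation with host failures set to one. The slot size, host memory figures, and the `usable_slots` helper are all invented for illustration; real HA derives slot size from VM reservations and accounts for CPU as well.

```python
# Hypothetical sketch of slot-based HA Admission Control with the
# "host failures cluster tolerates" policy set to 1. Numbers are
# invented for illustration, not taken from a real vSphere cluster.

def usable_slots(host_mem_gb, slot_size_gb, host_failures=1):
    """Slots left for powering on VMs after reserving the largest
    host(s) for failover -- the worst-case assumption HA makes."""
    slots = sorted(mem // slot_size_gb for mem in host_mem_gb)
    # HA sets aside the biggest host(s), so extra capacity on one
    # oversized host mostly ends up in the reserved portion.
    if host_failures:
        return sum(slots[:-host_failures])
    return sum(slots)

# Three identical 64 GB hosts vs. two 64 GB hosts plus one 128 GB host:
balanced = usable_slots([64, 64, 64], slot_size_gb=4)    # 32 usable slots
unbalanced = usable_slots([64, 64, 128], slot_size_gb=4)  # still 32
```

Note that the unbalanced cluster's extra 64 GB buys no additional admission-control capacity: the oversized host is exactly the one reserved for the worst-case failure.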
This is not a full-blown write-up on HA clusters; I don't need to reinvent the wheel when the topic is covered so well in Duncan Epping and Frank Denneman's recent technical deepdive book. It was enough, though, to explain my answer to my coworker's question. If you would like to learn more about VMware HA, then jump over to Duncan's blog, Yellow-Bricks, and read up on all of the HA wonderfulness!