I have some Dell R710 servers that have been running the free edition of XenServer 5.5 since 2009. They have 48GB RAM and two Xeon X5550 CPUs, and host five virtual machines, all running XenApp. This was done as part of a physical server consolidation plan, and I sized the VMs such that they used as much RAM as possible whilst allowing the minimum for the XenServer hypervisor. The VMs all had four CPUs and, by experimentation, one had 7000MB RAM and the other four had 9728MB.
This has been working fine ever since, but recently I’ve wanted to automate the (re)creation of these VMs and bring the hosts into line with my forthcoming Windows Server 2012 Remote Desktop Session Host replacement, which will run on Windows Server 2012 Hyper-V.
So, I know how to squeeze those VMs onto a XenServer host, but how to do it on Hyper-V? Things get interesting, as I’d also like to take advantage of a small performance boost from Hyper-V being NUMA-aware, whereas XenServer 5.5 is not.
So, I did some research and found some VM RAM sizing information that seemed logical. However, due to NUMA, five VMs onto two NUMA nodes doesn’t go. Consider:
The host has 48GB RAM, 24GB (24576MB) per NUMA node (i.e. per CPU socket). You can’t fit VMs of 7000MB, 9728MB, 9728MB, 9728MB and 9728MB onto the server without spanning: the 7000MB VM plus two 9728MB VMs come to 26456MB, and three 9728MB VMs come to 29184MB, both more than a 24576MB node, so you always end up with one VM that won’t fit onto one or other of the two NUMA nodes.
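A quick brute-force check (a Python sketch, just to make the bin-packing argument concrete) confirms that no assignment of the five original VM sizes to the two 24576MB nodes works:

```python
from itertools import product

NODE_MB = 24 * 1024  # 24GB per NUMA node
vms = [7000, 9728, 9728, 9728, 9728]

# Try every possible assignment of the five VMs to the two nodes.
fits = any(
    sum(size for size, node in zip(vms, assignment) if node == 0) <= NODE_MB
    and sum(size for size, node in zip(vms, assignment) if node == 1) <= NODE_MB
    for assignment in product((0, 1), repeat=len(vms))
)
print(fits)  # False: one VM always has to span a node
```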
This wasn’t a problem with XenServer 5.5: not being NUMA-aware, it just saw the 48GB RAM in the host as one big lump and distributed the VM CPUs wherever it liked (and I lived with the small performance hit and wasn’t too fussed). But now I can do something about it, because I have a NUMA-aware hypervisor, and it’d be wrong to ignore NUMA (I could just turn on NUMA spanning for the host and VMs, but that feels a little defeatist!).
So, I decided to chop the small 7000MB VM into two smaller VMs, as then I could run each NUMA node with a 3500MB VM plus two 9728MB VMs. But based on the sizing info, and to allow for OS overhead whilst maintaining similar RAM for user applications, I decided to give the “smaller” VMs 4000MB RAM and tweak down the RAM on the “larger” VMs. I also gave the “smaller” VMs two rather than four CPUs – the load will be spread across them by XenApp, so there’ll still be four CPUs in total servicing their applications/users. So now each of my two NUMA nodes/CPU sockets should have the following sized VMs on it: 4000MB + 9500MB + 9500MB.
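Unlike the original sizes, this per-node plan does fit, as a one-line check shows:

```python
NODE_MB = 24 * 1024            # 24576MB per NUMA node
plan = [4000, 9500, 9500]      # VMs intended for each node
print(sum(plan), NODE_MB)      # 23000 <= 24576, so each node has headroom
```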
The RAM overhead should be 32MB for the first 1GB RAM, plus 8MB per additional 1GB, so about 55MB for a 4000MB VM, and about 98MB for a 9500MB VM.
With 48 * 1024 = 49152MB in the host that leaves 49152-((4000+55)+((9500+98)*2))*2 = 2650MB for the host OS. Which should be fine.
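The overhead rule of thumb and the leftover-for-host arithmetic can be written out as a small Python sketch (the `overhead_mb` helper is just my encoding of the sizing rule above, not anything official):

```python
def overhead_mb(vm_mb):
    # Sizing rule of thumb: 32MB for the first 1GB of assigned RAM,
    # then 8MB per additional 1GB.
    gb = vm_mb / 1024
    return round(32 + 8 * (gb - 1))

host_mb = 48 * 1024                     # 49152MB in the host
small = 4000 + overhead_mb(4000)        # 4000 + ~55MB
large = 9500 + overhead_mb(9500)        # 9500 + ~98MB
per_node = small + 2 * large            # what one NUMA node must hold
left_for_host = host_mb - 2 * per_node
print(left_for_host)  # 2650MB left for the host OS
```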
Except it’s not, for two reasons.
One: NUMA. I powered on five of the VMs, and the sixth refused to power on because not enough RAM was available; I had to make it really quite small to get it to power on. Why was this? Hyper-V had placed the two smaller VMs on the same NUMA node, plus one of the larger ones, and there isn’t room for three larger VMs on the other node. How can you tell which NUMA node your VMs are running on? On the host, use the Hyper-V VM Vid Partition\Preferred NUMA Node Index performance monitor counter, or run the following PowerShell command line:
(Get-Counter -ComputerName "HVHost01" -Counter "Hyper-V VM Vid Partition(*)\Preferred NUMA Node Index").CounterSamples | Select-Object -Property InstanceName,CookedValue | Where-Object -Property InstanceName -NE "_total"
Hyper-V does allow you to specify which NUMA node a VM will run in, but it’s a little fiddly.
Two: the RAM sizing information doesn’t seem to hold in practice. So I’ve ended up with my VMs sized as follows: 2 x 3850MB and 4 x 9100MB, configured to run on specific NUMA nodes as above so that they are guaranteed to fit.