I’m currently designing a system using the (still fairly new) System Center Virtual Machine Manager 2012 SP1 and Windows Server 2012 Hyper-V hosts. The hosts are old HP DL580 G5 servers with four X7350 processors (quad core, 2.93GHz, no hyperthreading or EPT/SLAT support).
They don’t support NUMA. The easy way to check for NUMA support on Server 2012 is to open Task Manager, go to the Performance tab, click CPU on the left, and on the graph(s) on the main pane right-click and go into the Change graph to menu. If NUMA nodes is greyed out then you don’t have NUMA available on your hardware:
Or you can use CoreInfo with the -n option which will tell you:
Coreinfo v3.2 - Dump information on system CPU and memory topology
Copyright (C) 2008-2012 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical Processor to NUMA Node Map:
****************  NUMA Node 0
i.e. all the CPUs (represented by a *) are mapped to NUMA node 0.
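If you’d rather stay in PowerShell, the Hyper-V module on Server 2012 can report the host’s NUMA topology too – a quick sketch (assuming the Hyper-V module is available on the host):

```powershell
# Get-VMHostNumaNode returns one object per NUMA node on the host;
# a host without NUMA shows just a single node (node 0)
$nodes = Get-VMHostNumaNode
$nodes.Count   # 1 = no NUMA, 2+ = NUMA-capable hardware
$nodes         # inspect per-node processor and memory details
```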
(For reference, the CoreInfo results from a newer server that does support NUMA would look like:
Logical Processor to NUMA Node Map:
********--------  NUMA Node 0
--------********  NUMA Node 1
from a Dell R710 with 2 x X5560 CPUs).
I’d created some Server 2012 VMs a few months ago and know that I’d been able to migrate them from one of the DL580 hosts to another with no problems. Then a few weeks ago I created some new VMs and just recently tried to migrate them – they wouldn’t move.
I got an error from the Migrate VM Wizard at the Select Host stage, in the Rating Explanation tab:
The virtual machine requires NUMA spanning to be disabled and the host either has numa spanning enabled or does not support NUMA.
So what was different between the new and old VMs? On digging around in the VM settings via PowerShell I found the NumaIsolationRequired property was set to true on the VMs that wouldn’t migrate, but wasn’t set to anything on the ones that would move (it was present but blank, i.e. neither true nor false). The older VMs were created using the SCVMM 2012 SP1 beta, and then moved to the release version, so perhaps that explains it.
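To see at a glance which VMs are affected, you can list the property across all VMs – a sketch, reusing the server name from the example further down:

```powershell
# List each VM's NumaIsolationRequired setting - blank on the older VMs,
# True on the ones that refuse to migrate
Get-SCVirtualMachine -VMMServer RCMSCVMMServer |
    Select-Object Name, NumaIsolationRequired
```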
The newer VMs had been configured to say “I must always run within a NUMA node”, but the hosts don’t support NUMA – and SCVMM isn’t of the mind to say “ok, well I’ll just run you anyway, seeing as I effectively just have one huge NUMA node, so you can never run on a different one”. But what to do about it?
There seem to be two options: Change the hosts or change the VMs.
Change the hosts
- Via SCVMM, right-click the host and go to Properties, Hardware, CPU and untick the box Allow virtual machines to span NUMA nodes.
- Or, via Hyper-V Manager, select the host in the left pane, then choose Hyper-V Settings… under Actions for the host in the right pane. Go to NUMA Spanning, and untick the box Allow virtual machines to span physical NUMA nodes. You’ll then be told that you need to “Restart the Hyper-V Virtual Machine Management service to apply the changes”:
Interestingly, making the change via SCVMM doesn’t prompt you to do this.
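The Hyper-V Manager route can also be scripted directly on the host – a sketch using the Hyper-V module (this is the same setting as above, so the service restart still applies):

```powershell
# Disable NUMA spanning on this Hyper-V host, then restart the
# Hyper-V Virtual Machine Management service to apply the change
Set-VMHost -NumaSpanningEnabled $false
Restart-Service vmms
```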
Change the VMs
Option 1: From SCVMM, shut down the VM then right-click it, choose Properties, Hardware Profile, under Advanced click Virtual NUMA. Tick the box Allow virtual machine to span hardware NUMA nodes.
Option 2: Using PowerShell:
$VM = Get-SCVirtualMachine -VMMServer RCMSCVMMServer -Name RCMDevVM9
Set-SCVirtualMachine -VM $VM -NumaIsolationRequired $false
Further, the VMs had this option set because it was configured in the Hardware Profile they were created from, so you might want to change that too – in the same way that you’d modify a VM above.
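Fixing the Hardware Profile itself should stop future VMs inheriting the setting – something along these lines (a sketch; the profile name here is made up, and I’m assuming Set-SCHardwareProfile accepts the same -NumaIsolationRequired parameter as Set-SCVirtualMachine):

```powershell
# Clear the 'must run within a NUMA node' setting on the profile itself
# "DevHWProfile" is a hypothetical profile name - substitute your own
$HWProfile = Get-SCHardwareProfile -VMMServer RCMSCVMMServer |
    Where-Object { $_.Name -eq "DevHWProfile" }
Set-SCHardwareProfile -HardwareProfile $HWProfile -NumaIsolationRequired $false
```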
I’d like to point out that as a rule of thumb you probably don’t want your VMs to span NUMA nodes: NUMA spanning allows a process executing on a CPU in one socket to access RAM attached to a CPU in a different socket – which isn’t as efficient as keeping the RAM and CPU local to one another. But in a dev environment and/or if you’re using older hardware you might need to fiddle with the settings as I’ve had to above in order to enable live migrations.