What a can of worms this topic seems to be. I suspect it’s mostly because VMware environments step on the toes of medium-to-large organisations, i.e. those that are big enough to have separate network and server staff. If you have separate network, storage, server, DBA and security staff/teams then all hell breaks loose as toes get crushed left, right and centre.
Why is this? Because of the way that organisations and technology have operated in the past:
A “server” was a box running an operating system, which in turn ran one or more applications, including databases and antivirus software. This box was plugged into a network switch and either had direct-attached storage (aka DAS, be it internal or external disks) or, more recently, a connection to a network-accessed storage system (e.g. Fibre Channel SAN fabric, iSCSI over Ethernet, Fibre Channel over Ethernet). Let’s not forget the remote access controller in the server, which would also have a network connection. You might also have a network-connected KVM (keyboard, video, mouse) device.
The “Network”, meaning Ethernet for (usually) non-storage data, was probably one or more cables plugged into something like a Cisco switch, probably into ports configured as access ports (i.e. not VLAN trunks). This switch would then be connected to a router (or would itself be one), which in turn was cabled into a firewall. The firewall segregated this internal switch/router from the outside world and/or from any switches plugged into its other ports that were providing DMZ services. Alternatively you might have an external firewall, behind that a DMZ, then a second firewall and your internal stuff.
The storage data, if not DAS, would probably be running on its own network, be it Fibre Channel or Ethernet, though you might be piping your iSCSI/FCoE data over a separate VLAN on your existing Ethernet topology. You probably wouldn’t have a separate storage device for your DMZ servers: you’d be relying on (trusting) the security/partitioning features of the storage infrastructure to keep things separate, e.g. SAN fabric zoning, SAN LUN masking (e.g. Storage Groups in the case of an EMC CLARiiON), iSCSI (mutual) CHAP, iSCSI initiator/target names, IPSec.
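To make that trust question concrete, here’s a toy Python sketch (the WWPNs, LUN numbers and data structures are all invented for illustration; real fabrics and arrays do this in firmware) of what fabric zoning plus LUN masking amount to: an initiator only reaches a LUN if both layers of configuration allow it.

```python
# Toy model of SAN fabric zoning + LUN masking. Illustrative only --
# every name and number here is made up for the example.

# Fabric zones: sets of WWPNs that are allowed to talk to each other.
ZONES = [
    {"10:00:00:00:c9:aa:aa:01", "50:06:01:60:bb:bb:00:01"},  # internal host <-> array port
    {"10:00:00:00:c9:aa:aa:02", "50:06:01:60:bb:bb:00:01"},  # DMZ host <-> the SAME array port
]

# LUN masking (a "Storage Group" on an EMC CLARiiON): which initiator
# WWPNs may see which LUNs on the array.
LUN_MASKS = {
    "10:00:00:00:c9:aa:aa:01": {0, 1},  # internal host sees LUNs 0 and 1
    "10:00:00:00:c9:aa:aa:02": {2},     # DMZ host sees only LUN 2
}

def can_access(initiator: str, target: str, lun: int) -> bool:
    """An initiator reaches a LUN only if zoning AND masking both allow it."""
    zoned = any(initiator in zone and target in zone for zone in ZONES)
    masked_in = lun in LUN_MASKS.get(initiator, set())
    return zoned and masked_in
```

The point of the sketch: both hosts sit on the same physical fabric and the same array port, and the only thing keeping the DMZ host away from the internal LUNs is configuration, i.e. exactly the kind of software-enforced separation people claim to distrust when they insist on physically separate firewalls.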
You’d probably have all your remote access devices (e.g. DRAC, iLO) on the same network/VLAN. Likewise for network KVM. No separate networks for internal/DMZ servers. These devices might be internet-accessible.
A few places might have a completely separate network internally, e.g. for development or extremely high-security purposes, that was not physically connected to anything other than its own dedicated network switches and dedicated storage systems. Even this, though, might have remote access controllers/network KVM accessible via some shared network. Perhaps a few times a day somebody would manually dump some subset of data from a system and copy it over onto this separate system via a portable storage device of some kind (e.g. CD/USB storage).
So let’s kill off the expressions “air gap” and “physical separation”. At some point in most networks, internal and DMZ servers would be connected together in some way, be it via a firewall device, storage network, remote access controller or network KVM.
When people talk about physical security with reference to virtual servers, what they tend to mean is “an external CPU with its own code doing the packet shifting/analysis connected via ethernet interface”.
If they only trust a separate Ethernet firewall device then you need to separate everything else out too. For example, from the DRAC on a Dell PowerEdge server I can not only power the server on and off, but also mount ISO images and UNC paths to the server OS, gain access to the server console, and more. Let’s not forget the VMkernel management interface that links the host into vCenter: it lives on a vSwitch, so you’d better have a separate vCenter for your DMZ too. And a separate SAN, or are you telling me that you actually trust the SAN fabric security or LUN security features in that big expensive box of disks?
More likely, somebody at your organisation (you?) is stuck in the past thinking “but this box looks like a server, and a server must never have both an internal network and a DMZ network plugged into it”, forgetting (ignoring?) that a “server” is just electronics that runs code, and not all code is the same. Especially if the so-called server is running some code that implements a layer 2 ethernet switch, yet also happens to have significant grunt and so runs lots of other code as well (i.e. virtual machines).
“Oh but the Cisco/Juniper/whoever’s xxxx firewall has never been hacked/breached”. Neither has there ever been a breach between vSwitches or virtual machines on a (correctly configured) vSphere host. Please correct me if I’m wrong.
And in any case, what kind of risk are we talking about? Denial of service? Data capture? System hijack? These all require different levels of skill, vary widely in how easy they are to pull off, and can only partially be overcome by off-host network security devices anyway.
Probably close to 99% of malware infections/data breaches are on internal networks, mostly carried out via company employees (plugging in infected laptops, USB storage, visiting dodgy websites, installing software containing trojans, burning data onto CD/DVD, emailing data, social media etc. etc.).
Thus an up-to-date, sensible and complete approach to data security is required; network technology alone will not save you. That’s not to say that the network itself shouldn’t be designed “properly”, but splitting hairs over certain aspects of it will do you no good. In fact, by making things harder to manage you often end up with an overall less secure environment over time.
Let’s also not forget that vendors love to talk up perceived risks; they’re one of the best ways to get you to buy more stuff, and to overcomplicate your environment in the process.
Here’s a diagram showing the cables going to a typical ESXi host:
Of these, which would you “separate” from your internal hosts if you had dedicated hosts for your DMZ VMs? These ones maybe?:
So you trust the vSwitches enough to connect together your IP storage traffic, and your host management traffic is going to the same vCenter server? Same network for the RAC and FC storage? If the above is approved by the person who’s making you have separate hosts for DMZ VMs, I would suggest one word: fail. What they should surely be requesting is the following:
Now, I don’t know about you but that says several things to me: “pain in the neck to manage”, “overkill”, “expensive”. And if you’re still ultimately relying on a physical hardware firewall to separate the IP networking into internal and DMZ then it’s actually mostly still all connected together anyway.
What I’ve been doing successfully is to have separate vSwitches/1Gb uplinks for:
- vCenter/vMotion (two NICs, set to active/standby so that ordinarily the vCenter and vMotion traffic travels over separate NICs, on separate VLANs)
- Internal VM networks (multiple VLANs/port groups and multiple NICs – currently three)
- Three different DMZ VM networks (each on a separate vSwitch, with its own NIC connected to a switch port configured as an access port)
(All my storage is FC.) Note that the above requires me to use 2 + 3 + 3 = 8 1Gb links, plus one for the RAC (which I’ve plugged into some older 100Mb switches). But translate this into 10Gb Ethernet and it gets difficult and/or expensive; you need to start thinking about combining things more.
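For what it’s worth, the layout above can be written down as a little Python sketch (the vSwitch and port group names are mine, invented for the example, not anything from the vSphere API) just to show where the 2 + 3 + 3 uplink count comes from:

```python
# Sketch of the vSwitch/uplink layout described above. Illustrative
# only: this just counts 1Gb uplinks per vSwitch, it doesn't talk to
# vCenter (no PowerCLI/pyVmomi). All names are invented.

VSWITCHES = {
    "vSwitch-mgmt":     {"uplinks": 2, "portgroups": ["vCenter", "vMotion"]},  # active/standby pair
    "vSwitch-internal": {"uplinks": 3, "portgroups": ["VLAN10", "VLAN20", "VLAN30"]},
    "vSwitch-dmz1":     {"uplinks": 1, "portgroups": ["DMZ1"]},  # each DMZ network gets its
    "vSwitch-dmz2":     {"uplinks": 1, "portgroups": ["DMZ2"]},  # own vSwitch and its own NIC
    "vSwitch-dmz3":     {"uplinks": 1, "portgroups": ["DMZ3"]},  # into an access-mode port
}

total_1gb = sum(v["uplinks"] for v in VSWITCHES.values())
print(total_1gb)      # -> 8   (2 + 3 + 1 + 1 + 1)
print(total_1gb + 1)  # -> 9 cables per host once you add the 100Mb RAC link
```

Written out like that, it’s easy to see why a move to 10Gb forces consolidation: nine cables per host is fine at 1Gb prices, but you won’t get nine 10Gb ports per host signed off.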
Comments on any of the above please! If I’m wrong on something, tell me. If you agree with me, tell me. It seems a shame that the official guidance from VMware is several years old. I take that to mean it’s still valid, but it would be nice if it were reviewed/revised, even if that just meant a more recent date stamp.