One complaint that I’ve had about mixed storage technology (i.e. SSD and HDD in the same box/same LUN/volume/store) is that an application’s performance can’t be guaranteed to be consistent.
For example, if a database has been happily running from SSD with nice low latency and high IOPS, but then gets shoved down onto HDD because you cloned a load of new VMs, provisioned another system, or ran some other storage-intensive operation, your database performance might well suffer quite considerably. This is sometimes referred to as the noisy neighbour scenario.
Conversely, if the database has been running consistently on HDD and then gets promoted to SSD, you might see not only a significant performance boost to the database, but also a significant CPU increase on the host, which might have a knock-on effect on other VMs.
Note that most of the time you won’t see this type of effect; the tiering process is done very intelligently, and in the case of Tintri in particular, probably 95%–100% of your IO will be served from SSD permanently – and with Tintri, 100% of write IO always goes to SSD.
But knowing that still doesn’t stop people worrying about fluctuating performance.
I was talking to Tintri about this at IP Expo last October, and whilst they had a feature that allowed you to guarantee a minimum level of IOPS for a VM (note: “VM”, not LUN or datastore as with legacy SANs), they couldn’t limit the maximum IOPS. This has now changed: they provide full QoS (Quality of Service), allowing both maximum and minimum IOPS to be set on a per-VM basis. There’s a datasheet that provides full details on how it works.
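To make the idea of a maximum IOPS cap concrete, here’s a minimal token-bucket sketch in Python. This is purely illustrative – the class and method names are my own invention, and it says nothing about how Tintri actually implements QoS internally – but it shows the general mechanism most rate limiters use: tokens refill at the configured rate, and an IO only proceeds if a token is available.

```python
import time

class IopsLimiter:
    """Illustrative token bucket enforcing a per-VM maximum IOPS cap.
    A conceptual sketch only - not Tintri's implementation."""

    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)   # allow an initial burst of one second's worth
        self.last = time.monotonic()

    def try_io(self):
        """Return True if an IO may proceed now, False if it must be throttled."""
        now = time.monotonic()
        # Refill at max_iops tokens per second, capped at one second's worth.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = IopsLimiter(max_iops=1000)
allowed = sum(limiter.try_io() for _ in range(5000))
print(allowed)  # roughly 1000 for a near-instantaneous burst of 5000 IOs
```

A minimum IOPS guarantee is the harder half of the problem: rather than throttling one VM, the scheduler has to prioritise that VM’s queue whenever the array is contended, which is why the per-VM queueing model mentioned below matters.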
This ties in with Tintri’s per-VM IO queue; most other storage systems (particularly LUN-based ones) queue IO per LUN or per datastore.
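The difference is easy to see with a toy scheduler. In the sketch below (my own illustrative names, not Tintri’s code), each VM gets its own queue and the scheduler services them round-robin, so a database VM’s single IO isn’t stuck behind a noisy neighbour’s burst – which is exactly what happens when everything shares one per-LUN queue.

```python
from collections import defaultdict, deque

class PerVmScheduler:
    """Conceptual sketch of per-VM IO queues with round-robin servicing."""

    def __init__(self):
        self.queues = defaultdict(deque)  # one IO queue per VM

    def submit(self, vm, io):
        self.queues[vm].append(io)

    def next_batch(self):
        """Take one IO from each VM's queue in turn (simple round-robin)."""
        batch = []
        for vm, queue in list(self.queues.items()):
            if queue:
                batch.append((vm, queue.popleft()))
        return batch

sched = PerVmScheduler()
for i in range(3):
    sched.submit("noisy-vm", f"io-{i}")   # a burst from a noisy neighbour
sched.submit("database-vm", "io-0")       # a single latency-sensitive IO
print(sched.next_batch())
# the database's IO is serviced in the very first batch,
# rather than waiting behind the whole noisy-vm burst
```

With a single shared queue, the database IO would sit at position four; with per-VM queues, the scheduler can also weight each queue by its QoS minimum, which is what makes per-VM guarantees enforceable at all.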
There’s a good video showing how it works here: https://youtu.be/BC2OvYWeknI