I'd probably take that approach with any highly virtualized workload, whether it's heterogeneous (cloud) or not. Cheap, generic, easily replaceable (and by extension disposable) processing hosts are one of the big selling points of things like vSphere Enterprise's vMotion / DRS.
The last thing you want is severe performance degradation as your failure mode; better to migrate everything off. I've run into a similar situation where a particular host had hardware issues that caused all the network ports to negotiate down to 100Mbps. Needless to say, we migrated everything off ASAP and took it down for maintenance. The problem is that most monitoring tools only check whether a given link is up or down, not whether it's running at the speed it's supposed to be.
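As a rough sketch of the kind of check that closes that gap, assuming Linux where sysfs exposes the negotiated speed (the interface name and expected speed are placeholders, not from any particular monitoring product):

    #!/usr/bin/env python3
    """Alert if a link negotiated below its expected speed (Linux sysfs sketch)."""
    from pathlib import Path
    import sys

    EXPECTED_MBPS = 1000  # what the link should negotiate to; adjust per host

    def link_speed(iface: str) -> int:
        # /sys/class/net/<iface>/speed reports the negotiated speed in Mbit/s
        # (reading it can raise OSError if the link is down)
        return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())

    def main() -> int:
        iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"
        try:
            speed = link_speed(iface)
        except OSError:
            print(f"CRITICAL: {iface} link is down")
            return 2
        if speed < EXPECTED_MBPS:
            print(f"WARNING: {iface} negotiated {speed} Mbit/s, expected {EXPECTED_MBPS}")
            return 1
        print(f"OK: {iface} at {speed} Mbit/s")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Wired into whatever monitoring you already run, that turns "link is up but crawling" into an actual alert instead of a mystery.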
A lot of legacy workloads don't work well that way. If you're running fully virtualized legacy servers, you can get into all sorts of problems with affinity (restarting one part of a server farm out of sequence with the rest), or with systems that take forever to restart and need scheduled downtime windows (think: a monolithic SQL Server).
Also, you can't vMotion off of a server where all the ethernet trunks are dead. You can do a DRS restart of course, but that's where the above problems come in. And a DRS restart is functionally equivalent to an unplanned power cycle: some apps don't like that or don't recover well from it, and need humans to help them along. So an unplanned DRS restart in the middle of the night could take the system down completely until a human intervenes. If you still have access to storage, it can be better for a human to gracefully shut the system down and then restart it on a good node.
Apps designed by cloud-minded people don't have these problems (unless the people are idiots). But more "Enterprise"-style applications often do.
Mindset can create interesting problems. For example, most POS (point of sale, not the other kind of POS) vendors still have the mindset that retail customers are cheap bastards who will spend the absolute minimum on network infrastructure (often true), and thus assume an absolutely flat, simple network is the environment they'll deploy into. So I ran into a vendor that built a system for a client on the presumption that their systems and the POS system would always be a) on Ethernet and b) in the same broadcast domain. So they actually made up their own raw Ethernet frames to send pseudo-broadcasts between the systems, frames that were impossible to route and that most firewalls didn't even recognize. That made it difficult to get that traffic from the POS server to where the other systems were, since they were firewalled off from the POS server for PCI security reasons.
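For illustration only, here's a minimal sketch of what that kind of layer-2 pseudo-broadcast amounts to, assuming Linux raw sockets (root required); the interface name, the experimental EtherType 0x88B5, and the payload are made up, not the vendor's actual protocol. Note there's no IP header at all, which is exactly why nothing above layer 2 can route it and why an IP-oriented firewall has nothing useful to match on:

    import socket
    import struct

    IFACE = "eth0"        # placeholder interface name
    ETHERTYPE = 0x88B5    # "local experimental" EtherType, i.e. a made-up protocol

    # A raw AF_PACKET socket lets us hand-build the entire Ethernet frame.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    s.bind((IFACE, 0))

    dst = b"\xff" * 6                 # broadcast MAC: every host on the segment sees it
    src = s.getsockname()[4]          # our own MAC, taken from the bound interface
    payload = b"HELLO register-01"    # illustrative application chatter

    frame = dst + src + struct.pack("!H", ETHERTYPE) + payload
    s.send(frame)

It "always works" on a single flat LAN, and it falls over the moment a router, a VLAN boundary, or a PCI firewall sits between the two ends.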
Why do this horrible thing? "Because it always worked before" was the only answer I could ever get. That doesn't really explain why you'd take the trouble to reinvent the wheel as a pentagon, but it happens all the time.