Except that's not cloud computing. Lots of servers sharing resources is clustering. Dynamically distributing load to clusters is part of what virtualization offers, along with fast provisioning (cloning) to scale to accommodate load. The industry has been doing all of those for much longer than the term 'Cloud' has been in vogue.
I find that Wikipedia article amusing because it has a lot of self-referential terminology describing the idea of 'cloud computing', but little to no detail on how it's actually implemented. The concept is as nebulous as the name implies, and to those of us who actually work with the technologies every day it's nothing more than a pointless marketing catchphrase. Especially when people talk about 'private cloud', I scratch my head and say... "Uh, you mean a datacenter? Yeah, we've got one of those."
Though, to be accurate, you can't unconditionally state that cloud services are all hosted on clusters of many servers. You don't know that, you can't know that; the whole point of 'cloud' is that you don't care. A cloud provider sells (well, leases) you a service. That service might be hosted on a state-of-the-art, geographically distributed cluster of hundreds of servers. Or it might be on a single Dell in a closet because you're the only subscriber to that service. It's up to the provider to figure out what technology to use to meet the requirements they promised as part of the service. That's why I say 'cloud' is merely a politically correct synonym for 'outsourcing'.
Every time I go away for a while and come back, I end up reading about how someone has rearranged the solar system and computer technology took a giant leap sideways.
Since this is something I've talked about recently (in public, on a stage, not here), and I don't care if I'm replying to future people posting from jetpacks: back in 2000 someone asked me what "cloud computing" was, and I said pretty much the same thing: it's a marketing buzzword to sell you more hosted services.
I've changed my tune a bit: while it might have started off that way, cloud computing has generated real differences, particularly in the mindset of "run anywhere, on anything, at any scale." Even though the pieces are the same and the technology is the same, there are genuine differences in how cloud computing clusters are designed, and even in how cloud computing people think. A few years back I was bouncing cluster ideas off a guy I know who is more of a cloud guy, whereas at the time I had more of a virtualized-hosting mindset, and we got into a strange discussion about network reliability. It went something like this: we were talking about rack servers with multiple 10 gig ethernet trunks. I mentioned that in my systems I designed the network to use the 1 gig ethernet ports as an emergency backup to the main 10 gig trunks, in case of a failure in the 10 gig segments. He said he didn't bother to connect those at all, even though he had them and there were plenty of ports to connect them to. My thought was: use them, they can't hurt. His thought was: don't use them, they can only hurt.
To me, having an entire server network-partitioned meant all the load on that server went dead: slow was better than nothing. To him, having an entire server drop from a 20 gig trunk down to 2 gig meant that its load was hyperconstrained: better to fail the node and let the workload migrate or restart on a fully operational node than to keep running crippled.
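Just to make that contrast concrete, here's a toy Python sketch of the two health-check philosophies as I understand them. The node name, thresholds, and function names are all made up for illustration; this isn't anything either of us actually ran, just the decision logic boiled down:

```python
"""Toy illustration of two node-health policies: keep a degraded node
in rotation ('slow beats nothing') vs. evict it and let the workload
restart elsewhere ('a crippled node only hurts')."""

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    nominal_gbps: float   # bandwidth of the primary trunk (e.g. two bonded 10 GbE links)
    measured_gbps: float  # bandwidth currently usable (may be only the 1 GbE backup links)


def conventional_policy(node: Node) -> str:
    """Keep the node serving as long as it has any connectivity at all."""
    return "in_rotation" if node.measured_gbps > 0 else "failed"


def cloud_policy(node: Node, min_fraction: float = 0.5) -> str:
    """Evict the node if it can't deliver most of its nominal bandwidth,
    and let the scheduler migrate or restart its workload on a healthy node."""
    healthy = node.measured_gbps >= min_fraction * node.nominal_gbps
    return "in_rotation" if healthy else "evicted"


if __name__ == "__main__":
    # A server whose 20 gig trunk failed and is limping along on 2 gig of backup links.
    degraded = Node("rack7-srv03", nominal_gbps=20.0, measured_gbps=2.0)
    print("conventional:", conventional_policy(degraded))  # -> in_rotation
    print("cloud:", cloud_policy(degraded))                # -> evicted
```

Same hardware, same telemetry, opposite conclusions, and that's the whole point.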
Take that conversation, multiply it by about a thousand, and that's the difference between cloud computing and conventional computing. It starts off as a mostly philosophical difference, but then that philosophy gets turned into implementation, and pretty fast the cloud guys are living in a completely different world. But only if the entire stack buys in, from the applications upward. Otherwise, it's still marketing. But in a clustered, containerized world with virtualized storage and appropriate switching, it can be night and day.