Remember all that talk about the $20m server? You know, the one server that tipped you into building a new data center facility because the existing one was now too small? Seems so far in the past, doesn't it?
As we’ve gone through the steps of application rationalization and consolidation, and on to a more virtualized IT platform, it’s far more likely that you are looking at a small pile of IT equipment sitting pathetically in the middle of an over-sized data center facility. Sure, you may be wondering how you are going to get enough power to that pile of ultra-dense equipment, but space is probably not that high on your priority list any longer.
Indeed, as you find that the march of shadow IT across your organization is leading to more of the platform disappearing into the public cloud, you may be wondering why you bother at all. Was Nicholas Carr right — is IT dead?
Well, no, of course not. IT is morphing into a different beast (which is what Carr was on about anyway). While the degree to which business depends on IT will continue to increase, it is now going to be implemented and used in different ways. This means that any facility that houses IT equipment owned by the organization will need to be designed and managed differently.
For anyone looking to build a new data center for their own use, this may be a major problem. Historically, there was only one thing that seemed certain about a data center facility: at some stage you would grow out of it. Now that is no longer the case, so trying to create a magic facility that can grow and shrink with the amount of equipment inside it may be beyond the capabilities of your organization’s facilities management team.
Far better to take a long, hard look at what you are running in your existing data center and figure out what you will need to run in an owned facility for the foreseeable future — even if that “foreseeable” is only around 12 months. You also need to ask yourselves, “What else do we want to run on owned server, storage, and network equipment that does not need to be housed in an owned facility?” Great — that can all go to a co-location facility. This provides the grow-and-shrink flexibility that you will need. And beyond the flexibility factor, it’s no longer your problem — someone else has to build the facility; implement connectivity, power distribution, cooling, and auxiliary generation; negotiate energy prices; and handle all the rest of the grunt work that goes with owning a data center. You get a nice clean area where you can put your equipment and off you go.
Set up in this way, when you need extra functionality it simply becomes a case of deciding whether it should be provided from within the owned facility, the colo facility, or whether that wonderful public cloud thingummy is a better place to source that function from.
If the public cloud is chosen, then not only have you shrugged the responsibility for the facility on to someone else’s shoulders, but you may also have done the same with the whole hardware and software stack as well. Of course, there is a full spectrum of cloud services out there that allows you to decide what you own and what you don’t — infrastructure as a service (IaaS) still leaves you with the whole software stack; platform as a service (PaaS) leaves you with just the application; while software as a service (SaaS) does away with everything apart from integrating the system into your other enterprise systems.
So… the key to the next generation of data centers? It’s a hybrid mix. On the whole, it will move to a mix of private cloud in an owned facility alongside private cloud in a colo facility, using functionality as required from the public cloud. The new data center is going to be hybrid cloud…
Image credit: Wikimedia Commons