Keep Your Data Center Design Flexible to Avoid Operations Problems


    Designing a data center facility and planning for capacity and daily operations is a difficult task. Even after following best practices, using professional consultants, and carefully vetting power, space, and cooling requirements, the best-designed data centers often still need some jury-rigging to work around design constraints. This can negatively impact daily operations in minor or major ways.

    Operators at one of our data centers recently installed a cage for a colocation client, but because the space was designed for data center pods (which are themselves individually secured), adding the cage around the pods has made it difficult to access neighboring pods from the front. Because of this, engineers are using the racks next to the cage for the company’s own infrastructure instead of leasing them to new colo clients.

    At the same time, failing to account for power requirements or cooling as facilities expand and densities increase can lead to catastrophic failure or downtime. Once you reach the commissioning stage, you almost certainly will have to work around some building foibles; that’s just the name of the game. But data center designers can minimize problems, starting in the design phase.

    Contractor Shortcuts

    Builders and contractors are on a budget (for both time and money). A good firm wants to do good work, but the reality is, corners are often cut in the process of building or retrofitting a data center facility, especially when the construction timeline is short.

    A recent discussion among system administrators online revealed a surprising gap between claims of redundancy and resiliency and the reality on the ground: in one case, a backhoe took out a fiber bundle and caused downtime in a facility with supposedly redundant networking. Single points of failure like this should be tracked down in the plans and remedied (see under-engineering below).

    In another example, overheard at a conference, the contractors built the ceilings in the white space shorter than the design document specified. Nobody noticed until the team decided to test taller, denser racks in the next build-out. The racks didn’t fit.

    Over- or Under-Engineering

    With server racks getting denser every few years (modern blades are 84x denser than the floor-mounted servers of the ’90s; imagine the data center of 2025!), the rest of the facility must be prepared to support an ever-increasing power draw. It takes approximately 400 watts per square foot to power blade servers, plus additional cooling to deal with more chips putting off more heat in the same amount of space.
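    To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It uses the ~400 W/sq ft figure from the paragraph above; the assumption that essentially all IT power becomes heat the cooling plant must remove, the 3.412 BTU/hr-per-watt conversion, and the 12,000 BTU/hr "ton of cooling" unit are standard rules of thumb, not figures from this article.

    ```python
    # Rough capacity-planning sketch for a blade-dense white space.
    # Assumption: ~400 W/sq ft of IT load (figure cited in the article above).

    WATTS_PER_SQFT = 400           # approximate draw for blade-server rows
    BTU_PER_WATT_HR = 3.412        # 1 W of heat ≈ 3.412 BTU/hr (standard conversion)
    BTU_PER_COOLING_TON = 12_000   # 1 ton of cooling = 12,000 BTU/hr

    def estimate_loads(white_space_sqft: float) -> dict:
        """Estimate IT power draw and the cooling needed to match it."""
        it_load_watts = white_space_sqft * WATTS_PER_SQFT
        # Nearly all IT power ends up as heat the CRACs must remove.
        cooling_btu_hr = it_load_watts * BTU_PER_WATT_HR
        return {
            "it_load_kw": it_load_watts / 1000,
            "cooling_tons": cooling_btu_hr / BTU_PER_COOLING_TON,
        }

    # A hypothetical 5,000 sq ft room: ~2 MW of IT load, ~569 tons of cooling.
    print(estimate_loads(5_000))
    ```

    Even a toy model like this shows why support infrastructure dominates floor planning: hundreds of tons of cooling means multiple CRAC units and chillers, all of which need room to live and room to grow.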

    Cooling and power infrastructure takes up space. If you’re building a data center for the next decade, it needs to be expandable for both modern and future technologies. Always be looking ahead, or you risk running out of room for support infrastructure. Server rooms might have more computing power in the same area, but they’ll also need more cooling and power to match. UPS systems, CRACs, and air handlers are sizable pieces of equipment that have to live somewhere.

    Conversely, a data center can also be overengineered. You need to stay flexible enough to adapt to future technologies and best practices. Being able to switch your floor layout, or route cooling to different rooms, or add an additional power circuit—these might not be possible if your data center is too perfectly designed. While that approach might lead to great efficiencies today, it might be the equivalent of shooting yourself in the foot down the road.

    Planning for Flexibility

    Nobody knows what the future holds, but you can take some actions when designing your data center to future-proof it, at least partially.

    • Route data and power cabling through different paths and try to leave an easy method of rewiring cables. Data centers without raised floors are often easier to cable, since cabling runs overhead.
    • A lack of raised floors can also make white space easier to repurpose if you need to scale down a data center.
    • Use freestanding PDUs where possible, positioned within the room with power fed to them, rather than up against the wall.
    • Free cooling and other efficient and/or modular systems are simpler to scale and move.