How Custom Data Centers Enable True High Performance Computing (HPC)


Today’s High Performance Computing (HPC) environments are highly dependent on custom hardware, power, and cooling specifications. An outmoded data center facility cannot measure up to the requirements of big data and other intensive HPC applications.

Working with a custom data center design firm can help your organization determine how to update or replace your legacy data center to meet new HPC standards for power, cooling, uptime, and other design factors.

      “But my existing data center is pretty efficient, and my hardware is still warrantied!”

      That may be true, but you need a facility that is ready to grow for the future while also meeting the strikingly different needs of HPC. By adopting modern power designs, cabinet densities, and network standards, you can anticipate applications and increased data center use for years to come. Here are some of the ways modern data centers enable true high performance computing.

      Power Densities & Subsequent Cooling

Today’s HPC environments carry some of the highest power densities we have seen in the data center space, reaching upwards of 90 kW per cabinet. Most of these deployments are clustered into pods, creating significant hot spots. This increased density demands careful planning of your cooling systems and airflow distribution, as the concentrated heat and exhaust from densely packed equipment in each rack can strain cooling capacity and drive up temperatures.
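As a rough illustration of what those densities mean for cooling, nearly all of the electrical power a pod draws must be removed as heat. The sketch below uses the 90 kW per cabinet figure from above; the pod size of eight cabinets is an assumed example, not a spec from this article.

```python
def pod_heat_load_kw(cabinets: int, kw_per_cabinet: float) -> float:
    """Nearly all IT electrical draw becomes heat the cooling
    system must reject, so heat load ~= power draw."""
    return cabinets * kw_per_cabinet

def kw_to_btu_per_hr(kw: float) -> float:
    """Convert kW to BTU/hr, the unit CRAC/CRAH units are
    commonly rated in (1 kW ~= 3412.14 BTU/hr)."""
    return kw * 3412.14

# Hypothetical pod: 8 cabinets at 90 kW each
load_kw = pod_heat_load_kw(8, 90)        # 720 kW of heat in one pod
print(kw_to_btu_per_hr(load_kw))         # ~2.46 million BTU/hr to reject
```

A single pod like this rejects more heat than many entire legacy data halls were designed for, which is why hot-spot planning matters.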

      There are several categories of cooling that can meet high performance needs. Free air cooling is very energy efficient and fits alongside the modern recommendations for higher operating temperature on the data center floor. Basically, an indirect (filtered) or direct air handler pumps outside ambient air throughout the data center. Interior air is either passed through heat exchangers with plain old water, or else pushed directly outside. Humidification controls are often necessary when using outside air.

Closed loop chilled water cooling is one of the most common cooling methods: chilled water or a refrigerant solution is pumped through pipes to exchange heat. With a closed loop system, the coolant is recirculated at a constant temperature from a chiller unit through the data center and back. Glycol cooling systems are a form of closed loop in which a water-glycol mixture serves as the coolant; it is pumped through heat exchangers and can be cooled below the freezing point of plain water.

More complex and heavy-use deployments require water delivered to the server core or oil-based immersion cooling, where specialized equipment submerges the servers themselves in a cooling agent, typically a nonconductive specialized oil.

      Power Availability & Distribution

Along with higher power densities, intense high performance computing environments require more servers and supporting equipment in general, which drives up the total power needed. With power requirements often in the multi-megawatt range or higher, a robust power infrastructure is essential to successfully deploy your HPC environment. Existing data center campuses, even with power utilities deployed as recently as five years ago, may not be sufficient to meet today’s requirements, let alone your 5-10 year expansion plan. A custom designed data center can include onsite substations, areas for future generators with pre-poured concrete and hookups, infrastructure for additional transformers, and other features that enable a future expansion to add more MW capacity.

Power distribution requirements of today’s HPC environments are as complex as the power density requirements. HPC distribution requirements range from 480V direct utility power to 208V three-phase 50A feeds. This configuration may or may not be backed by an Uninterruptible Power Supply (UPS), depending on the deployment.
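To put the 208V three-phase 50A feed mentioned above in concrete terms, usable power follows the standard three-phase formula. The 80% continuous-load derating below reflects common North American electrical practice and unity power factor is assumed; adjust both for your actual equipment.

```python
import math

def three_phase_kw(volts: float, amps: float, derate: float = 0.8) -> float:
    """Usable power from a 3-phase feed: P = sqrt(3) * V_line * I,
    derated to 80% of the breaker rating for continuous loads
    (derate and unity power factor are assumptions here)."""
    return math.sqrt(3) * volts * amps * derate / 1000

# The 208V 3-phase 50A feed discussed above
print(round(three_phase_kw(208, 50), 1))  # ~14.4 kW usable per feed
```

At 90 kW per cabinet, a single such feed covers only a fraction of a rack, which is why HPC deployments often step up to higher-voltage distribution.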

At the same time, your legacy systems and servers that don’t need the same levels of power must also remain powered, often in the same facility. The ability to deliver multiple types of power to an HPC environment, in addition to your traditional networking and storage environments, under one roof is imperative.

      Connectivity

With the massive data sets required in today’s HPC environments, the ability to share and store data is almost as important as the information being computed. Networking must be seamless, meaning access to high capacity connectivity is vital. Yesterday’s 1 Gbps circuits have been replaced with 100 Gbps, along with the ability to reach multiple locations, with different providers, at the lowest latency possible.

Data in the petabytes is a common occurrence. As data volume continues to increase, network design for HPC must be reconsidered. Legacy network designs, built around virtualized servers and storage, focused on aggregation. Instead of stepping up to 10GbE, you might need to go straight to 40 or 100 Gbps, with deep switch buffers (1MB or more) to absorb bursts. Burst loads are common in these environments, so a fabric network architecture is a good strategy to consider, as fabrics are easier to scale and reduce congestion from client-server traffic.
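The jump from 1 Gbps to 100 Gbps is easier to appreciate with a back-of-envelope transfer time for a petabyte of data. This sketch assumes ideal line rate with no protocol overhead, so real transfers will be slower.

```python
def transfer_time_hours(petabytes: float, gbps: float) -> float:
    """Ideal line-rate transfer time, ignoring protocol overhead.
    Decimal units: 1 PB = 1e15 bytes, 1 Gbps = 1e9 bits/s."""
    bits = petabytes * 1e15 * 8
    return bits / (gbps * 1e9) / 3600

print(round(transfer_time_hours(1, 100), 1))   # ~22.2 hours at 100 Gbps
print(round(transfer_time_hours(1, 1) / 24))   # ~93 days at 1 Gbps
```

Moving a petabyte in under a day versus roughly a quarter of a year is the difference between a workable workflow and an unusable one.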

       

Ultimately, an HPC environment has many design considerations that differ from a standard data center, and a custom consulting firm can help you make sure your power, connectivity, redundancy, and cooling meet your needs and anticipate future growth.