There is a saying that anyone who believes that exponential growth can last forever in a finite world is a madman or an economist. I would add the data center executive to this group.
And who can blame us? The data center industry has been growing exponentially since its inception, and the recent pandemic ushered in record-breaking demand for data. However, we are nearing a point of diminishing returns for expanding capacity and would be better served by investing in efficiency.
In this article, I will explain how data centers can improve their bottom line while enhancing reliability, performance, and efficiency.
The Case for Efficiency
Data centers rely on electricity more than almost any other industry; it can constitute as much as 70% of total operational cost. Data centers in the USA consume around 90 billion kilowatt-hours annually, roughly 2% of national consumption. This rising demand is driving up prices, and the extreme reliance on electricity makes data centers very sensitive to market fluctuations.
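As a quick sanity check on the figures above, the two numbers imply a national consumption total; the implied total below is derived arithmetic, not a figure from this article.

```python
# If US data centers use ~90 billion kWh/year and that is ~2% of
# national consumption, the implied national total follows directly.
dc_kwh = 90e9   # annual US data center consumption, kWh
share = 0.02    # data centers' share of national consumption

national_kwh = dc_kwh / share
print(f"Implied US consumption: {national_kwh / 1e12:.1f} trillion kWh/year")
```

The implied total of about 4.5 trillion kWh per year is in line with published US figures, so the two statistics are mutually consistent.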
There are also external factors making energy a finite and expensive resource:
- Climate change is projected to increase global energy demand by 25% to 58%.
- Electric vehicles are increasing the demand for electricity and putting pressure on the electric grid.
- High natural gas prices are raising electricity prices.
The data center industry is at a crossroads: we need to reduce our energy footprint. The only way forward is to focus on sustainability.
Exploring Technologies for Improving Data Center Cooling & Power Management
Up to 40% of a data center’s energy consumption goes into cooling and ventilation, making it a crucial area to address.
Let’s delve deeper into the topic of advanced cooling and power management.
As electrical current flows through circuits, it generates heat, which cooling systems must dissipate to prevent damage and keep servers operating at an ideal temperature between 70°F and 75°F.
Here are some innovative energy-efficient cooling technologies in use right now:
Aisle Containment
Aisle containment is a mature mechanical cooling technique that uses containment panels to create a physical barrier between cold intake air and hot exhaust air. Preventing the two from mixing can lower cooling costs by 5% to 10%.
In a contained system, server racks are arranged in rows or aisles, with the cold air intakes facing one direction and the hot air exhausts facing the other, making one side of the row hot and the other cold.
Two-stage Evaporative Cooling
Evaporative cooling passes hot, dry air over water; as the water evaporates, it absorbs heat from the air and lowers its temperature.
Two-stage evaporative coolers combine two cooling stages: an indirect stage and a direct stage. In the indirect stage, the air is cooled by contact with a cold surface rather than by evaporating water into the airstream. The cooled air then passes to the direct stage, where it is cooled further through direct contact with water droplets.
The evaporative cooling method requires significantly less energy than traditional refrigerant-based cooling systems, and the water can be reused, reducing its environmental impact. Indirect evaporative cooling is exceptionally efficient in regions with moderate to dry climates and ample access to water.
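The two stages can be estimated with the standard wet-bulb effectiveness formula, T_out = T_in − ε(T_in − T_wb). The sketch below is a simplified illustration: the effectiveness values and weather conditions are assumptions, and it ignores the small wet-bulb drop that the indirect stage actually produces.

```python
# Hedged sketch: supply-air temperature of a two-stage (indirect +
# direct) evaporative cooler, using wet-bulb effectiveness.
# Effectiveness values are illustrative assumptions, not vendor data.

def evap_stage(t_dry, t_wet, effectiveness):
    """Outlet dry-bulb temperature (°F) for one evaporative stage."""
    return t_dry - effectiveness * (t_dry - t_wet)

outdoor_dry = 95.0  # °F, hot dry climate
outdoor_wet = 65.0  # °F, wet-bulb temperature

# Stage 1 (indirect): cools the air without adding moisture.
# Simplification: assume the wet-bulb temperature stays constant.
after_indirect = evap_stage(outdoor_dry, outdoor_wet, effectiveness=0.6)

# Stage 2 (direct): cools further by evaporating water into the air.
supply = evap_stage(after_indirect, outdoor_wet, effectiveness=0.9)

print(f"After indirect stage: {after_indirect:.1f} °F")
print(f"Supply air:           {supply:.1f} °F")
```

Even with modest assumed effectiveness, 95°F outdoor air lands in the server-friendly range, which is why this method thrives in dry climates where the wet-bulb temperature is low.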
Optimized Site Selection
Data center location and climate are becoming crucial in running an efficient operation. Building a data center in a cool and dry environment facilitates better heat transfer, reducing the need for extensive air conditioning and ventilation.
Suitable climates reduce strain on cooling systems and lower the risk of system failures, making the data center more reliable. Also, electricity, materials, labor, and transportation costs vary geographically, making some locations more attractive.
When choosing an optimal location for a data center, you should consider the following factors:
- The availability, affordability, and reliability of electricity.
- Low risk of weather events and natural disasters.
- The affordability of high-quality construction.
- The presence of telecommunication infrastructure.
- The availability of qualified personnel.
Liquid Cooling
Liquid cooling can significantly enhance performance compared to conventional air cooling because liquids have a much higher heat capacity than air.
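The advantage is easy to quantify. Using standard textbook property values at room temperature, a back-of-the-envelope comparison shows how much more heat a given volume of water can carry than the same volume of air:

```python
# Volumetric heat capacity comparison: heat carried per unit volume
# per degree of temperature rise. Property values are standard
# textbook figures for ~25 °C.

AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005.0          # J/(kg*K)
WATER_DENSITY = 997.0    # kg/m^3
WATER_CP = 4186.0        # J/(kg*K)

air_vol_capacity = AIR_DENSITY * AIR_CP        # J/(m^3*K)
water_vol_capacity = WATER_DENSITY * WATER_CP  # J/(m^3*K)

ratio = water_vol_capacity / air_vol_capacity
print(f"Water carries ~{ratio:.0f}x more heat per unit volume than air")
```

The roughly three-thousand-fold gap is why liquid loops can move the same heat with far less pumped volume than fans can move air.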
The most widely used liquid cooling method involves replacing traditional fan coolers with water blocks and connecting them to a broader liquid cooling circuit. This approach is similar to water cooling a desktop computer.
Direct-to-chip (direct-on-chip) liquid cooling uses flexible tubes to bring a non-flammable, dielectric fluid directly to the processor. The fluid absorbs heat (in two-phase systems it vaporizes) and carries it away from the IT equipment.
In addition to the basic water-cooling approach, more advanced methods are available. For example, Microsoft's Project Natick submerged an entire data center offshore: a group of standard servers housed in a waterproof container, with heat dissipated into the surrounding water, which maintained stable temperatures. The sealed environment and low temperatures led to a significant increase in reliability.
Immersion cooling is another liquid cooling system in which servers are submerged in a thermally, but not electrically, conductive fluid that comes into direct contact with the equipment and cools it.
Immersion cooling is highly efficient and also simpler than mechanical cooling, as it has fewer moving parts. It also enables high-density deployments that would be difficult to cool with air, thus requiring less data center space. Remember that a data center with a more efficient cooling system can house more servers without overheating problems.
As data centers consume more energy, power management has become critical to their operation. Let’s take a look at a variety of strategies and technologies to optimize energy usage.
Direct Current Power Systems
It may seem counterintuitive, but DC power has several advantages over AC power in data centers:
- DC power conversion is simpler, requiring less space and equipment. A DC-powered data center can be smaller and more efficient in terms of both space utilization and equipment maintenance costs.
- Power quality is better in DC systems, and there is less power loss than in AC systems, which waste energy in each AC-DC conversion stage (for example, during UPS double conversion).
- With DC systems, battery strings can be added over time as the load increases without changing the existing architecture. This allows for faster upgrades and installations as the facility grows.
- DC power systems can more easily integrate with other energy sources, like solar panels and fuel cells.
Dynamic Voltage and Frequency Scaling
Dynamic voltage and frequency scaling (DVFS) controls and adjusts the operating voltage and frequency of the central processing unit to save energy and reduce power consumption. DVFS can help reduce the energy consumption of servers by dynamically adjusting the voltage and frequency based on the workload demand, leading to reduced power usage when servers are idle or have low workloads.
By reducing the voltage and frequency of the CPU when it is not under heavy load, energy consumption and heat generation can be reduced, leading to longer life, lower cooling costs, and more efficient use of energy.
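The savings come from the fact that dynamic CPU power scales roughly as P = C·V²·f, so lowering voltage along with frequency cuts power faster than it cuts performance. The numbers in this sketch are invented for illustration, not measurements from any particular CPU.

```python
# Illustrative DVFS sketch: dynamic power P = C * V^2 * f,
# where C is the effective switched capacitance.
# All values below are hypothetical.

def dynamic_power(c_eff, volts, freq_hz):
    """Approximate dynamic CPU power in watts."""
    return c_eff * volts**2 * freq_hz

C_EFF = 1e-9  # farads, hypothetical effective capacitance

full = dynamic_power(C_EFF, volts=1.2, freq_hz=3.0e9)    # full speed
scaled = dynamic_power(C_EFF, volts=0.9, freq_hz=2.0e9)  # DVFS state

print(f"Power at full speed: {full:.2f} W")
print(f"Power after DVFS:    {scaled:.2f} W")
print(f"Savings: {(1 - scaled / full):.1%} for a 33% frequency drop")
```

Because voltage enters the formula squared, a one-third frequency reduction paired with a modest voltage drop yields well over half the power back, which is exactly the lever DVFS pulls on lightly loaded servers.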
The Future of Data Center Cooling and Power Management
Several cutting-edge cooling and power management technologies have demonstrated significant potential but have yet to reach full-scale adoption. Nevertheless, they show great promise and are worth keeping an eye on.
AI and Machine Learning
Artificial intelligence and machine learning algorithms are now all the rage, but the data center industry was among the first to utilize them.
AI algorithms can automatically place each workload on the most appropriate server based on the workload's requirements and the availability of resources. This optimization ensures that workloads run on suitable servers, reducing the overall load.
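At its simplest, resource-aware placement is a bin-packing problem. The sketch below uses a plain best-fit heuristic (not any vendor's actual scheduler) to consolidate workloads onto as few servers as possible, so the rest can stay powered down; the server and workload shapes are hypothetical.

```python
# Minimal best-fit placement sketch: assign each workload to the
# server with the least spare capacity that still fits it, packing
# work onto few machines. Purely illustrative, one resource (vCPUs).

def place(workloads, servers):
    """workloads: list of vCPU demands; servers: list of vCPU capacities.
    Returns {server_index: [demands placed there]}."""
    free = list(servers)
    placement = {}
    for w in sorted(workloads, reverse=True):  # large jobs first
        candidates = [i for i in range(len(free)) if free[i] >= w]
        if not candidates:
            raise RuntimeError(f"no server can fit workload {w}")
        best = min(candidates, key=lambda i: free[i] - w)  # tightest fit
        free[best] -= w
        placement.setdefault(best, []).append(w)
    return placement

demands = [4, 8, 2, 6, 3]      # vCPUs requested per workload
capacities = [16, 16, 16]      # vCPUs available per server

result = place(demands, capacities)
print(result)
print("servers used:", len(result))
```

Here 23 vCPUs of demand fit on two of the three servers, leaving one idle; production schedulers layer ML-driven demand forecasts and multi-resource constraints on top of this basic idea.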
AI and ML are also used to predict and prevent failures in a data center by analyzing data from sensors, logs, and other sources. The algorithms identify patterns and trends that indicate an impending failure and take action to prevent it, reducing the need for manual intervention and maintenance.
Lastly, AI and ML algorithms help optimize energy usage in data centers by analyzing energy consumption and workload patterns and identifying opportunities to reduce energy usage, such as powering down idle servers or adjusting the data center’s temperature.
Heat Recovery
Heat recovery in data centers refers to capturing and reusing the waste heat generated by electronic equipment. Recovery systems such as heat exchangers or hot water loops transfer waste heat from the data center floor to another system that can use it, instead of letting it escape into the environment.
Not only does heat recovery save money on heating costs in colder climates, but it also helps maintain a more stable and consistent temperature, which improves the overall performance and reliability of equipment.
In Sweden, there is an initiative to use waste heat from Stockholm's data centers to warm homes. The city's largest data centers produce more than 100 MW of waste heat, potentially enough to heat over 80,000 homes. The Stockholm city government aims to use heat recovery to meet 10% of the city's total heating needs by 2035.
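The quoted figures imply a per-home heat budget; the calculation below is derived from those two numbers, not a figure from the initiative itself.

```python
# Sanity check on the Stockholm numbers: 100 MW of waste heat spread
# across 80,000 homes implies the average continuous heat draw per home.

waste_heat_mw = 100
homes = 80_000

kw_per_home = waste_heat_mw * 1000 / homes
print(f"~{kw_per_home:.2f} kW of continuous heat per home")
```

Roughly 1.25 kW of continuous heat per home is a plausible year-round average for district heating, which suggests the headline figures hang together.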
The increasing reliance on information technology has led to the rapid expansion of data centers. However, we are nearing the limits of sustainable growth. Once considered an afterthought, efficiency has quickly become the key to growth as we enter an uncertain economic landscape.
As an industry, we must acknowledge our dependence on natural resources and our environmental impact. By minimizing energy consumption for non-IT functions, we are not just looking out for our interests but for the interests of future generations.
While there is still much work to be done, the future of the data center is bright. We are making steady progress, with many innovations on the horizon.
By Ron Cadwell