
Digital transformation has been underway over the past decade. With the growth of data-intensive industries such as cloud services and the Internet of Things (IoT), the total footprint of data centers has expanded steadily. The current boom in artificial intelligence (AI) and machine learning (ML) technologies has further accelerated data center demand. According to Global Market Insights, the global data center infrastructure market will grow at a compound annual growth rate of more than 12% between 2023 and 2030.
As data centers expand in scale, their energy consumption rises significantly. For the U.S. market alone, McKinsey predicts that total data center power demand will reach 35 gigawatts (GW) by 2030, doubling from 17 GW in 2022. This trend not only means that data center operating costs will rise sharply, but also that the negative impact on environmental sustainability will intensify further.
The breakthrough development of artificial intelligence is inseparable from high-performance GPUs and CPUs. These chips release a great deal of heat when running AI and ML workloads such as training large language models (LLMs), yet the ideal operating temperature range for data centers is typically between 18°C and 27°C. If operators cannot keep rising data center temperatures under control, the result is equipment failure, system shutdown, and ultimately heavy economic losses.
Statistics show that about 30% of data center downtime incidents are related to abnormalities in the operating environment, with temperature, humidity, and airflow among the contributing factors. Keeping the data center within its optimal operating temperature range is therefore critical to the smooth running of workloads. Overall, increasingly severe cooling challenges have become an urgent problem for operators to solve.
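As a concrete illustration of that operating window, here is a minimal monitoring sketch that checks rack inlet temperatures against the 18°C–27°C range mentioned above and flags anything outside it. The sensor names and readings are hypothetical and serve only to show the idea.

```python
# Minimal sketch: flag rack inlet temperatures outside the 18-27 °C window
# mentioned above. Sensor names and readings are hypothetical examples.

RECOMMENDED_RANGE_C = (18.0, 27.0)

def out_of_range(readings_c, low=RECOMMENDED_RANGE_C[0], high=RECOMMENDED_RANGE_C[1]):
    """Return the sensors whose inlet temperature falls outside the window."""
    return {name: t for name, t in readings_c.items() if not (low <= t <= high)}

if __name__ == "__main__":
    readings = {"rack-a1-inlet": 24.5, "rack-a2-inlet": 29.3, "rack-b1-inlet": 17.2}
    for sensor, temp in out_of_range(readings).items():
        print(f"WARNING: {sensor} at {temp:.1f} °C is outside the 18-27 °C range")
```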
To grasp the scale of the heat involved, consider a specific case. Technology giant Meta recently announced that it will purchase 350,000 of Nvidia’s high-performance H100 GPUs. The maximum thermal design power (TDP) of this GPU is around 300 W to 700 W.
Preliminary estimates indicate that once all 350,000 GPUs are put into operation, Meta’s data centers will have to dissipate roughly 100 to 250 megawatts (MW) of additional heat. According to the U.S. Department of Energy’s figures, this is equivalent to the electricity used by 80,000 to 200,000 U.S. homes. In other words, Meta needs an efficient way to remove heat equivalent to a small city’s electricity consumption from its data centers.
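The range quoted above follows directly from multiplying the GPU count by the per-card TDP. Here is a minimal back-of-the-envelope sketch using only the figures stated in this article.

```python
# Back-of-the-envelope estimate of the additional heat load from 350,000 GPUs,
# using only the TDP range quoted above (300-700 W per card).

GPU_COUNT = 350_000
TDP_MIN_W, TDP_MAX_W = 300, 700  # thermal design power per GPU

low_mw = GPU_COUNT * TDP_MIN_W / 1e6    # watts -> megawatts
high_mw = GPU_COUNT * TDP_MAX_W / 1e6

print(f"Estimated additional heat load: {low_mw:.0f} MW to {high_mw:.0f} MW")
# -> roughly 105 MW to 245 MW, i.e. the ~100-250 MW figure cited above
```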
In addition to technology giants, data center cooling challenges are increasingly common for small and medium-sized enterprises. On the one hand, the power of data center cabinets keeps rising: cabinets rated at 70 kW and above are gradually becoming common, significantly increasing the total amount of heat generated. On the other hand, traditional air-cooling technologies, such as open-room air cooling and contained (enclosed) air cooling, become clearly insufficient once cabinet power reaches about 50 kW.
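To see why air struggles at these densities, compare the flow rates needed to carry 50 kW away at a modest temperature rise. The sketch below uses the textbook relation Q = ṁ·c_p·ΔT with typical room-temperature fluid properties; the 10 °C coolant temperature rise is an illustrative assumption, not a figure from this article.

```python
# Compare air vs. water flow needed to remove 50 kW from one rack,
# using Q = m_dot * c_p * delta_T. The 10 °C coolant temperature rise is an
# illustrative assumption; fluid properties are typical room-temperature values.

HEAT_LOAD_W = 50_000        # rack heat load (W)
DELTA_T_K = 10.0            # assumed coolant temperature rise (K)

AIR_CP = 1005.0             # J/(kg*K)
AIR_DENSITY = 1.2           # kg/m^3
WATER_CP = 4186.0           # J/(kg*K)
WATER_DENSITY = 998.0       # kg/m^3

def volume_flow(q_w, cp, density, dt):
    """Volumetric flow (m^3/s) required to absorb q_w watts at a dt rise."""
    mass_flow = q_w / (cp * dt)
    return mass_flow / density

air_m3s = volume_flow(HEAT_LOAD_W, AIR_CP, AIR_DENSITY, DELTA_T_K)
water_m3s = volume_flow(HEAT_LOAD_W, WATER_CP, WATER_DENSITY, DELTA_T_K)

print(f"Air:   {air_m3s:.2f} m^3/s  (~{air_m3s * 2118.9:.0f} CFM)")
print(f"Water: {water_m3s * 1000:.2f} L/s")
# Air needs thousands of CFM per rack, while water needs only about a litre per second.
```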
Therefore, data centers urgently need more efficient and energy-saving cooling solutions. Fortunately, as a heat-transfer medium, liquid has far better thermal conductivity than air, giving us a more effective option. At the same time, liquid cooling solutions continue to evolve, from liquid-assisted air cooling to full immersion liquid cooling. Liquid cooling not only helps data centers cope with the cooling challenges brought by AI workloads, but also further improves overall energy efficiency. Below is a brief look at four different types of liquid cooling technology.
1. Liquid-assisted cooling
The simplest liquid cooling solution is to combine liquid cooling with an existing air cooling system to increase the overall cooling capacity. For example, the rear door heat exchanger (RDHX) replaces the standard rack door with a door with a built-in liquid-to-air heat exchanger, which can effectively dissipate the heat generated by the chip and assist in improving the heat dissipation effect of air cooling.
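At its core, a rear door heat exchanger is a liquid-to-air heat exchanger sitting in the server exhaust stream, so a simple effectiveness-based heat balance illustrates how much of a rack’s exhaust heat such a door can intercept. The effectiveness value, airflow, and temperatures below are illustrative assumptions, not HPE specifications.

```python
# Simplified effectiveness model of a rear door heat exchanger (RDHX):
# heat removed = effectiveness * C_min * (hot inlet - cold inlet),
# assuming the air side is the limiting (C_min) stream.
# All numbers below are illustrative assumptions, not vendor specifications.

AIR_CP = 1005.0            # J/(kg*K)

effectiveness = 0.7        # assumed heat-exchanger effectiveness
air_mass_flow = 3.0        # kg/s of server exhaust air through the door (assumed)
exhaust_temp_c = 40.0      # assumed server exhaust temperature
water_supply_c = 18.0      # assumed facility water supply temperature

c_air = air_mass_flow * AIR_CP                        # air-side capacity rate (W/K)
q_removed = effectiveness * c_air * (exhaust_temp_c - water_supply_c)

print(f"Heat intercepted by the rear door: {q_removed / 1000:.1f} kW")
# ~46 kW under these assumptions, which is why an RDHX can meaningfully
# offload the room air-conditioning system.
```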
Another solution in this category increases cooling efficiency by isolating the cold-air supply and hot-air return paths from the open room air. The HPE Adaptive Rack Cooling System (ARCS) adopts this approach and can simultaneously cool four cabinets with a total power of up to 150 kW, extending the operational life of the data center.
2. Direct Liquid Cooling (DLC)
A further step is to bring the coolant into direct contact with heat-generating components such as CPUs and GPUs, typically via cold plates mounted on them. For example, the HPE Apollo Gen10 server uses the HPE Apollo direct liquid cooling (DLC) system. This highly integrated cooling design reduces total server fan power consumption by 81%. Since the cooling system typically accounts for 40% of a data center’s total energy consumption, DLC technology can deliver significant energy savings.
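The two figures quoted above can be turned into a rough facility-level estimate. The sketch below assumes, purely for illustration, a hypothetical 1 MW IT load and a fan share of server power of about 10%; both are assumptions, not HPE figures. Only the 81% fan reduction comes from the article.

```python
# Rough estimate of how the 81% fan-power reduction quoted above translates
# into facility savings. The facility size and the fan share of server power
# are illustrative assumptions, not figures from this article.

FACILITY_IT_LOAD_KW = 1_000     # assumed IT load of a hypothetical facility
FAN_SHARE_OF_IT = 0.10          # assumed: fans draw ~10% of server power under air cooling
FAN_REDUCTION = 0.81            # from the DLC figure quoted above

fan_power_kw = FACILITY_IT_LOAD_KW * FAN_SHARE_OF_IT
fan_savings_kw = fan_power_kw * FAN_REDUCTION

print(f"Fan power under air cooling: {fan_power_kw:.0f} kW")
print(f"Saved by DLC (fans alone):   {fan_savings_kw:.0f} kW")
# On top of this, because cooling typically accounts for ~40% of total facility
# energy, moving heat in liquid rather than air also shrinks the chiller/CRAH
# load, so facility-level savings are larger than the fan figure alone.
```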
3. Fully Enclosed Liquid Cooling
Servers that use fully enclosed liquid cooling technology have their heat-generating components cooled by liquid in a closed-loop system, without relying on the surrounding air for heat dissipation. HPE Cray Supercomputing EX systems use this technology to efficiently carry heat away from the power-hungry CPU and GPU components in supercomputers. A typical application is “Frontier”, one of the world’s most powerful supercomputers, which topped the list of the world’s top 500 supercomputers released in November 2023.
Fully enclosed liquid cooling technology can not only support supercomputers to achieve unprecedented computing performance, but also significantly improve the sustainability of the system. It is worth mentioning that in the Green500 list released at the same time, 6 of the top 10 supercomputers used HPE Cray EX systems equipped with fully enclosed liquid cooling technology. This system enables Frontier to be both powerful and sustainable, ranking eighth on the list with an energy efficiency of 52.59 GFLOPS/W, making it one of the most environmentally friendly supercomputers in the world.
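The Green500 metric quoted above is simply sustained HPL performance divided by average system power. As a rough sanity check, the sketch below backs the implied power draw out of the 52.59 GFLOPS/W figure, assuming an Rmax of roughly 1.19 exaFLOPS (Frontier’s published HPL result at the time, treated here as an assumption).

```python
# Green500 efficiency = sustained HPL performance (FLOPS) / average power (W).
# Back out the implied power draw from the 52.59 GFLOPS/W figure quoted above,
# assuming Rmax ~= 1.19 exaFLOPS (Frontier's published HPL result; an assumption here).

EFFICIENCY_GFLOPS_PER_W = 52.59
RMAX_FLOPS = 1.19e18            # assumed sustained HPL performance

implied_power_w = RMAX_FLOPS / (EFFICIENCY_GFLOPS_PER_W * 1e9)
print(f"Implied average power draw: {implied_power_w / 1e6:.1f} MW")
# -> roughly 22-23 MW: exascale performance at a small fraction of the
#    100-250 MW heat load estimated earlier for a large GPU fleet.
```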
4. Immersion Liquid Cooling
Immersion liquid cooling is the ultimate solution for liquid cooling technology. It refers to immersing electronic components or an entire server directly in an insulating coolant, where the liquid directly absorbs the heat generated by all components. Because servers using this cooling technology have their individual components and coolant sealed in the chassis, they are ideal for edge computing scenarios with harsh conditions.
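Because the fluid in an immersion tank absorbs all of the heat directly, its temperature rise over time follows the simple relation ΔT = Q·t / (m·c_p) until the heat is rejected elsewhere. The tank size, heat load, and fluid properties below are illustrative assumptions, roughly in the range of common dielectric coolants, not figures from any specific product.

```python
# How fast does an immersion tank warm up if no heat is rejected?
# Uses delta_T = Q * t / (m * c_p). All numbers are illustrative assumptions,
# roughly in the range of common dielectric coolants, not product figures.

HEAT_LOAD_W = 5_000          # assumed heat load of the immersed server (W)
COOLANT_MASS_KG = 200.0      # assumed mass of dielectric fluid in the tank
COOLANT_CP = 1_300.0         # assumed specific heat, J/(kg*K)

def temp_rise_after(seconds, q=HEAT_LOAD_W, m=COOLANT_MASS_KG, cp=COOLANT_CP):
    """Coolant temperature rise (K) after `seconds` with no heat rejection."""
    return q * seconds / (m * cp)

print(f"Rise after 10 min: {temp_rise_after(600):.1f} K")
# ~11.5 K in ten minutes: the fluid buffers short bursts well, but a pump and
# external heat exchanger are still needed to reject the heat continuously.
```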
HPE’s OEM partner Iceotope has worked with HPE and Intel to launch the Ku:l data center immersion liquid cooling solution. This is a fully enclosed, immersed, liquid-cooled, ruggedized mini data center powered by HPE ProLiant DL380 Gen10 servers. HPC benchmarks show that the solution’s immersion cooling system improves single-rack performance by 4% while consuming 1 kW less energy than traditional air-cooled systems. Iceotope says the solution achieved overall energy savings of 5% in IT operations, and could achieve a further 30% energy savings if deployed at scale. (Read “Liquid cooling solutions from HPE” for additional information.)
From extreme edge computing scenarios to the world’s most powerful supercomputers, and from new data centers to upgrades of existing facilities, liquid cooling has shown significant advantages in meeting the heat dissipation challenges of high-performance chips and has become a key technology for data center energy efficiency and sustainability.
Traditional air-cooling systems often fall short when it comes to handling the heat generated by today’s high-performance servers, leading to excessive energy consumption and operational costs. In contrast, liquid cooling offers a more effective solution by dissipating heat directly from critical components, maintaining optimal temperatures with far less energy. By adopting liquid cooling, data centers can not only achieve superior thermal management but also contribute to environmental sustainability. This approach is essential for creating eco-friendly, cost-efficient data centers that are equipped to meet the demands of the future.
Selecting the right liquid cooling solution for your data center hinges on various factors such as the scale of your operations, budget constraints, and specific cooling requirements. Whether you opt for direct-to-chip cooling, immersion cooling, rear door heat exchangers, or overhead liquid cooling, each has its own set of benefits tailored for different scenarios. It’s crucial to assess your needs meticulously and choose a solution that not only optimizes performance but also ensures energy efficiency and sustainability.
For those looking to purchase high-quality data center products, Router-switch.com is a highly recommended option. With 22 years of experience in the industry, we are a trusted provider known for our competitive pricing, reliable products, and excellent after-sales support.