Data centers are evolving faster than ever to accommodate constant change in the world of IT. A new report from 451 Research highlights some of the latest trends in data center setups, and one of the main findings is that rack densities are getting higher and will keep climbing for the foreseeable future.
For a long time the average density was about 5 kW per rack. That is changing quickly, the report shows: 45% of respondents expect their average density to reach roughly 11 kW per rack or higher within the next year. By comparison, in 2014 only 18% of respondents reported densities above 10 kW. In addition, 35% of enterprises say density is a crucial factor in deciding where to place workloads.
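To put those figures in perspective, here is a minimal sketch of what the jump from 5 kW to 11 kW per rack means for a facility's total IT load. The 200-rack hall size is a hypothetical assumption for illustration, not a figure from the report:

```python
# Hypothetical example: total IT load of a 200-rack data hall
# at the old and new average rack densities cited in the report.

RACKS = 200  # assumed hall size, not from the report

def total_it_load_kw(racks: int, kw_per_rack: float) -> float:
    """Total IT power draw for a hall of identical racks."""
    return racks * kw_per_rack

old = total_it_load_kw(RACKS, 5.0)   # long-time average: 5 kW/rack
new = total_it_load_kw(RACKS, 11.0)  # density many respondents now expect

print(f"At 5 kW/rack:  {old:.0f} kW")  # 1000 kW
print(f"At 11 kW/rack: {new:.0f} kW")  # 2200 kW
print(f"Increase: {new / old:.1f}x")   # 2.2x
```

The same power, cooling, and floor-space budget that once served a whole hall now covers less than half of it, which is why density now drives workload placement decisions.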
What's driving the changes?
In short: everything. Data consumption is rising, cloud adoption is growing, and a lot of new technologies are gaining ground. One of them is artificial intelligence: even though AI development is still at an early stage, it already demands enormous amounts of data and computing power. As a result, servers draw more and more electricity and generate more heat.
All of this is forcing data centers to change. They have to rethink traditional methods, improve cooling, and adopt new options and technologies. “New chips are impacting density. AI and new applications need a lot more energy per chip, and this has implications for the data center. People are expecting this to continue, and it’s not going to be easy to handle,” Kelly Morgan, VP of Data Center Infrastructure & Services at 451 Research, told Data Center Frontier.
Morgan expects that some data centers won’t be able to support the required densities. Others will upgrade their cooling systems and operating strategies, and some will offer higher density as an additional service. That could help them offset the costs and buy extra time to remodel their facilities. Either way, rising density is a big challenge that data centers will have to tackle.
More changes are coming
To accommodate these new density requirements, data centers will need to do more than simply add extra hardware; new technologies and chips will have to help them out. Companies are exploring different ways to optimize cooling, for example. Google is experimenting with liquid cooling, as is Microsoft, and Facebook is trying out a new method of air cooling for hotter climates. Smaller companies are developing solutions of their own, including fully immersed racks: the rack is submerged in a cooling fluid, which shrinks the footprint and removes heat from high-performance loads far more effectively than air.
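A rough sense of why density strains air cooling: essentially all the power a rack draws ends up as heat, and the airflow needed to carry that heat away grows linearly with load. Here is a minimal sketch using the standard sensible-heat relation; the rack loads and the 12 °C inlet-to-exhaust temperature rise are illustrative assumptions, not figures from the article:

```python
# Rough airflow needed to remove a rack's heat with air cooling.
# Sensible heat: P = rho * cp * flow * dT  =>  flow = P / (rho * cp * dT)

RHO_AIR = 1.2    # kg/m^3, density of air (approximate, sea level)
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3_per_s(power_w: float, delta_t_k: float = 12.0) -> float:
    """Volumetric airflow required to absorb power_w watts of heat
    at a delta_t_k rise between rack inlet and exhaust air."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_k)

for kw in (5, 11, 20):  # assumed rack densities, for illustration
    flow = airflow_m3_per_s(kw * 1000)
    print(f"{kw:>2} kW rack: ~{flow:.2f} m^3/s of air")
```

Doubling rack power doubles the required airflow (or the temperature rise), which is exactly where liquid and immersion cooling gain their advantage: liquids carry far more heat per unit volume than air.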
New hardware is also on the horizon. Nvidia, Intel, and many smaller companies are working on specialized hardware for AI and other high-performance workloads. The main goals are to improve efficiency, lower power consumption, and offer solutions optimized for specific workloads. Achieving all of this will be a challenge, but with every big name in the industry working on it, progress seems assured. Plenty of other projects are tackling the challenge in different ways: some simply try to improve what already exists, while others are pursuing an entirely “new model for high-performance silicon design,” as is the case with NUVIA.
Cerebras Systems, on the other hand, wants to rethink the form factor of data center computing. Instead of small chips, it is going big. Very big. Its Wafer-Scale Engine is the largest chip ever made, at roughly 22 cm on a side, and draws around 15 kW, which Cerebras aims to offset with sheer scale and performance. It is certainly an interesting project, and it is just the start: 451 Research expects plenty of opportunities ahead, with service providers only scratching the surface of possible data center innovations.