EPJ Web of Conferences
Volume 295, 2024
26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023)
Article Number: 07012
Number of pages: 5
Section: Facilities and Virtualization
DOI: https://doi.org/10.1051/epjconf/202429507012
Published online: 06 May 2024
Data Centre Refurbishment with the aim of Energy Saving and Achieving Carbon Net Zero
School of Physical and Chemical Sciences, Queen Mary University of London, Mile End Road, London, E1 4NS, UK
* e-mail: d.traynor@qmul.ac.uk
** e-mail: r.a.owen@qmul.ac.uk
*** e-mail: j.hays@qmul.ac.uk
Queen Mary University of London (QMUL), as part of the refurbishment of one of its data centres, will install water-to-water heat pumps to use the heat produced by the computing servers to provide heat for the university via a district heating system. This will reduce the use of high-carbon-intensity natural gas heating boilers, replacing them with electricity, which has a lower carbon intensity due to the contribution of wind, solar, hydroelectric, nuclear and biomass power sources.
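The carbon advantage of heating via heat pumps can be made concrete with a per-kWh-of-heat comparison. The sketch below is illustrative only: the carbon intensities, boiler efficiency, and heat-pump coefficient of performance (COP) are assumed round numbers, not figures from the paper.

```python
# Hedged sketch: carbon intensity of delivered heat, gas boiler vs heat pump.
# All numeric values below are illustrative assumptions, not from the paper.
GAS_INTENSITY = 0.20      # kg CO2e per kWh of gas burned (assumed)
BOILER_EFFICIENCY = 0.90  # fraction of gas energy delivered as heat (assumed)
GRID_INTENSITY = 0.21     # kg CO2e per kWh of grid electricity (assumed)
HEAT_PUMP_COP = 3.0       # kWh of heat delivered per kWh of electricity (assumed)

def carbon_per_kwh_heat_gas(gas_intensity=GAS_INTENSITY,
                            efficiency=BOILER_EFFICIENCY):
    """Carbon emitted per kWh of useful heat from a gas boiler."""
    return gas_intensity / efficiency

def carbon_per_kwh_heat_pump(grid_intensity=GRID_INTENSITY,
                             cop=HEAT_PUMP_COP):
    """Carbon emitted per kWh of useful heat from a heat pump."""
    return grid_intensity / cop

print(f"Gas boiler: {carbon_per_kwh_heat_gas():.3f} kg CO2e per kWh of heat")
print(f"Heat pump:  {carbon_per_kwh_heat_pump():.3f} kg CO2e per kWh of heat")
```

Under these assumptions the heat pump emits roughly a third of the boiler's carbon per unit of heat; the true saving depends on the actual grid mix and the achieved COP.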
The QMUL GridPP cluster today provides 15 PB of storage and over 20,000 job slots, mainly devoted to the ATLAS experiment. The data centre that houses the QMUL GridPP cluster was originally commissioned in 2004. By 2020 it was in significant need of refurbishment. The original design had a maximum power capacity of 200 kW, no hot/cold aisle containment, down-flow air conditioning units using refrigerant cooling, and no raised floor or ceiling plenum.
The main requirements of the refurbishment are: to significantly improve the energy efficiency and reduce the carbon usage of the university; to improve the availability and reliability of the power and cooling; to increase the capacity of the facility to provide for future expansion; and to provide a long-term home for the GridPP cluster to support the computing needs of the LHC and other new large science experiments (SKA/LSST) into the next decade.
After taking into account the future requirements, the likely funding allocation, the floor space in the data centre and the space available to house the cooling equipment, the following design was chosen: a total power capacity of 390 kW with redundant feeds to each rack; 39 racks with an average of 10 kW of power per rack (flexible up to 20 kW); an enclosed hot-aisle design with in-row cooling units using water cooling; and water-to-water heat pumps connected to the university's district heating system.
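The design figures above are internally consistent, which a short arithmetic check makes explicit. This is a minimal sketch using only the numbers stated in the design (390 kW total, 39 racks, up to 20 kW per rack); the interpretation of how many racks can run at maximum density is our own inference, not a claim from the paper.

```python
# Hedged sketch: checking the rack power budget from the stated design figures.
TOTAL_CAPACITY_KW = 390   # total power capacity from the design
NUM_RACKS = 39            # number of racks from the design
MAX_PER_RACK_KW = 20      # flexible per-rack maximum from the design

# Average power available per rack.
avg_per_rack = TOTAL_CAPACITY_KW / NUM_RACKS
print(f"Average power per rack: {avg_per_rack:.0f} kW")

# A rack drawing the 20 kW maximum consumes another rack's average share,
# so the total capacity bounds how many racks can run at maximum together.
max_dense_racks = TOTAL_CAPACITY_KW // MAX_PER_RACK_KW
print(f"At most {max_dense_racks} racks can draw 20 kW simultaneously")
```

The 10 kW average thus follows directly from the totals, and the flexibility to 20 kW per rack trades rack count against density within the same 390 kW envelope.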
An overview of the project, its status and the expected benefits in power and carbon savings are presented.
© The Authors, published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.