9 November 2018 update

Year 1: Building our Petascale Campus, how the University is delivering on its five year plan to uplift informatics, data and compute capabilities for research

We are coming to the end of the first year of the Petascale Campus Initiative (PCI). Today Prof Andrew Turpin, the PCI’s Senior Academic Convenor, is pleased to report progress on the first of four key strategic actions aimed at addressing this complex set of challenges. Those actions are to:

  • build the necessary infrastructure and services for a digitally-enabled research campus
  • future-proof its operations and resources
  • recognise the software practitioners and domain experts who support data-intensive research
  • streamline research data management.

Building the necessary infrastructure and services for a digitally-enabled research campus

The PCI was established in the face of three basic facts: our research computing infrastructure was ageing and insufficient; data demands were growing every year (the amount of data ingested into our University systems has doubled each year for the last three years); and our renewal and acquisition processes needed to be more responsive to this growing demand. The University Executive approved the project in March and Chancellery established the researcher-led PCI Team.

One of the Team’s first tasks was to oversee the acquittal of the November 2017 University Capital Advisory Group’s approved allocation of $7.25m for urgent infrastructure replacements, upgrades and improvements. The spend comprised:

  • Cloud capacity - $2.49m (includes 640 new large-capacity cores for larger virtual machines)
  • High-end computing:
      - Spartan cores - $1.04m
      - GPGPU storage - $0.54m
  • Computation project storage - $1.5m
  • Research Data Storage - $0.81m (includes server infrastructure to enhance the Mediaflux data management platform)
  • Connectivity / networks - $0.77m (switches, racks, cables, upgrades)
  • Structural review - $0.1m (establishing a services roadmap for future planning)

These funds have been combined with the 1000+ cores gained by transferring Melbourne Bioinformatics’ Snowy system to general use, and with additional investment from faculties and external funds, to replace, and in some instances uplift, our research computing capacity.

We’ve now established January 2018 baseline figures for general compute and storage capacity so that we can compare them with December 2018 and beyond. These are:

                                              January    December
 Cloud capacity – CPU cores¹                    5,168       5,888
 High-end computing
    Spartan cores                               1,112       3,272
    GPU cores                                     292         580
 Computation project storage (terabytes)        1,411       3,847
 Research Data Storage (terabytes)              3,824       3,800

¹ CPU/GPU cores: a CPU is the processing unit that receives instructions and performs calculations, or actions, based on those instructions. Processors can have a single core or multiple cores, core counts often aren’t comparable between architectures, and GPU cores are different from CPU cores. These units are provided here to offer baseline comparisons only.

Additional capacity, provided and serviced through private and externally funded systems (eg. an Australian Research Council Linkage Infrastructure, Equipment and Facilities grant funded the GPU cores), brings the total research computing environment at the University at the end of 2018 to approximately 15,000 cores of computing capacity and 8.5 petabytes of usable storage.

University Services recently updated users and the Team on progress made in 2018 to meet researcher demand for improvements in cloud, high-end computing, storage, data management and training. Our task now is to co-design the services and infrastructure roadmap - drawing on expertise across Faculties and University Services - so it reflects the research community’s priorities and meets budget constraints. We know that research data storage remains a particular headache for us all. Some of this has been addressed in the 2018 refresh of the University’s storage environment; however, given projected data growth, it will remain a focus for 2019/2020 and beyond.

The Team’s mandate, set by the University’s research leadership for this project, is to deliver a truly consultative process around investments in hardware and skills, ensuring our researchers operate in a globally competitive, data-intensive University. This requires complex communications, high-level technical translation skills and a great deal of goodwill across Faculties and business divisions.

We’re excited to be engaging with and representing our research community, and working with our colleagues in University Services to drive this process. We look forward to providing many more positive updates in the future.

Prof Andrew Turpin, Senior Academic Convenor, PCI

_______________________________________________________________________________________

Detailed system specifications and the current operating status of University research computing systems can be found at https://dashboard.hpc.unimelb.edu.au/

Contact: Helen van de Pol, Communications Lead | helen.vandepol@unimelb.edu.au | 03 8344 2055 | 0488 920 235.