The High Performance Computing (HPC) Project targets a priority of the Australian Astronomy Decadal Plan (2016–2025): world-class high-performance computing and software capability for large theoretical simulations, together with the resources to process and deliver the large data sets these facilities produce.
AAL aims to address this priority by purchasing high-performance and cloud computing resources in partnership with the National Computational Infrastructure (Gadi and the ODR), CSIRO (CASDA and YANDAsoft), the ARDC (Data Retention Project), and Swinburne University of Technology (OpenStack).
AAL purchases service units on Gadi, the new Fujitsu system at the National Computational Infrastructure (NCI). This time is available to the Australian astronomical community via a competitive review process, overseen by the AAL Supercomputer Time Allocation Committee (ASTAC).
Researchers affiliated with an Australian institution are eligible to apply; time is awarded to projects running large-scale parallel computations.
AAL has partnered with NCI to establish the Australian Astronomy Optical Data Repository (ODR).
The focus of this project is for NCI to provide the data repository operation, including data management, data services, and software and data release processes, which underpin:
CASDA is a collaboration between CSIRO and the Pawsey Supercomputing Centre to build an archive that stores ASKAP data and makes it accessible to astronomers around the world. CASDA stores the science-ready data products produced by ASKAPsoft, CSIRO's custom-built software package.
CASDA has been in development since 2013, with the first release in late 2015. Several new releases of CASDA with additional enhancements have been made in the last few years. More information on CASDA can be found on their website.
Demand for data is growing exponentially, and with it the need to properly collect, store, and analyse the increasing amount of information gathered by Australian researchers. Applying metadata is also important for making the data findable and reusable. To tackle this growing problem in the world of research, the Australian Research Data Commons (ARDC) has embarked on a new three-year project in which it partners with Australian research organisations to co-invest in new data storage capabilities and in resources to manage datasets of national significance.
AAL has partnered with ARDC for Phase 2 of this Data Retention Project, with an aim of enriching nationally significant astronomy data collections by the consistent and controlled application of metadata. The project will enable Australian researchers to maximise the value of these collections and give future researchers timely access to high quality data collections supported by stable infrastructure. Further information can be found on the ARDC website.
This project commenced in 2019 with the aim of supporting the wider ASKAP community to install and use YANDAsoft (formerly ASKAPsoft, the astronomical calibration and imaging software pipeline for ASKAP) on HPC platforms outside the Pawsey Supercomputing Centre. The intent was to maximise the quality and efficiency of the science from the ASKAP telescope. This funding enabled CSIRO to:
AAL support for YANDAsoft concluded at the end of 2021 but YANDAsoft development continues as part of the Australian SKA Regional Centre Design Study Program.
Swinburne University of Technology offered the national astronomy community ready access to 1,000 OpenStack virtual machines (VMs) on the dedicated Swinburne cell within the NeCTAR Cloud network. The project facilitated uptake of existing NeCTAR resources by the astronomy community by developing a customised astronomy interface, running user tutorials, and identifying small job use cases on OzSTAR that were suitable for transfer from HPC to the VM infrastructure.
This project has now concluded, but Swinburne will continue to make the project resources available throughout 2022, including access to the OpenStack VMs and the sstar compute nodes, user documentation, online training materials, and direct assistance if required.