Rivers

How a distributed federated HPC supported running embarrassingly parallel hydrological simulations for the KNMI’23 climate scenarios

For the new Dutch Climate Scenarios, hydrological simulations spanning a 50,000-year period were run for 8 different climate scenarios.

Each 50,000-year scenario is split into 1,667 parallel runs of 30 years each, so we needed a compute infrastructure that allows many simulations to run in parallel.


Background

In October 2023, KNMI presented the new climate scenarios that are representative for the Netherlands. In these new climate scenarios – which are updated approximately every 9 years – special attention was given to the three major rivers crossing the Dutch border: the Vecht, the Meuse and the Rhine. These rivers play a major role in Dutch water safety and security, so having climate scenarios that are accurate for their upstream areas is vital. In this project, we simulate 8 different 50,000-year scenarios (1 historical and 7 future scenarios) for two river basins (the Rhine and the Meuse) to understand how extreme discharge events change as a result of a changing climate.

To understand how extreme discharge events change, we used a hydrological model to simulate the hydrological response of these basins. For both basins, we used the wflow_sbm model (van Verseveld et al., 2022). Running these long simulations gives us the opportunity to simulate events that are more extreme than any that have been observed.

Challenge

In the past years, we moved from a lumped model to a computationally intensive semi-distributed model, wflow. Running 50,000-year simulations with Wflow would require substantial computation time, while the project deadline required us to run the simulations in only a couple of months. Even splitting each 50,000-year simulation into 1,667 blocks of 30 years, to allow for parallel computation, would still require an infrastructure able to continuously run many parallel simulations. Downtime of this infrastructure had to be kept to a minimum, given our tight schedule and deadlines.
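
To make the decomposition concrete, here is a minimal Python sketch that enumerates the 30-year blocks covering one 50,000-year scenario. The function name and start year are illustrative assumptions; the actual workflow used its own tooling.

```python
# Minimal sketch (hypothetical names): enumerate the independent
# 30-year blocks that together cover one 50,000-year scenario.

BLOCK_YEARS = 30
TOTAL_YEARS = 50_000

def simulation_blocks(start_year: int = 1):
    """Yield (block_id, first_year, last_year) for each 30-year run."""
    n_blocks = -(-TOTAL_YEARS // BLOCK_YEARS)  # ceiling division -> 1,667
    for block_id in range(n_blocks):
        first = start_year + block_id * BLOCK_YEARS
        last = min(first + BLOCK_YEARS - 1, start_year + TOTAL_YEARS - 1)
        yield block_id, first, last

if __name__ == "__main__":
    blocks = list(simulation_blocks())
    print(len(blocks))  # 1667 independent runs per scenario
```

Because the blocks do not depend on each other, each run can be scheduled independently on any cluster, which is what makes the workload embarrassingly parallel.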

At one point, one of the HPC providers experienced issues with their infrastructure, which resulted in many compute nodes being down. Thanks to the distributed federated infrastructure, we were able to quickly scale up the runs at another HPC provider until the issues were resolved. This ease of switching between HPC sites in the C-SCALE federation ensured that we were able to finish the simulations before the project deadline.
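
To illustrate how cheaply the federation lets runs move between sites, here is a hedged Python sketch of a naive dispatcher. The site names match the clusters mentioned below; the site_is_up and submit callables are assumptions standing in for the real orchestration layer, not the actual C-SCALE interface.

```python
# Hedged sketch: spread pending 30-year blocks over whichever federated
# HPC sites are currently up. `site_is_up` and `submit` are assumed
# callables, not the real C-SCALE API.
from typing import Callable, Iterable

SITES = ["spider", "vsc"]  # the SURF and EODC clusters used in the project

def dispatch(blocks: Iterable[int],
             site_is_up: Callable[[str], bool],
             submit: Callable[[str, int], None]) -> None:
    """Round-robin each block over the available sites, so an outage at
    one provider simply shifts new submissions to the other."""
    for i, block in enumerate(blocks):
        available = [site for site in SITES if site_is_up(site)]
        if not available:
            raise RuntimeError("no HPC site available; retry later")
        submit(available[i % len(available)], block)
```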

Support from C-SCALE

This collaboration built on the earlier, successful development and implementation of the “Automated monthly river forecasts using Wflow” workflow solution. The support from SURF and EODC in extending this workflow to meet our KNMI requirements was highly appreciated.

C-SCALE services used

We were able to use the C-SCALE workflow solution “Automated monthly river forecasts using Wflow” as a template for our work, which allowed us to quickly set up our workflow on top of FedEarthData.

Support from the C-SCALE providers (SURF and EODC) contributed greatly to quickly deploying the workflow on the different clusters (Spider and VSC, respectively).

Testimony by Joost Buitink (Deltares)

Joost Buitink (hydrologist at Deltares) is closely involved in the climate scenario project and was responsible for performing the simulations. He explains that this collaboration between Deltares, SURF and EODC was vital for the project: “To provide the most accurate discharge projections, we wanted to perform the hydrological simulations with the new, but computationally more demanding, wflow_sbm models. However, the compute resources at Deltares were not sufficient to ensure we could meet the deadline. This collaboration ensured that we had a reliable and scalable compute infrastructure to run our 50,000 years of hydrological simulations for the new Dutch Climate Scenarios. C-SCALE provided us with a distributed federated HPC solution that allowed us to run thousands of parallel simulations across different clusters, without worrying about the availability or performance of each cluster. C-SCALE also helped us to quickly set up our workflow using their existing template for Wflow, which saved us a lot of time and effort.

“The existing Wflow C-SCALE workflow solution, together with excellent support from the C-SCALE compute providers, helped us greatly in getting a workflow ready to orchestrate all the simulations. Especially during the time when the HPC infrastructure of one of the providers was down, we were able to quickly shift some resources to the other provider. This way the downtime had hardly any effect on the timeline of the project. We are very happy with the results of our collaboration with C-SCALE, as we were able to complete our simulations before the deadline and generate valuable insights for the new climate scenarios. We would like to express our huge thanks to EODC and SURF for making this possible.”

More information and relevant publications

van Verseveld, W. J., Weerts, A. H., Visser, M., Buitink, J., Imhoff, R. O., Boisgontier, H., Bouaziz, L., Eilander, D., Hegnauer, M., ten Velden, C., and Russell, B.: Wflow_sbm v0.6.1, a spatially distributed hydrologic model: from global data to local applications, Geosci. Model Dev. Discuss. [preprint], https://doi.org/10.5194/gmd-2022-182, in review, 2022.

The report with the conclusions based on the simulations will be published at the beginning of 2024.
