An international team of researchers from Ludwig-Maximilians-Universität Munich (LMU) and the Technical University of Munich (TUM) is using powerful computer simulations to better understand rare, extremely dangerous seismic phenomena.

The scientists used the 2004 Sumatra-Andaman earthquake as the starting point for their research. The event, one of the strongest earthquakes ever recorded, ripped a 1,500-kilometer tear in the ocean floor off the coast of the Indonesian island of Sumatra.

It lasted between eight and ten minutes, lifted the ocean floor several meters, and triggered a tsunami with 30-meter waves that destroyed whole communities.

Earth scientists were unable to predict it: despite major advances in earthquake monitoring and warning systems, relatively little data exists on events of this scale.

Performing the simulation

The goal of the project was to run "coupled" simulations of both earthquakes and the tsunamis they trigger, according to the announcement.

Germany’s SuperMUC supercomputer, which resides at the Leibniz Supercomputing Centre, was used for this experiment.

Typically, performing earthquake simulations requires using a computational grid to divide the simulation into many small pieces. Researchers then solve specific equations for different aspects of the simulation, such as the seismic shaking generated or the displacement of the ocean floor.

A finer grid yields a more accurate simulation but demands far more computing power, and the complex geometry of an earthquake's fault system complicates the computation further.
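A back-of-the-envelope calculation shows why refinement is so expensive. Assuming a 3-D mesh and a CFL-limited explicit time stepper (a common setup, though not necessarily the team's exact scheme), halving the cell size does much more than double the work:

```python
# Rough cost of halving the cell size in an assumed 3-D explicit scheme
refine = 2                  # cut each cell edge in half
cells = refine ** 3         # 8x more cells in three dimensions
steps = refine              # the CFL condition forces a 2x smaller time step
total = cells * steps       # ~16x more work overall
print(total)
```

Doubling the resolution thus costs roughly sixteen times the compute, which is why uniformly refining a 1,500-kilometer domain quickly becomes infeasible.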

"Modeling earthquakes is a multiscale problem in both space and time," said Dr. Alice Gabriel, the lead researcher from the LMU, in a statement. "Reality is complex, meaning that incorporating the observed complexity of earthquake sources invariably involves the use of numerical methods, highly efficient simulation software, and, of course, high-performance computing (HPC). Only by exploiting HPC can we create models that can both resolve the dynamic stress release and ruptures happening with an earthquake while also simulating seafloor displacement over thousands of kilometers."

The team sidestepped these issues with a method called "local time stepping." This technique essentially let them "slow down" the simulation by performing more time steps in the areas that needed more spatial detail, while sections that needed less detail could take far fewer, much larger time steps.
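The idea can be sketched with a toy 1-D advection problem. This is an illustrative stand-in, not the team's simulation code: a "fine" mesh region takes many small, stability-limited steps for every single step of a "coarse" region, so the coarse cells are spared unnecessary work.

```python
# Toy illustration of local time stepping on a 1-D advection equation.
# Not the researchers' software -- just the core scheduling idea.
import numpy as np

def upwind_step(u, dx, dt, c=1.0):
    """One upwind step of du/dt + c*du/dx = 0 with periodic boundaries."""
    return u - c * dt / dx * (u - np.roll(u, 1))

L = 1.0                                  # domain length
n_coarse, n_fine = 50, 500               # fine cells are 10x smaller
dx_coarse, dx_fine = L / n_coarse, L / n_fine

# CFL stability: dt <= dx / c, so coarse cells tolerate a 10x larger step
dt_coarse = 0.5 * dx_coarse
substeps = n_fine // n_coarse            # fine region takes 10 sub-steps

u_coarse = np.sin(2 * np.pi * np.arange(n_coarse) * dx_coarse)
u_fine = np.sin(2 * np.pi * np.arange(n_fine) * dx_fine)

for _ in range(20):                      # 20 coarse steps
    u_coarse = upwind_step(u_coarse, dx_coarse, dt_coarse)
    for _ in range(substeps):            # the fine region "slows down",
        u_fine = upwind_step(u_fine, dx_fine, dt_coarse / substeps)

# With a single global time step, every cell would be forced onto the
# fine region's tiny dt; local time stepping avoids that ~10x overhead
# on the coarse region.
```

In a real earthquake mesh the fine cells cluster around the fault, where fracture mechanics must be resolved, while the vast ocean basin can advance in large strides.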

Ultimately, this method helped the team simulate 1,500 kilometers of non-linear fracture mechanics, coupled to seismic waves traveling toward India and Thailand, over a little more than 8 minutes of the Sumatra-Andaman earthquake.

"Our general motivation is to better understand the entire process of why some earthquakes and resulting tsunamis are so much bigger than others," said TUM Professor Dr. Michael Bader, in a statement. "Sometimes we see relatively small tsunamis when earthquakes are large, or surprisingly large tsunamis connected with relatively small earthquakes. Simulation is one of the tools to get insight into these events."

The method produced a 13-fold improvement in time to solution.

Further advancement

The results suggest that extremely large-scale runs like this one can give the team much deeper insight into its research questions. As next-generation computers mature, such simulations could eventually run in urgent, real-time scenarios, helping to forecast hazards such as likely aftershock regions.

"We have been doing one individual simulation, trying to accurately guess the starting configuration, such as the initial stresses and forces, but all of these are still uncertain," added Bader. "So we would like to run our simulation with many different settings to see how slight changes in the fault system or other factors would impact the study. These would be larger parameter studies, which is another layer of performance that a computer would need to provide."

The paper summarizing these findings was nominated for the Best Paper Award at SC17, the International Conference for High Performance Computing, Networking, Storage, and Analysis, held in Denver, Colorado.