
Solve a 10 Billion Degrees of Freedom Problem With OnScale


Anyone who has worked in engineering simulation knows the feeling. Your coffee is freshly brewed, you’re all set for a successful day of changing the world, and then you get one of these little messages from your favorite simulation tool:

“Solver out of Memory”

or perhaps worse still

“Remaining simulation time: 7,000 years”

We all get it. Finite Element Analysis (FEA) is extremely computationally intensive, and sometimes we come across a problem that is just a little too big to solve practically. From my perspective these typically fall into two areas:

  • The problem is too large to fit into my machine’s RAM. I need to convince my boss to buy me a new machine
  • The problem is going to take too long to solve to be of practical use. Even if I convince my boss to buy me a cutting-edge server, I’m sceptical about how much I could really reduce the solve time

As an engineer I want to focus on the design problem that I’m trying to solve, but I’ve frequently found myself spending hours trying to simplify my simulation enough to fit it into the hardware that I have available. It’s definitely made me miss deadlines and targets over the years.

So, ideally, we want to solve problems quickly, regardless of size. I call this the 10 billion degree of freedom (DoF) problem. It sounds like an insanely large number, but surprisingly it’s one that’s come up a number of times over the years in my work simulating ultrasonic transducers. When it does, it usually goes something like:

“We would love to simulate this, but it would be a 10 billion DoF problem… so that’s not going to happen.”
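To see why the numbers blow up so quickly, here is a back-of-envelope estimate. The frequency, domain size and mesh density below are illustrative assumptions rather than figures from any specific OnScale model, but they show how a wavelength-resolving mesh pushes node counts into the billions:

```python
# Back-of-envelope DoF estimate for a 3D wave-propagation model.
# All numbers below are illustrative assumptions, not a specific OnScale model.
c_water = 1480.0                            # speed of sound in water, m/s
f = 2.0e6                                   # assumed drive frequency, Hz
wavelength = c_water / f                    # ~0.74 mm

elements_per_wavelength = 15                # common rule of thumb for wave solvers
dx = wavelength / elements_per_wavelength   # ~49 micron element size

domain = (0.10, 0.05, 0.05)                 # assumed 10 cm x 5 cm x 5 cm region, in m
nodes = 1
for length in domain:
    nodes *= int(length / dx)

dofs_per_node = 3                           # three displacement components per node
print(f"{nodes * dofs_per_node / 1e9:.1f} billion DoFs")   # ~6 billion for this setup
```

Make the model a little bigger, raise the frequency, or add acoustic and electrical fields, and 10 billion DoFs arrives very quickly.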

So how can we solve such a large problem? Is it even possible?

Well, there’s more than one way, but at OnScale we make use of Cloud High Performance Computing (HPC) to do the heavy lifting. The recipe looks something like this:

  1. Run your simulations on the cloud, so you don’t need to make use of local hardware
  2. Develop a way of parallelizing simulations across a large number of cloud nodes – what we call Cloud HPC (there’s a toy sketch of the idea just after this list)
  3. Give users completely flexible access to the platform, so that they can make use of it whenever they need it, without any complex setup
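Point 2 is where most of the engineering effort goes. The snippet below is a toy sketch of the general idea (domain decomposition with halo exchange for an explicit time-domain update), written with mpi4py and a 1D placeholder stencil. It is not OnScale’s solver; it simply illustrates why explicit time stepping spreads so naturally across thousands of cores: each core only ever talks to its immediate neighbours.

```python
# Toy domain-decomposition sketch (mpi4py, 1D placeholder stencil).
# Illustrates the parallelization idea only -- this is not OnScale's solver.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

GLOBAL_CELLS = 1_000_000              # pretend global mesh
local_n = GLOBAL_CELLS // size        # each rank owns one contiguous slab
u = np.zeros(local_n + 2)             # local field plus one ghost cell per side
if rank == 0:
    u[local_n // 2] = 1.0             # arbitrary initial disturbance

for step in range(2000):              # explicit time stepping
    # Halo exchange: each rank swaps boundary values with its neighbours only.
    if rank > 0:
        u[0] = comm.sendrecv(u[1], dest=rank - 1, source=rank - 1)
    if rank < size - 1:
        u[-1] = comm.sendrecv(u[-2], dest=rank + 1, source=rank + 1)

    # Cheap local stencil update standing in for the real elastic/acoustic physics.
    u[1:-1] += 0.25 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
```

Launched with something like `mpirun -n 8 python sketch.py`, each rank’s communication stays small and local as the domain is split across more ranks, which is why explicit solvers can scale to thousands of cores.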

Sounds great, right? But is this kind of simulation actually useful?

Well, at the moment, we can use OnScale to accelerate mechanical and electro-mechanical time-domain simulations. We’re working on extending our software to encompass our full range of Multiphysics solvers.

I’ll give you some examples. We do a lot of work simulating ultrasonic flow meters, which are used to measure flow rates of liquids and gases through pipes – think oil and gas metering, or Formula 1 fuel flow sensors. These typically require very large, explicit time-domain simulations of both elastic and acoustic waves, which can often push the 1 billion DoF mark. We like to push boundaries here at OnScale, so we recently ran a simulation that was a bit larger and tried a 10 billion DoF problem, simulated over 2,000 timesteps.

[Figure: 10 billion degree of freedom (DoF) flow meter simulation]

We split the simulation across 4,096 cloud cores and set it running. We’d hoped this would be pretty fast, but even we were surprised by the results.

The full simulation ran in 8 minutes.

Intrigued, one of our engineers methodically tested how long the simulation would take on various numbers of cores. He came back with the following graph.

 

This graph shows the speed increase we get by assigning additional cores to the simulation.

What we saw was very close to linear speed-up vs. cores, making the system very scalable. It also means that we can run this problem around 300x faster than we used to on a typical 16-core desktop, before we could run OnScale on the cloud (ignoring the fact that the desktop would also require 464 GB of RAM).
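If you want to put numbers on ‘close to linear’, the usual metrics are speed-up relative to a baseline run and parallel efficiency. The timings in the sketch below are made-up placeholders chosen only to show the calculation; they are not the measured values behind the graph above.

```python
# Quantifying a scaling curve: speed-up and parallel efficiency.
# The wall-clock times here are invented placeholders, not measured OnScale data.
baseline_cores = 16
baseline_time_s = 40_000.0            # hypothetical time on a 16-core baseline

runs = {                              # cores -> hypothetical wall-clock seconds
    256: 2_600.0,
    1_024: 680.0,
    4_096: 170.0,
}

for cores, t in runs.items():
    speedup = baseline_time_s / t     # how much faster than the 16-core baseline
    ideal = cores / baseline_cores    # perfectly linear scaling would give this
    efficiency = speedup / ideal      # 1.0 means perfect scaling
    print(f"{cores:5d} cores: {speedup:6.1f}x speed-up, {efficiency:.0%} efficiency")
```

An efficiency that stays close to 1.0 as cores are added is what we mean by ‘very close to linear’.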

Encouraged by these results, our engineers wanted to push the boundaries a bit further. Another area where we often see very large problems is 5G RF filter design. Applications such as 3D Bulk Acoustic Wave (BAW) filter simulations often require hundreds of millions of DoFs and can potentially take weeks to run on local hardware. The active material is piezoelectric, so these 5G RF simulations require a coupled electro-mechanical solution, which is computationally expensive. Once again, we tackled this with our time-domain solvers, running a 1 billion DoF BAW filter problem as shown below.

[Figure: 1 billion DoF BAW filter simulation]

Our engineers took bets on how long it would take OnScale to run this simulation on the cloud using 4,096 cores. It took 6 hours 48 minutes. Not the ‘less time than it takes to make a cup of tea’ result we saw with the flow meter, but still much better than any legacy simulation tool currently available.
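For readers wondering where the extra cost in these electro-mechanical models comes from: in the standard stress-charge form, the piezoelectric constitutive relations couple the mechanical and electrical fields, so every time step has to resolve both consistently.

```latex
% Stress-charge form of the piezoelectric constitutive relations
\mathbf{T} = c^{E}\,\mathbf{S} - e^{\mathsf{T}}\,\mathbf{E}
\qquad
\mathbf{D} = e\,\mathbf{S} + \varepsilon^{S}\,\mathbf{E}
```

Here T is the stress, S the strain, E the electric field, D the electric displacement, c^E the stiffness at constant electric field, e the piezoelectric coupling matrix and ε^S the permittivity at constant strain. The electric potential adds unknowns on top of the mechanical displacements, which is a large part of why billion-DoF BAW models are so demanding.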

We are often asked: well, what is the limit? Can you run infinitely large simulations? I think we’ve yet to explore that. Is solving a 10 billion DoF problem a panacea for all of my simulation woes? Definitely not! But being able to run large problems quickly does remove one of the major roadblocks in engineering simulation.

I’ll not suggest running a billion DoF simulation as your initial OnScale model, but I’d encourage anyone who’s interested to try some of our example models here!

Thanks for reading!




Andrew Tweedie, UK Director at OnScale
Andrew Tweedie is our Founder and UK Director at OnScale. He is a multi-disciplined engineer with 15 years of experience in computer-aided engineering (CAE). Andrew supports EU growth and manages OnScale’s office in Glasgow, Scotland. He holds an Eng.D in Non-Destructive Testing from the University of Strathclyde.

Related Posts

What is the Piezoelectric Effect?

Let us first provide a very simple definition to get things clear. Certain materials tend to accumulate electric charges when a mechanical stress is applied to them. The piezoelectric effect simply describes the fact that a pressure applied to a piezoelectric material will generate a voltage.

How to Optimize SAW Filter Cut Angle in OnScale

In this article we discuss how to pole piezoelectric materials in OnScale and walk through an example of how to rotate the material properties of Lithium Tantalate for the Y-cut angle in an LT-SAW.

A History of the Piezoelectric Effect

In 1880 brothers Pierre Curie and Jacques Curie were working as laboratory assistants at the Faculty of Sciences of Paris. They discovered that applying pressure to crystals such as quartz, tourmaline and Rochelle salt generates electrical charges on the surface of these materials. This conversion of mechanical energy into electrical energy is called the direct piezoelectric effect. “Piezo” is derived from the Greek for “to press”.