Implementing Simcenter STAR-CCM+ CFD Simulations in an AWS HPC Environment

The CAD and simulation industry is no stranger to using High Performance Computing (HPC) environments to solve Computational Fluid Dynamics (CFD) challenges, and there has been steady growth in running compute- and graphics-intensive CAD and CAE workloads on HPC clusters in the cloud. The cloud is a natural fit for this kind of work because of its inherent advantages: near-unlimited scalability and a pay-as-you-go model. A demanding workload may need to scale from a few hundred to thousands of cores to run a single simulation solver job. Procuring and maintaining such infrastructure on-premises is not only cumbersome but also expensive, because solver jobs are not run all day, every day. In this blog, we provide a high-level overview of running on-demand Simcenter STAR-CCM+ benchmarks in an AWS HPC environment.

An HPC environment created with the AWS ParallelCluster CLI

The diagram shows the high-level architecture of the components involved in running a CFD workload on an HPC environment created by the AWS ParallelCluster CLI.
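An environment like this can be described in a single ParallelCluster (version 2.x) configuration file. The sketch below shows how the pieces discussed in this post fit together; the region, key pair, VPC/subnet IDs, and S3 bucket name are placeholders you would replace with your own values.

```ini
[aws]
aws_region_name = us-east-1

[global]
cluster_template = default

[cluster default]
key_name = my-keypair
base_os = centos7
scheduler = sge
master_instance_type = c5.xlarge
compute_instance_type = c5n.18xlarge
placement_group = DYNAMIC
enable_efa = compute
initial_queue_size = 0
max_queue_size = 16
dcv_settings = dcv
fsx_settings = fsx
vpc_settings = vpc

[dcv dcv]
enable = master

[fsx fsx]
shared_dir = /fsx
storage_capacity = 1200
import_path = s3://my-starccm-bucket

[vpc vpc]
vpc_id = vpc-xxxxxxxx
master_subnet_id = subnet-xxxxxxxx
```

With this file in place, `pcluster create mycluster` stands up the master node, the auto-scaling compute fleet, and the FSx for Lustre mount described below.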

  • FSx for Lustre: Amazon FSx for Lustre is a fully managed, high-performance file system for Linux workloads, offering sub-millisecond latencies, hundreds of gigabytes per second of throughput, and millions of IOPS. In our implementation, we store the STAR-CCM+ installation files and case data in an S3 bucket and link that bucket to an FSx for Lustre file system, which is mounted on the cluster as the /fsx partition. This makes the installation files and case data available quickly and easily on the master node.
  • EFA (Elastic Fabric Adapter): EFA is a network interface for Amazon EC2 that enables applications and workloads requiring high levels of inter-node communication via MPI (Message Passing Interface). We configured the compute cluster EC2 instances with EFA enabled and a dynamic cluster placement group, so that every compute-optimized C5n instance launched in the cluster is placed physically close to the others on AWS infrastructure. This lets EFA deliver the best and fastest inter-node communication, which matters because the nodes exchange data continuously while solver jobs are executing.
  • Master Node: The master node in our implementation is a compute-optimized c5.xlarge EC2 instance pre-configured with a NICE DCV desktop GUI. It also runs the SGE (Son of Grid Engine) job scheduler on CentOS 7. All of these components are pre-installed and ready to use when the master node is initialized.
  • Compute Node Cluster: The compute node cluster comprises c5n.18xlarge EC2 instances, which are auto-scaled based on the scheduler's pending jobs and their core-count requirements.
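To run a benchmark on this cluster, a solver job is submitted to SGE from the master node. The job file below is a sketch of what that can look like; the parallel environment name (`mpi`), the STAR-CCM+ installation path under /fsx, the license server address, and the case file name are all assumptions for illustration, not values from this setup.

```
#!/bin/bash
#$ -N starccm-bench
#$ -cwd
#$ -pe mpi 144            # e.g. 2 x c5n.18xlarge; PE name is hypothetical

# Build a machinefile from the host list SGE provides:
# one line per allocated slot.
awk '{for (i = 0; i < $2; i++) print $1}' "$PE_HOSTFILE" > machines

# Assumed installation path and license server for illustration.
STARCCM=/fsx/starccm/STAR-CCM+/star/bin/starccm+

"$STARCCM" -power -licpath 1999@flex-license-server \
  -np "$NSLOTS" -machinefile machines \
  -batch run /fsx/cases/benchmark_case.sim
```

Submitting this file with `qsub` triggers ParallelCluster's auto-scaling: the scheduler queues the job, the compute fleet grows to satisfy the requested slot count, and the job starts once the instances join the cluster.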
