Eng-Tips is the largest engineering community on the Internet

Intelligent Work Forums for Engineering Professionals

Abaqus standard/direct and MPI

Status
Not open for further replies.

gphilos

Bioengineer
May 12, 2014
Hello,

As an IT engineer I am trying to set up Abaqus 6.13 to run on a cluster using MPI.
I define mp_host_list with the list of machines that I want to participate
in the computation. However, I see that the domain decomposition is done based
on the processes (hosts) and not on the total number of CPUs. Let's say
we have two hosts: hostA (2 CPUs) and hostB (4 CPUs). If I run Abaqus Standard with the direct solver on a model
with X elements, specifying 6 CPUs, then the .dat file reports that the decomposition
is done as follows: hostA: X/2 elements, hostB: X/2 elements. I would expect the
decomposition to be based on the number of CPUs, so that hostA gets X/3 elements
and hostB gets 2X/3 elements. With the iterative solver things are different
and the decomposition is done "correctly"; the iterative solver utilizes processes instead of
threads. However, I believe that the iterative solver is only beneficial in some cases.
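For context, the relevant part of my abaqus_v6.env looks roughly like this (hostA/hostB and the CPU counts are just placeholders matching the example above):

```python
# abaqus_v6.env -- sketch of the MPI-related settings (hostnames are placeholders)
mp_mode = MPI
mp_host_list = [['hostA', 2], ['hostB', 4]]
```

I then launch the job with something along the lines of `abaqus job=model cpus=6 interactive`.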

Any help would be appreciated.

Thank you,
George
