
Large Nastran jobs

Status: Not open for further replies.

josefl9 (Mechanical)
Apr 19, 2021
Hello,

I am trying to run a large Nastran job (16,272,552 DOFs). My computer has 32 GB RAM and 900 GB of free HDD space. When executing, the scratch file created is quite large (143 GB) and for some reason the solution just freezes without warnings. I have found documentation about the memory, maxmemory, etc. keywords, but I can't quite figure out how to use them based on my PC.

I have also noticed some settings that are made specifically for cases of large scratch files, such as the scratch=mini setting. Do these "simplifications" reduce the accuracy of the results?

Thanks in advance

Josef K.
 

Hi Josef,
scratch=mini does not impact accuracy. If you wish to preserve the database to do data-recovery-only restarts, you can define scr=mini. If you need to do full restarts, like a frequency response after a modes analysis, you will need scr=no. If you don't intend to do any restarts, scr=yes is always preferred: not only does it remove the scratch data at the end of the run, but as the job is running, any data that is no longer needed for downstream procedures is deleted from the scratch files, so the hiwater disk usage is lower than for a job that uses scr=no.
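For reference, these scratch options are just submission keywords. Assuming the standard submit command for the 2020 release is named nast20200 (matching the release naming; adjust for your install, and note myjob.bdf is a placeholder file name), the three cases would look like:

```
nast20200 myjob.bdf scr=yes    $ no restarts planned: scratch cleaned as the job runs
nast20200 myjob.bdf scr=mini   $ minimal database kept, for data-recovery-only restarts
nast20200 myjob.bdf scr=no     $ full database kept, for full restarts
```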

Tell me which Nastran you are using and which solution sequence you are running and maybe I can give you the low-down on memory settings.

DG
 
Hello,

I am using MSC Nastran 2020. I don't really get what you mean by solution sequence, but what I am doing is creating flex bodies (MNF) for MSC Adams. It's basically an eigenmode solution (SOL 103) with some extra options. I don't really need any reruns or anything; I just need to learn how to allow the solution of large problems on a moderately powerful computer.

For context, I just want to mention that I did manage to run the job successfully, but only on a stronger computer (128 GB RAM). So I'd like to know if it is possible to somewhat ease the job on "smaller" PCs.

Thanks in advance!

Josef K.
 
Hi Josef,

Great, I can help you. The solution sequence you are using is SOL 103.

There are two parts to my answer: one with general remarks about memory that apply to all solution sequences (SOL 101, SOL 103, SOL 108, ...), and one specific to normal modes (SOL 103). I will answer the SOL 103 specific issues in a separate post.

In the following, I will assume you are running MSC Nastran on Windows, not Linux.

First, memory. MSC Nastran requires you to define the maximum amount of memory it is allowed to grab for a job. It may not use all the memory, but it has to know how much it is allowed to take right at the start; it will check if it can allocate this amount of memory and insult you if there is not enough.

When you install MSC Nastran, you get asked what you want to define for the default run parameters, and one of the questions is "do you want to run with mem=max?". If you accepted this, then each time you run MSC Nastran it will request the maximum amount of memory it is allowed to use, which is defined by another parameter (more on this in a moment). If you answered no and set some constant value for mem=, then that value will be used for every job you run unless you define another setting for mem= at runtime (using the command line keyword mem=xxx).

If you are like me, and don't remember your response to some weird question when you installed MSC Nastran months ago, you can see what you did by looking at the system wide configuration file. Now because I don't know where you have installed MSC Nastran, I am going to give you a relative path description as to where this file can be found. Let <install.dir> be the directory in which you installed MSC Nastran. There will be a subdirectory under <install.dir> called <install.dir>/conf - in this directory you will see a file called NAST20200.rcf. If you have administration privileges on the machine, or if you installed MSC Nastran as a user, you can edit this text file.

Look for the line in NAST20200.rcf that states:

memory=max

or perhaps

mem=max

If the line states something else, like

memory=200M

or some other constant, then that is how much memory MSC Nastran is allowed to use. You can change this, and I recommend you use

memory=max

Now, even though you set memory=max, the maximum amount of memory MSC Nastran is allowed to use is set by another parameter called memorymax. By default, memorymax is set to

memorymax=0.5*physical

and physical is the amount of RAM installed in the machine - in your case, physical=32 GB. So, memorymax will have a ceiling of memorymax=0.5*32GB = 16GB. There is a good reason for this. Most of the time, MSC Nastran is being run on a machine shared by other processes which also require memory. If you are the only user of the computer, you can set memorymax=0.85*physical or even memorymax=0.95*physical so the ceiling is higher. I don't recommend using memorymax=physical or Windows will go mad and run out of memory. You can set this in the file NAST20200.rcf with a new line or on the command line as you wish.

Now when you use memory=max, MSC Nastran will be allowed to allocate more memory.

As if things were not complicated enough, there is another aspect to consider. When you run MSC Nastran with memory=max, it does its best to figure out how much memory it is going to need for solution of the equations, but it also sets aside some memory for a disk cache called buffer pooling, or BPOOL for short. In my experience, it does not do a great job of estimating the balance between memory for equation solution and memory for BPOOL, especially for SOL 103. In this case, also define bpool=4Gb or something reasonable like this, but only for your SOL 103 jobs.

So to summarize:

In NAST20200.rcf, set

memorymax=0.95*physical (or whatever you decide)
memory=max

On the command line of your SOL 103 job, set

bpool=4Gb (or whatever you decide)

With these settings, your job would have 26.4 GB of memory for equation solution and 4 GB for BPOOL, for a total of 30.4 GB of memory, leaving some dust for the operating system.
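As a sanity check on the arithmetic, here is the same budget as a tiny Python sketch (numbers are illustrative for a 32 GB machine; nothing here is Nastran syntax):

```python
# Illustrative memory budget for the settings discussed above.
physical = 32.0                  # GB of installed RAM
memorymax = 0.95 * physical      # ceiling Nastran may allocate with memory=max
bpool = 4.0                      # GB set aside for the BPOOL disk cache
solver = memorymax - bpool       # GB left for equation solution

print(f"memorymax = {memorymax:.1f} GB")  # -> memorymax = 30.4 GB
print(f"solver    = {solver:.1f} GB")     # -> solver    = 26.4 GB
```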
 
For the second part of the answer about SOL 103.

Normal modes solution on a 16+ million DOF problem is going to take some resources. In addition, you declared you are creating a flexbody for Adams, which means a static condensation is required (costly in CPU and disk space), followed by a fixed-boundary normal modes analysis (with the ASET points you defined fixed) for as many modes as you requested. This creates a Craig-Bampton component of the FE model, but Adams needs a pure modal model (it doesn't know what to do with static and dynamic degrees of freedom). So the double whammy with MNF generation is that another eigenvalue analysis is needed to re-orthonormalize the CB model into the pure modal model needed to create the MNF file.

The memory changes mentioned in my first post will go some way to making things better for a large normal modes run, but for large components or large numbers of modes (or both) you should consider using an automatic substructuring technique called ACMS (an acronym for Automated Component Mode Synthesis). It's a way of letting MSC Nastran break up the 16M DOF problem into a series of smaller problems. You get the same modal basis (to within a close approximation), and therefore the same MNF file, at the end; the job just runs a LOT faster, consumes less CPU and uses less disk space. The larger the model/number of modes, the better the gain. I have seen 20x faster and more. You will need a license to use ACMS, but it's a one-liner in the input file. Just before the SOL 103 line in the input file, add this line:

DOMAINSOLVER ACMS (UPFACT=5.0)
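In context, a sketch of the top of the deck would look like this (your existing case control and bulk data entries stay as they are):

```
$ Executive control: ACMS switch goes just before the SOL line
DOMAINSOLVER ACMS (UPFACT=5.0)
SOL 103
CEND
$ ... case control and bulk data follow as before ...
```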

An additional tip: if you are not interested in looking at the eigenvectors from the MSC Nastran run that generates the MNF, and you don't need modal stresses to look at stress results in the Adams output, then remove all data recovery requests (so no DISP=ALL or STRESS=ALL) and don't generate any post-processing files (remove PARAM,POST,xx and MDLPRM,HDF5,xx). If you see this in the input file:

PARAM,PRTMAXIM,YES

Remove this line. Some pre-processors add it because it gives a one-line report of the maximum displacements/SPC forces in the job, but it doesn't tell you where these occur. I don't know anyone who uses these data, but they take significant resources to compute.
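To make that concrete, these are the kinds of entries to look for and delete before the MNF run (the entry names come from the posts above; the xx values are whatever your deck currently has):

```
DISP = ALL          $ data recovery request - remove
STRESS = ALL        $ data recovery request - remove
PARAM,POST,xx       $ post-processing file request - remove
MDLPRM,HDF5,xx      $ HDF5 output request - remove
PARAM,PRTMAXIM,YES  $ max displacement/SPC force summary - remove
```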
 
Thanks a lot! You have been most helpful.

Kindest regards,

Josef K.
 
dmapguru said:
it also sets aside some memory into a disk cache called buffer pooling or BPOOL for short. In my experience, it does not do a great job of estimating the balance between memory for equation solution and memory for BPOOL especially for SOL 103. In this case, also define bpool=4Gb or something reasonable like this, but only for your SOL 103 jobs.

My config file has "buffpool = 20.0X" (similar for smem). What does that result in relative to my physical memory or max memory allocation? And what is your rule of thumb for "reasonable" here? Was 4GB a general recommendation, or based on the previous user's total memory? FYI I'm running NX Nastran 2021, but I don't think that should drastically affect anything you're recommending here.

I'd also be curious to know the appropriate syntax and options for CPU control, and what you would recommend. For example, I was under the impression that the newer versions of Nastran don't need to have parallel processing specified and will do it automatically, but perhaps that was misinformation.

Thanks for the detailed explanation on all this! Very useful and concise. If you have a good reference you could link to that summarizes all the Nastran command line inputs and options, I would be very interested to take a look.
 