Hi everyone,
I'm trying to run an Abaqus/Explicit simulation on a cluster via bash. Regardless of how much memory I allocate (I've tried up to 60 GB), it isn't enough and I get the following error message:
TERM_MEMLIMIT: job killed after reaching LSF memory usage limit.
Exited with exit code 1.
The error appears during preprocessing, after the STABLE TIME INCREMENT INFORMATION section is written to the *.sta file and before the actual calculations even start. On my laptop (16 GB of RAM), the same script runs without any problems.
Since I'm running a Python script that uses CAE internal functions, I can't use the memory="XX gb" command-line option. Also, passing memory=XXX, memoryUnits=MEGA_BYTES when creating the job doesn't seem to have any effect.
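
For reference, here is a minimal sketch of the kind of job creation I mean (the model and job names are placeholders, not my actual script); the memory and memoryUnits arguments are the ones that don't seem to have any effect on the cluster:

from abaqus import mdb
from abaqusConstants import MEGA_BYTES

# Create the job from the CAE model and request an explicit memory
# amount instead of the default percentage-based setting.
job = mdb.Job(name='ExplicitJob', model='Model-1',
              memory=60000, memoryUnits=MEGA_BYTES)

# Submit and block until the analysis finishes.
job.submit()
job.waitForCompletion()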
Thanks very much!