donaldhume
Bioengineer
- May 30, 2015
Hi all,
I'm running a strain model with millions of elements that takes several hours to complete. Our group has recently started running simulation work on GPU clusters, and I thought I would try to take advantage of the new hardware for some of these ABAQUS/Implicit jobs. The results in R2018x have been quite perplexing. I've included the job time summary data for two cases below, and I was hoping someone could either point me toward why I'm seeing this discrepancy, or toward literature on best practices for setting up jobs to take advantage of a GPU+CPU combination. I was under the impression that the job using the GPU would run significantly faster.
I do monitor the GPU using nvidia-smi, and it is most certainly being used for parts of the simulation.
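(For reference, I'm just polling the card from another shell with something along the lines of:
watch -n 5 nvidia-smi
so I can see when the device shows activity and memory use during the run.)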
job=x cpus=16 double=both int
JOB TIME SUMMARY
USER TIME (SEC) = 2.24008E+05
SYSTEM TIME (SEC) = 12359.
TOTAL CPU TIME (SEC) = 2.36367E+05
WALLCLOCK TIME (SEC) = 20171
job=x cpus=16 gpus=1 double=both int
JOB TIME SUMMARY
USER TIME (SEC) = 1.99401E+05
SYSTEM TIME (SEC) = 18010.
TOTAL CPU TIME (SEC) = 2.17411E+05
WALLCLOCK TIME (SEC) = 39364
Both cases were run on the same box. As far as I can tell, the job with the added GPU takes almost twice as long to run based on the wall clock time. It is possible I am misreading this output, so I would appreciate any insight the community can lend.
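Spelling out the numbers from the two summaries above: the CPU-only job finished in 20171 s of wall clock time, while the CPU+GPU job took 39364 s, a ratio of roughly 39364 / 20171 ≈ 1.95, even though the total CPU time actually dropped slightly (2.36367E+05 s vs. 2.17411E+05 s).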