
How much does using Virtual Memory (Page files) decrease performance?

Status: Not open for further replies.

PWScottIV (Mechanical)
So I have 16GB of physical memory, but the analysis I need to run has about 8 million quadratic hybrid tetrahedral elements, which is requiring about 43GB of space right now. The value peaked at about 70GB, when I think Abaqus was building the B matrix. I have two dual-core 3.1GHz AMD Opteron processors running on Windows XP Pro x64. However, now that I'm using virtual memory, only one core is pegged (25% of total CPU), and it's being used by a program called "standard.exe" (part of Abaqus).

I submitted the job, which is supposed to give me only basic stress and displacement information, on 11/7/09 at 6:22AM (about 84 hours ago) and it has still not finished. Earlier, I ran an analysis that had 2 million elements; it used about 18GB of memory and took about 2 hours to complete using all 4 processors.

Right now, under the "Processes" tab of Task Manager, standard.exe is only using 200k of memory and is maxing out one CPU only. However, under the Performance tab, it says that 68.2GB are being used. What's really odd is that it appears Abaqus isn't using any of my physical memory anymore, only virtual memory. I have nine 15,000rpm Seagate Cheetah hard drives that I split the page files onto, hoping that would improve my performance. I wish I had the $4000 it would cost to upgrade to 64GB of RAM, but I don't...

So my question is, is this thing ever going to finish? This week, this month? Should I just give up?

Is there any way to tell how long it might take by knowing how many elements there are?

How much slower should this run using the type of hard drives I have instead of physical memory?

Does it help or hinder me to split the page files onto different drives?

Is there anything else, besides reducing the number of elements, that will make this work?

Thanks in advance for the help!
 

Check out the read/write speeds for the HD and RAM; that should give a good idea of the slowdown. That will definitely be the weak link compared to the CPU.

Is this a linear analysis?
Can you use submodeling?
Splitting the page file across drives should help, especially getting it off of C:.

What on earth are you modeling that you need 8mil elements?
You could send this to a computing farm. Check with your local Abaqus office.

If you are unfortunately on Vista, it has a feature (ReadyBoost) that can use USB flash drives as extra memory. Grab all your friends' flash drives and plug them in for some performance boost.

I hope this helps.

Rob Stupplebeen
 
If standard.exe is pegged at 25%, but the memory usage listed under the "Processes" tab of Windows Task Manager is only 220k, does that mean that it's not even crunching numbers anymore? Under the Applications tab, my Page File Usage shows 68.3GB is still allocated.

Should I just kill the process or give it a couple more weeks to complete?
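
A rough way to double-check outside Task Manager whether standard.exe is still actually doing anything is to watch its CPU time and disk I/O counters for a minute. This is only a sketch and assumes a Python install with the third-party psutil package on the machine; the process name comes from the posts above.

# Sketch: watch standard.exe's CPU time and disk I/O to see whether it is still working.
# Assumes Python with the third-party psutil package is installed on the machine.
import time
import psutil

def snapshot(name="standard.exe"):
    """Return (cpu_seconds, bytes_read, bytes_written) for the first matching process."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == name:
            cpu = sum(proc.cpu_times()[:2])      # user + system CPU seconds (cumulative)
            io = proc.io_counters()              # cumulative disk I/O counters
            return cpu, io.read_bytes, io.write_bytes
    return None

before = snapshot()
time.sleep(60)                                   # wait a minute between samples
after = snapshot()

if before is None or after is None:
    print("standard.exe not found")
else:
    print("CPU seconds used in the last minute:", after[0] - before[0])
    print("MB read in the last minute:", (after[1] - before[1]) / 1e6)
    print("MB written in the last minute:", (after[2] - before[2]) / 1e6)

If all three deltas come back near zero over a full minute, the job is probably stuck rather than just paging slowly.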
 
Check your *.sta file to see if any iterations have completed. I would start with a much coarser mesh to work out any bugs, and begin your mesh sensitivity study with 1 million elements or so.
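
If it helps, here is a quick sketch of printing the tail of a job's .sta file to see the last increments that were written. The path is just an example; point it at your own job directory.

# Sketch: show the last lines of an Abaqus .sta status file.
# The path below is hypothetical -- use your own job name and directory.
sta_path = r"C:\Temp\myjob.sta"

with open(sta_path, "r") as f:
    lines = f.readlines()

# Each completed increment normally shows up as one summary row near the end.
for line in lines[-10:]:
    print(line.rstrip())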

First find out the type of RAM you have; then you should be able to find the specs. I don't know where off the top of my head.

I usually don't trust task manager very much.
Do you have cpus=4 flagged? The more processors you use, the more RAM is required, so since you are already running out of RAM, the extra CPUs might actually slow you down.
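
For reference, the CPU count and a memory cap can also be set per job in the environment file instead of on the command line. The abaqus_v6.env file is read as plain Python; the parameter names below are what I remember from the 6.x documentation, so verify them against your release.

# Sketch of an abaqus_v6.env placed in the job directory (Abaqus reads it as Python).
# Parameter names assumed from the 6.x docs -- older releases used pre_memory/standard_memory instead.
cpus = 1                 # drop back to one CPU while the job is paging heavily
memory = "14 gb"         # keep Abaqus/Standard below the 16GB of physical RAM
scratch = "D:\\scratch"  # hypothetical path: put scratch files on a fast, lightly used drive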

Rob Stupplebeen
 
OK, so it's weird, but I don't have a .sta file for this job (I do for older jobs that I've completed, though). I have inp, com, log, 023, ipm, lck, 023.1, 023.2, 023.3, cid, dat, msg, sim, mdl, stt, odb, prt, and rpy. Would any of those help? The stt file is over 21GB. Also, I don't know if it matters, but none of these files have been written to since about 24 hours after I submitted the job. I did successfully open the *.odb file, but when I tried to display the results, there weren't any (no big surprise if Abaqus wasn't finished yet).

I already ran the same model with a little over 1 million elements before this one, and I was hoping to get this 8 million element model working to prove convergence and increase the quality of the mesh at key locations. Should I give up and try something more reasonable, like 4 million elements?

And you're saying that I should use only one processor if I know I'm going to be using virtual memory, since fewer CPUs require less memory?

Thanks for the help!
 
So here are the specs for my RAM and hard drives:
RAM:
Standard Name: DDR2-667
Data Transfers Per Second: 667 million
Peak Transfer Rate: 5333 MB/s
Hard Drives:
Model: Seagate Cheetah 15K.5
Interface: Ultra320 SCSI
Data I/O Transfers Per Second: 2150
Peak Transfer Rate: 107 MB/s


Therefore, my hard drives' peak transfer rate is about 50 times slower, and they handle roughly 300,000 times fewer I/O transfers per second. Not so good.
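
For anyone checking the arithmetic, the ratios come straight from the quoted specs (peak figures only):

# Back-of-envelope ratios from the specs quoted above (peak figures only).
ram_peak_mb_s = 5333.0        # DDR2-667 peak transfer rate, MB/s
hdd_peak_mb_s = 107.0         # Cheetah 15K.5 peak transfer rate, MB/s
ram_transfers_per_s = 667e6   # DDR2-667 data transfers per second
hdd_io_per_s = 2150.0         # drive I/O transfers per second

print("Bandwidth ratio (RAM/HDD):     %.0fx" % (ram_peak_mb_s / hdd_peak_mb_s))       # ~50x
print("Transfer-rate ratio (RAM/HDD): %.0fx" % (ram_transfers_per_s / hdd_io_per_s))  # ~310,000x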
 
It looks like you will be gray or grayer before you get results with that model on your setup.

1 million is already a very large model for most applications.

Could you try adaptive remeshing starting from 200K elements and go from there?
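
Not adaptive remeshing as such, but for the coarse-to-fine sensitivity sweep suggested above, something along these lines can be scripted through the Abaqus Scripting Interface and run with "abaqus cae noGUI=...". The model, part, and job names are placeholders, so check the calls against the Scripting Reference for your version.

# Rough sketch of a global seed-size sweep for a mesh sensitivity study.
# Model/part/job names are placeholders; verify the calls against your Scripting Reference.
from abaqus import mdb

part = mdb.models['Model-1'].parts['Part-1']   # hypothetical model and part names

for seed in (20.0, 10.0, 5.0):                 # coarse to finer global element sizes
    part.seedPart(size=seed, deviationFactor=0.1)
    part.generateMesh()
    job = mdb.Job(name='sweep_%g' % seed, model='Model-1', numCpus=1)
    job.submit()
    job.waitForCompletion()                    # compare key results before refining further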

Look into the submodeling.

A picture of your model or similar application would help.

As a rule of thumb given to me by my first manager: if you zoom to fit and all you see is nodes, you are not doing finite element modeling, you are doing infinite element modeling.



Rob Stupplebeen
 