Eng-Tips is the largest engineering community on the Internet


How many elements/nodes is too much to handle?


maheshh

Mechanical
Aug 27, 2003
My workstation profile is as follows:
4 cores (each 2.4 GHz)
4 GB RAM
2 HDD, each 300 GB and each at least half empty

My model (and this size will at least double if I want to incorporate all the aspects of the assembly):
NUMBER OF ELEMENT TYPES = 1
291952 ELEMENTS CURRENTLY SELECTED.
444383 NODES CURRENTLY SELECTED.
NUMBER OF SPECIFIED CONSTRAINTS = 2544

This model is currently being solved as linear and has been simplified as much as possible. Once I add all the components of the assembly, I am talking about non-linearities, contact issues, and possibly large-displacement issues.

Ansys is at times having trouble handling the static/linear problem above. How far can I stretch this? I am using the default memory management options and haven't played around with those.

But I think my workstation is top of the line, and I was hoping I could solve big problems on this machine. Remember, I am not trying to get results quickly. My intention is to model in as much detail as possible, even if it takes 24 hours to get the results.

Please advise.

Thanks,
Mahesh
 
Mahesh,
Your model is larger than typical but not big by any means. Making models unnecessarily large is not wise, as it does nothing but consume extra hard disk space. It might be a little larger than a 32-bit OS can handle using the sparse solver. Try issuing the BCSOPTION command before solving, with a value such as 3000 MB (provided you have the /3GB switch activated in XP). Otherwise, I would use the PCG solver for the analysis; it will surely solve your problem.
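
For reference, the sparse-solver memory setting Brian describes takes its size in MB; a minimal sketch of how it might look in the solution phase (the 3000 MB figure assumes the /3GB switch is active):

```apdl
! Sketch: force the sparse solver workspace to 3000 MB before solving
! (adjust the size to your machine's available memory)
/SOLU
BCSOPTION,,FORCE,3000   ! Memory_Option = FORCE, Memory_Size in MB
SOLVE
```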

Another option you may want to employ is submodeling. It's much more efficient storage-wise and also relatively quick to do.
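
As a rough outline of the classic cut-boundary submodeling workflow (command names from the standard procedure; the file name `submod` is a placeholder):

```apdl
! --- Coarse model: solve first and keep its results file (file.rst) ---
! --- In the fine submodel: write the cut-boundary nodes to a file ---
NSEL,S,LOC,X,0        ! example only: the selection criterion is model-specific
NWRITE,submod,node    ! write selected nodes to submod.node
ALLSEL
! --- Back in the coarse model results: interpolate DOFs onto those nodes ---
/POST1
CBDOF,submod,node,,submod,cbdo   ! creates submod.cbdo containing D commands
! --- Submodel solution: apply the interpolated boundary conditions ---
/SOLU
/INPUT,submod,cbdo
SOLVE
```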

Good luck,
-Brian
 
Thanks for the quick reply. I will try to reduce the model size.

In the meantime, could you point me to some resources on submodeling that I can refer to? The Ansys help does an OK job, but not a great one.

Thanks again.
 
Hi,
I have the same problem: my PC can't handle the simulation. I have an idea but don't know how to do it. If my PC is on a local area network, can I combine my PC with others to do the simulation? If yes, how?

Rock Li
 
Rock Li,
You need a distributed processing license in order to run jobs on more than one machine. For what that costs, I'm sure you could afford some pretty nice upgrades to your machine a few times over. If you haven't done so already, try using an iterative solver. Issue the following in solution:

EQSLV,PCG

This is a very robust solver for 95%+ of problems, as long as you don't have friction in your model. If you find that you still cannot get it to solve, issue this in addition:

MSAV,ON

Also, it may be worthwhile having a talk with your manager or IT person to see if you can get more memory for your machine. Memory is becoming ridiculously cheap: you can get 4 GB of ECC registered RAM for under $500. That's extremely cheap compared to the total number of hours lost trying to get a job to run locally.
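
Putting Brian's two commands in context, both belong in the solution processor before SOLVE; a minimal sketch:

```apdl
/SOLU
EQSLV,PCG      ! iterative PCG solver (symmetric stiffness matrices only)
MSAV,ON        ! PCG memory-saving option: avoids assembling the full stiffness matrix
SOLVE
```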

Good luck,
-Brian

 
Brian
You said in the previous post that PCG works well for 95% of problems as long as one does not have friction in the model. Why do you say so?
If it is a contact problem, does PCG affect the accuracy of the solution, or does it have convergence issues when used on contact problems?

Thanks in advance.
Mahesh
 
Mahesh,
The PCG solver is an iterative rather than a direct solver, so it cannot solve an unsymmetric stiffness matrix, which can occur in problems with friction.
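
In practice, a model with frictional contact would fall back to the sparse direct solver; a minimal sketch (the unsymmetric Newton-Raphson option shown is only needed when friction actually makes the matrix unsymmetric):

```apdl
/SOLU
EQSLV,SPARSE   ! direct sparse solver; copes with unsymmetric stiffness
NROPT,UNSYM    ! unsymmetric Newton-Raphson procedure, if required
SOLVE
```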

-Brian
 
Hi,
There are various techniques for reducing a model's memory footprint:
- submodeling: you build a "general model" which is globally "coarse", then build a partial model of only the zone of interest, which is as "fine" as possible, and get its BCs from the displacement field of the general model.
- sub-structuring: you "condense" a part of your model into a matrix (a so-called "super-element"). The pilot node(s) of the super-element are the connection points with the rest of the model (or with another super-element). The only "con" of this technique is that, since the matrix is established "in one shot" and then never changed, the sub-part necessarily has linear behaviour. So you cannot "condense" a region with a contact inside, for example, or that contact will behave linearly.
- most simply, you can tune the mesh controls so that you have a fine mesh only where you need it.
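
For the sub-structuring option, the classic ANSYS generation pass condenses the linear region into a superelement, which is then brought into the main model via a MATRIX50 element; a rough sketch (the name `part1` and type number 10 are placeholders):

```apdl
! --- Generation pass: condense the linear region into superelement "part1" ---
/SOLU
ANTYPE,SUBSTR        ! substructure analysis
SEOPT,part1,2        ! superelement name; generate stiffness and mass matrices
M,ALL,ALL            ! master DOFs at the interface nodes (select them beforehand)
SOLVE
FINISH
! --- Use pass: bring the superelement into the main model ---
/PREP7
ET,10,MATRIX50       ! superelement element type
TYPE,10
SE,part1             ! read in part1.sub
```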

However, in my opinion the first thing to ask is, as the O.P. did, WHY is the mentioned model not fitting in memory? From the info provided, I personally consider it a medium-size model, and I don't have a cluster with TBs of RAM to solve it on... My workstation has a single processor with 4 GB RAM and 32-bit Windows XP, which of course makes the fourth GB of RAM completely useless. The /3GB key is set in boot.ini. The largest model I managed to keep in memory had more than 500000 quadratic elements (including something like 32 contact pairs, but non-linearity has no implications as regards memory).
So, first of all, I'd ask the IT guys to "clean" the O.S. as much as possible, starting only the strictest minimum of services. Reduce the O.S. "base" occupancy as far as you can and you'll immediately see the benefits. Also clean the registry: once you have cleared it of any useless keys, export it to a file, completely clear the original, and re-import it (HAVE THIS DONE BY AN IT EXPERT!).
As regards running ANSYS on several machines: despite what is declared, you can do it even without a distributed-solver license. Up to 2 processors, the use of an MPICH system is allowed freely. Be aware that each core of the latest generation of processors is seen as an individual "processor" by Ansys, so if possible (it WAS possible with the Pentium 4), set the machine's BIOS to disable things like hyper-threading. Note that your machine won't be twice as slow (you will probably see only a very small difference, if any). Do the same on the second machine you want to "parallelize", but note that it must have an architecture identical to the first one. Install MPICH and set it up correctly (see the manual).
Then set NUMPROC=2 (this key is obsolete in v.11). Remember: this will work if and only if ANSYS can "see" ONE single processor on each machine, otherwise you're dead... If you have 4 cores, I presume you have a dual-processor machine, so you are dead immediately, but I wrote the above in case you want to build a dedicated system for ANSYS.
Remember also that not all the ANSYS processes are multi-processor-ready: for example, the PCG solver is, but the pre-conditioning part is not! You can notice this because on a two-core processor the pre-conditioning uses 50% of the "total processor", whereas the iterative solve uses 100%.
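
For shared-memory parallel runs on a single machine (no distributed license involved), the processor count the poster calls NUMPROC can also be requested from APDL; a sketch, assuming classic ANSYS:

```apdl
/CONFIG,NPROC,2   ! request 2 processors for shared-memory parallel operations
/SOLU
EQSLV,PCG         ! the PCG iterative phase runs in parallel (pre-conditioning does not)
SOLVE
```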

Hope this helps in some way...

Regards
 
I have a question: were I to have a multi-core processor, would ANSYS make use of all the cores, or would it run on one core only? Would a distributed processing licence be needed?
 
Hi,
DonTonino,
1- it would run on one core, unless NUMPROC is set to 2 (things are a bit different in v.11)
2- you need a Distributed Processing license only for more than 2 processors

Regards
 