Eng-Tips is the largest engineering community on the Internet


python script parallel post-processing

Status
Not open for further replies.

mellejgr (Materials)
Feb 4, 2019
Hi everyone,

I am collecting results from a very large (4 GB) .odb file using a Python script, and it is taking a very long time. Does anyone know whether it is possible to run the following code in parallel, or have any suggestions to speed it up? As you can see, I already use a node set so that I only collect data from the points of interest, and I am looking at a single frame.

odb = session.openOdb(name=pathway+job+'.odb')
a = odb.rootAssembly
step = odb.steps['Step-1']

FRAMEOFINTEREST=10

nodeNAME='NODES'+str(nx*ny)
nodeSET = a.nodeSets[nodeNAME]

du = []
dv = []
dn = []

frame = step.frames[FRAMEOFINTEREST]
field = frame.fieldOutputs['U'].getSubset(region=nodeSET)

for value in field.values:
    n = value.nodeLabel
    u, v = value.data

    du.append(u)
    dv.append(v)
    dn.append(n)

Kind regards,
Melle
 

You are appending to lists on a large scale, and in Python that can get really slow. Try creating a list of the correct size up front and then assigning the values to the correct index.
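As a rough sketch of that preallocation idea (using a hypothetical values list to stand in for field.values, since the actual ODB data isn't available here):

```python
# Hypothetical stand-in for field.values: (nodeLabel, (u, v)) tuples.
values = [(n, (float(n), float(n) * 2.0)) for n in range(5)]

num = len(values)
dn = [0] * num    # preallocate the lists once at full size...
du = [0.0] * num
dv = [0.0] * num

for i, (label, (u, v)) in enumerate(values):
    dn[i] = label  # ...then assign by index instead of appending
    du[i] = u
    dv[i] = v
```

The same structure should carry over to the loop in your script: take `num = len(field.values)` once, then fill the preallocated lists by index.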

In general: I don't know what your intention is, but writing out displacement components could be done much faster with Report -> Field Output.