Hyjal
Mechanical
- Apr 20, 2012
Hi all,
I have a really annoying problem I can't solve. The idea is this: I simulate something, and then a .py file reads the ODB, which contains a field output at frame [-1] (the last one) for a node set I defined. The node set is an assembly node set that contains nodes from instances of many different (!) parts. Since I define the set in the assembly section of my input file, I write it out sequentially, in the order in which I need the nodes to appear in my output.dat file. So the MATLAB code that generates the .inp looks something like:
% --- boundary points ---
for i = 1:15
    % odd points: append nodes of this instance to the assembly set
    fprintf(fid, '*Nset, nset=BOUNDARYPOINTS, instance=BC_ODD%d-1, generate\n', i);
    % add points to set (first node, last node, increment)
    fprintf(fid, '%d, %d, %d\n', 1, 7, 1);
end
I then read the data out with a .py file, using something like:
import odbAccess

odb = odbAccess.openOdb('simulation.odb', readOnly=True)
u = odb.steps['Step-1'].frames[-1].fieldOutputs['COORD'].values
f = open('output.dat', 'w')
for i in u:
    f.write('%f,%f,%f\n' % (i.data[0], i.data[1], i.data[2]))
f.close()
... the order of the nodes is totally randomized (there is some minor logic left, but not much), which is very bad in this case: the simulated nodal coordinates are fed to a batch process that generates Bézier polygons, so order is crucial for the automation.
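As a fallback I could at least make the order deterministic by sorting the field values by instance name and node label (only a sketch, untested as written, reusing odb and f from the script above) - but that would give Abaqus's numbering, not the order I defined the set in:

# untested sketch: sort by (instance name, node label) for a
# deterministic ordering - though not the set-defined ordering I need
vals = odb.steps['Step-1'].frames[-1].fieldOutputs['COORD'].values
ordered = sorted(vals, key=lambda v: (v.instance.name, v.nodeLabel))
for v in ordered:
    f.write('%f,%f,%f\n' % (v.data[0], v.data[1], v.data[2]))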
Can anyone tell me if either:
1) there is a way of enforcing the internal numbering of a node set, so that it remains invariant with respect to the output database, or
2) there is a clever way to tweak the .py file above to read only certain sets and then process them as a sequence of sets (which probably have to be declared under *Output specifically, unless I want everything in my ODB) - see the sketch after this list.
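For (2), what I have in mind is something like the following (only a sketch - I'm assuming the assembly set name is stored in upper case in the ODB, and I don't know whether getSubset preserves the order the set was defined in):

# sketch for (2): restrict the field output to the assembly node set
region = odb.rootAssembly.nodeSets['BOUNDARYPOINTS']
coords = odb.steps['Step-1'].frames[-1].fieldOutputs['COORD'].getSubset(region=region)
for v in coords.values:
    f.write('%f,%f,%f\n' % (v.data[0], v.data[1], v.data[2]))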
Any hint would be fantastic,
thanks a lot.
Hyjal