bigmanrunning
Mechanical
- Mar 22, 2011
- 10
Hi Guys,
I have a number of SDV values that I integrate over the volume of a specimen for each time increment. The simulations I'm running can have a few hundred time increments, and the array for each SDV value has about 0.5 million elements. I use NumPy for all calculations, which is fast, but most of the script time is spent reading values from the odb into NumPy arrays. Slice transfers aren't supported for odb access, so I'm curious whether anyone has found an efficient method of reading large amounts of data from an odb into arrays.
The way I read from odb currently is:
for i in range(1, frameLen, strid):
    # Set the current frame and time values
    currentFrame = odb.steps[step1.name].frames[i]
    PMicroValues = currentFrame.fieldOutputs['SDV506'].values
    # Initialize in the first step
    if i == 1:
        length = len(PMicroValues)
        PMicroData = np.empty(length)
    for j in range(0, length):
        PMicroData[j] = PMicroValues[j].data
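One thing I've tried that shaves off some time is replacing the per-index assignment loop with a single `np.fromiter` call over the `.data` attributes. It's still a Python-level iteration over the value objects, so it's only a modest win, but it avoids the repeated array indexing. A minimal sketch (the `FakeFieldValue` class below is just a stand-in for the real odb FieldValue objects, so the snippet runs outside Abaqus):

```python
import numpy as np

# Stand-in for an odb FieldValue object; in the real script these come
# from currentFrame.fieldOutputs['SDV506'].values.
class FakeFieldValue:
    def __init__(self, data):
        self.data = data

values = [FakeFieldValue(float(k)) for k in range(500000)]

# One bulk transfer via a generator instead of a per-index loop;
# passing count= lets NumPy preallocate the output array.
PMicroData = np.fromiter((v.data for v in values),
                         dtype=np.float64, count=len(values))
```

If your Abaqus version is recent enough, it may also be worth looking at `fieldOutputs['SDV506'].bulkDataBlocks`, which (as I understand it) exposes the field data as NumPy arrays directly and skips the per-value Python objects entirely, but I haven't benchmarked it myself.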