Compression of odb results


bigmanrunning (Mechanical), Mar 22, 2011
I've found that .gz DEFLATE compression reduces most .odb file sizes by a factor of two or more, depending on model size. However, CPFE simulations with 1-2 million DOF sometimes push 10k-20k increments in a step, so I end up with very large .odb files that can't be reduced further, given the memory usage during the simulation and the number of variables I need for post-processing. One of these .odb files isn't terrible on its own, but in studies with 100+ of these simulations, storing the data becomes a pain.
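For the storage side alone, the .gz compression mentioned above can be scripted with Python's standard library. A minimal sketch (the function name and the `study_dir` path are placeholders, not anything from Abaqus):

```python
import gzip
import shutil
from pathlib import Path

def compress_odb(path, remove_original=False):
    """Gzip-compress a single .odb file, writing <name>.odb.gz alongside it."""
    src = Path(path)
    dst = src.parent / (src.name + ".gz")
    with open(src, "rb") as f_in, gzip.open(dst, "wb", compresslevel=6) as f_out:
        # Stream in chunks so multi-GB .odb files don't have to fit in memory.
        shutil.copyfileobj(f_in, f_out)
    if remove_original:
        src.unlink()
    return dst

# Compress every .odb under a study directory (hypothetical path):
# for odb in Path("study_dir").rglob("*.odb"):
#     compress_odb(odb, remove_original=True)
```

Note this only shrinks files at rest; a compressed .odb has to be decompressed again before Abaqus/Viewer can open it.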

I'm sure I'm not the only one with this issue. Does anyone have a Python script, or know of a method, for retroactively rewriting the .odb files with some frame stride, perhaps dropping select data?
 
Hi,

There is a C++ program called odbFilter included with the Abaqus installation. If you create a duplicate of your .odb by running a datacheck, you can then use odbFilter to copy the results for specific frames from the large .odb to the duplicate .odb.

It is described in the Abaqus Scripting User's Guide:

Section 10.15.4 Decreasing the amount of data in an output database by retaining data at specific frames

Often I will run analyses and save data from multiple frames in each step for post-processing. Later, when archiving the files, I duplicate the .odb and use odbFilter to copy the results from the final frame of each step into the duplicate. This reduces the size of the .odb significantly.
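The frame-selection policy in a workflow like this can be worked out up front in plain Python before handing the indices to odbFilter (or the Abaqus scripting interface). A small sketch, where the stride-plus-last-frame rule is just one assumed policy, not anything odbFilter prescribes:

```python
def frames_to_keep(n_frames, stride, keep_last=True):
    """Return sorted frame indices to retain for one step: frame 0,
    every `stride`-th frame after it, and optionally the final frame."""
    if n_frames <= 0:
        return []
    keep = set(range(0, n_frames, stride))
    if keep_last:
        keep.add(n_frames - 1)  # always preserve the converged end state
    return sorted(keep)

# A step with 10 frames, keeping every 4th frame plus the last:
# frames_to_keep(10, 4) -> [0, 4, 8, 9]
```

Keeping the last frame of each step regardless of stride matches the archiving approach described above, where only the final frame per step is retained.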

Hope this helps,
Dave
 