Dear jhml
My thoughts are that the NECESSARY quality assurance issues should encapsulate the modelling that has been carried out. This should then be supplemented by a modelling file which cross-references the QA and computer files. I quote from a publication of mine (in press) on the principles of what the QA documentation should contain:
" By now I hope that you are convinced that some sort of QA should be carried out for all analyses. My view is that the following stages should be adopted:
· Pre-analysis definition that should be independently
assessed and approved. This should cover:
o A specific and unambiguous statement of the objective
of the analysis. General statements like “to find the
stresses in the shaft” are not good enough. The
objective should be linked to the overall project of
which the FEA will be just a small part.
o Critical assessment of the overall approach. Is FEA
the appropriate technique? Are the modelling
simplifications proposed suitable, e.g. 2D?
o A statement of what analysis results are needed and
how they are to be used. Again be specific and link
back to the objective. Do you need absolute values of
the results for use in a design code, e.g. BS5500 or
is prediction of trends or relative improvement
adequate?
o For complex analyses the model should be built and
tested in stages. Define milestone analyses so they
can be tested. Don’t get to the end of a complex and
expensive analysis only to find out it does not work!
· On-going checks. These are vital to make sure the analysis is on track.
  o If milestones have been defined, use these.
  o Check the results against independent data such as theoretical solutions or experimental results. See also §1.10.3.
  o Be willing to modify your analysis plan. It is rare that building, executing and analysing a model goes exactly to plan as originally devised.
· Post-analysis validation, approval or sign-off. Like the first stage, someone independent of the analysis detail should carry this out.
  o The analysis should be checked against the original plan. What deviations have been made and why?
  o Do the results make sense and do they agree with the validation? If not, why not? At this stage the reviewer should ‘dig around’ to find problems and in particular should ask ‘stupid questions’ about the analysis.
  o Are the main results what were needed as defined in the pre-analysis stage?
  o Finally, will (or do) the analysis results meet the project objective?
The above bullet points are rather general, but they should be appropriate to most analyses. The detail and formality of the procedures to implement the above will need to be tailored to your products, company and industry. Critical components and/or products, where the potential cost of getting the analysis wrong is high (in human or monetary terms), will need much more stringent procedures than non-critical parts or components."
This approach enables the record keeping to be done as part of the QA, which should save time. Independent review should help keep the QA and records in good order. I hope that this helps.
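For what it is worth, here is a purely hypothetical sketch (in Python, just to illustrate the sort of fields I mean) of one entry in the kind of modelling file I described, cross-referencing the QA documents and the computer files. The field names and file names are illustrative assumptions only, not from any standard, and a spreadsheet would capture the same information just as well.

```python
# Hypothetical sketch only: one entry in a modelling file that cross-references
# the QA documents and the computer (model/results) files for a single analysis.
# All field names and example values are illustrative, not part of any standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnalysisRecord:
    job_id: str                                             # project / analysis identifier
    objective: str                                          # specific, unambiguous objective (pre-analysis stage)
    model_files: List[str] = field(default_factory=list)    # mesh, input decks, results files
    qa_documents: List[str] = field(default_factory=list)   # pre-analysis approval, on-going checks, sign-off
    milestones: List[str] = field(default_factory=list)     # staged check analyses and their outcomes
    deviations: str = ""                                    # deviations from the original plan, and why

# Example entry (all names hypothetical)
record = AnalysisRecord(
    job_id="SHAFT-001",
    objective="Assess peak shaft stresses against the project design code",
    model_files=["shaft_v3.inp", "shaft_v3.rst"],
    qa_documents=["pre_analysis_approval.doc", "post_analysis_signoff.doc"],
    milestones=["2D axisymmetric check model - agreed with hand calculation"],
    deviations="Mesh refined at fillet after first milestone check",
)
print(record)
```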
TERRY