
Extent of software validation


GD_P
Structural
Apr 6, 2018
Hello community,

I have a question regarding the validation of structural analysis and design software.
Since almost all of us use software for structural design:
Do you validate it, and to what extent?

1) Analysis

2) design as per the applicable code

3) Applicability of points 1 & 2 for:

a) Only for simple members, i.e., one beam and one column, OR

b) 2D/3D frames with columns, beams, bracing, multiple storeys, etc.

4) Other tools such as response spectrum analysis, wind load generation tools, time history analysis etc.

I think the validation should cover all the aspects mentioned in points 1 to 4;

otherwise, if the analysis is wrong but the design is correct, such validation doesn't mean anything.

We use STAAD.Pro and are thinking of validating it. I just want to know how other engineers handle this.

Any help would be appreciated.


GD_P
 
By the way, STAAD.Pro already ships with verification examples (results compared against analytical solutions of identical problems from the literature) that you can use.
 
Many states in the US require that you validate the software you use. The intention is not to re-create the algorithms in the program, but to assure yourself that the output you are getting is similar to what you would get by hand calculations. With that in mind, a simple validation, and documentation of that validation, should be adequate. My suggestion is to model a simple beam, a rigid frame, a P-delta analysis, and a plate of some sort, and compare each to hand calcs of the same.
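For the simple-beam leg of that check, a minimal sketch of what the comparison might look like (the section properties, loads, and "software" numbers below are hypothetical placeholders, not from any real model):

```python
# Hand-calc benchmark for a simply supported beam under a UDL, to compare
# against a software model of the same beam. All numbers are hypothetical.
E = 200e9        # Pa, steel modulus (assumed)
I = 8.36e-5      # m^4, section second moment of area (assumed)
w = 25e3         # N/m, uniform load
L = 8.0          # m, span

M_hand = w * L**2 / 8                    # max midspan moment, N*m
d_hand = 5 * w * L**4 / (384 * E * I)    # max midspan deflection, m

# Values read off the software model of the identical beam (placeholders):
M_soft, d_soft = 200.0e3, 0.0798

for name, hand, soft in [("M_max", M_hand, M_soft), ("delta_max", d_hand, d_soft)]:
    diff = abs(hand - soft) / abs(hand)
    print(f"{name}: hand = {hand:.4g}, software = {soft:.4g}, diff = {diff:.2%}")
```

Keeping a printout like this in the project file is usually enough to document the validation.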
 
I start with very simple models and work my way up. For example, I have been trying to learn to use SAP2000 for steel design. I started with a simply supported beam with a point load. I went from there to one with various lateral bracing setups, then to a girder with point loads, then to a bay with the program figuring out the load going to the beams and girders. At almost every step, it was doing something I didn't expect, so I had to keep working through models until I came up with how I must use the program to get what I want. If it had behaved as I expected with the first model or two, I probably would've jumped straight to the final model with the full bay. (As an aside, it's scary to me that SAP2000 has been around this long, and is used by so many engineers, yet still behaves in unexpected ways. I'd hate to think what a design I would get if I just used what comes out of there by default.)

The bottom line is the engineer must be sure it is doing what is intended. The level of effort required depends on how the program behaves.
 
In my opinion, it is neither practicable nor desirable to attempt a comprehensive validation of any software.

You should always validate the results (unless the results being wrong would have no significant adverse consequences). The validation may range from simply comparing with results for similar structures to carrying out an independent analysis with different software.

Doug Jenkins
Interactive Design Services
 
GD_P:
Why not have a couple of younger engineers in your office, or anyone/everyone for that matter, spend a little time modeling and running a few old, pre-software engineering problems or structures that were handled by other means, and compare the outcomes and final results. Alternatively, on current problems, you model and run your structure and have another person review the model and results, and vice versa.
 
For something like STAAD or RISA, I run it against hand calculations or textbook examples and compare the results. As Kiltor said, STAAD and RISA and many others already have benchmark problems worked out showing the method.

Ian Riley, PE, SE
Professional Engineer (ME, NH, VT, CT, MA, FL) Structural Engineer (IL)
American Concrete Industries
 
I don't know about the necessity of validating the calculations of design software, but I do think it is imperative to validate the results to ensure that the model was properly constructed and the loads were input correctly. To that end, I must disagree with IDS - checking one 'black box' with another doesn't qualify as a real validation.
 
"To that end, I must disagree with IDS - checking one 'black box' with another doesn't qualify as a real validation."

It depends how you do it.

Relying on black books can be just as misleading as relying on a black box.

And you don't have to use computer software as a black box, you can use it to enhance your understanding of how a structure behaves.

Doug Jenkins
Interactive Design Services
 
I agree with what you're saying, HotRod, but I'd also caveat that you're unlikely to make the same error twice, so checking one black box against another does provide a double check against input errors. You're of course correct that it may not prevent modeling errors or other mistakes of that sort.

And I agree with your overall point that it's far better to check the results against simplified hand calcs, rules of thumb, past projects, etc., as a more accurate way of validating a black box's results.

Ian Riley, PE, SE
Professional Engineer (ME, NH, VT, CT, MA, FL) Structural Engineer (IL)
American Concrete Industries
 
Clients are not paying for software validation on each job. You pay money for the software to do the math faster than you can by hand. The software vendor owes a duty of care that the product is fit for its intended use and that the math is right, by validating its algorithms.

A company selling software that does the math wrong will not last long: it will either be sued or keep patching its mistakes, which will ruin its reputation. In my experience, the software is right 99.9% of the time. The inputs can be wrong, but the math is right; therefore you need to validate that the inputs are right on every job.

As for the 0.1% that's left, the error is usually flagged with an error code. If not, the math hit a glitch somewhere and the output is obviously wrong (this is where common sense and engineering judgement come in), and a small input adjustment will fix it.

The first stage of site investigation is desktop and it informs the engineer of the anticipated subsurface conditions. By precluding the site investigation the design engineer cannot accept any responsibility for providing a safe and economical design.
 
I'd disagree on one point Geo; I've found errors in software by performing validation checks. I of course submitted these to the software supplier and they were patched promptly; but just because it's something you pay for doesn't mean it's infallible.

Of course we can't practically check everything, nor should we. I check for gross errors or for when I may be exceeding the limitations of the software.

I agree that input errors and not understanding the limitations of the software are far more common sources of design errors, and also far more likely to cause issues with the final structure. Thus, I wouldn't suggest it's incorrect for an engineer such as yourself to choose not to validate the software itself. I simply point out that I have found errors in software before.

Ian Riley, PE, SE
Professional Engineer (ME, NH, VT, CT, MA, FL) Structural Engineer (IL)
American Concrete Industries
 
I will answer your queries as per the points mentioned in your post, plus some more.


1) Analysis
This part is nearly flawless in all self-respecting software, because the formulation is purely mathematical and computers handle it very well.
If there are exceptions, they are well documented in the literature, and the software will give a proper warning.

2) Design as per the applicable code
This is the most important part, which you MUST validate across at least 5 to 10 cases, the first being a simple case and each subsequent case building on the previous one.
For example, in steel structures: a cantilever column subject to tension, then tension plus a uniaxial moment, then add the biaxial moment, then add shear; then convert the tension to compression and repeat. (A sketch of such a cascade appears at the end of this post.)

3) Applicability of points 1 & 2 for:

a) Only for simple members, i.e., one beam and one column, OR

b) 2D/3D frames with columns, beams, bracing, multiple storeys, etc.

A simple beam will suffice, since the analysis of a frame is essentially an assembly of single-member analyses.

4) Other tools such as response spectrum analysis, wind load generation tools, time history analysis, etc.

Good point; these you have to validate by comparing against a solved model. The problem may not be the software per se, but your understanding of what the software expects.
For example, for wind load generation the software might say "define area in X direction"; your interpretation of that statement must match the one in the mind of the software developer.
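Picking up the cascading cases from point 2, here is a minimal sketch of how such a sequence might be scripted. It uses the H1-1-style axial-plus-flexure interaction form from AISC 360 as the example code check; the required strengths and capacities below are hypothetical placeholders, with the capacities assumed to have been computed by hand beforehand:

```python
# Cascading validation cases for a combined axial + flexure code check.
# Interaction follows the AISC 360 eq. H1-1 form. Numbers are hypothetical.
def interaction_ratio(Pr, Pc, Mrx, Mcx, Mry=0.0, Mcy=1.0):
    p = abs(Pr) / Pc
    m = abs(Mrx) / Mcx + abs(Mry) / Mcy
    # H1-1a when the axial ratio is high, H1-1b otherwise
    return p + (8.0 / 9.0) * m if p >= 0.2 else p / 2.0 + m

Pc, Mcx, Mcy = 1200.0, 310.0, 145.0   # kN, kN*m, kN*m (hand-computed capacities)
cases = [
    ("tension only",      dict(Pr=400.0, Pc=Pc, Mrx=0.0,   Mcx=Mcx)),
    ("tension + Mx",      dict(Pr=400.0, Pc=Pc, Mrx=120.0, Mcx=Mcx)),
    ("tension + Mx + My", dict(Pr=400.0, Pc=Pc, Mrx=120.0, Mcx=Mcx, Mry=40.0, Mcy=Mcy)),
]
for label, kw in cases:
    print(f"{label}: hand ratio = {interaction_ratio(**kw):.3f} vs the program's unity check")
```

Because each case adds one effect to the previous one, a mismatch immediately tells you which term of the code check the program is treating differently than you expect.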

 
Seems like a waste of time. STAAD is about as good as it gets (IMHO). Any "issues" you will find with it are the same ones you will have with any software as far as modeling assumptions go.
 
IDS, I wasn't suggesting that formulas or equations out of a book should be trusted exclusively (especially if the engineer doesn't understand the basis of the equation and how each parameter is involved). What I was attempting to convey is that I believe the best way to check a design analysis is with a different approach to the analysis. An FEA should be checked by hand methods or software that utilizes a different type of analysis, not checking one FEA with another FEA - the inputs and output are too similar, resulting in the same erroneous assumptions leading to the same erroneous results. I'm not talking about input typos that would not likely be repeated, but modeling assumptions that most likely would.

A number of years ago, one of the engineers I worked with did a frame analysis for a multi-column bent (bridge pier) using a direct stiffness (matrix) approach, of which I had only a rudimentary understanding. I went through his analysis to understand it better, but I checked it using moment distribution, so that I would not repeat any errors or erroneous assumptions that he had made.
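As an illustration of that kind of independent cross-check, here is a minimal moment-distribution sketch for a two-equal-span continuous beam under a uniform load (the span, load, and tolerance are hypothetical); the hand result at the centre support, wL^2/8, is the number you would compare against the matrix analysis:

```python
# Moment distribution (Hardy Cross) for a two-equal-span beam A-B-C under a
# UDL w with constant EI - an independent check on a direct-stiffness run.
w, L = 10.0, 6.0                                     # kN/m, m (hypothetical)
FEM = w * L**2 / 12.0                                # fixed-end moments
M = {"AB": -FEM, "BA": +FEM, "BC": -FEM, "CB": +FEM}
joints = {"A": ["AB"], "B": ["BA", "BC"], "C": ["CB"]}
DF = {"AB": 1.0, "BA": 0.5, "BC": 0.5, "CB": 1.0}    # distribution factors
far = {"AB": "BA", "BA": "AB", "BC": "CB", "CB": "BC"}

for _ in range(50):                                  # balance joints, carry over half
    worst = 0.0
    for ends in joints.values():
        unbal = sum(M[e] for e in ends)
        worst = max(worst, abs(unbal))
        for e in ends:
            M[e] -= unbal * DF[e]
            M[far[e]] -= 0.5 * unbal * DF[e]
    if worst < 1e-9:
        break

print(f"M_B = {M['BA']:.3f} kN*m  vs hand value wL^2/8 = {w * L**2 / 8:.3f}")
```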
 
"1) Analysis
This part is nearly flawless in all self respecting software. This is so, because the formulation is all mathemetical, and computers can handle this part very well.
If there are exceptions, it is well mentioned in the literature, and the software will give a proper warning."

Utter nonsense. Any serious software I have ever used has a bug list, and some never get fixed in the life of the program.

Cheers

Greg Locock


 
I've worked in structural analysis software. I've worked for a company that dedicated a lot of time and budget to validating their software (as part of ISO and NQA procedures). I've also worked for a company that dedicated very little time (because they were a small company and didn't have much time or manpower available) to validating software.

My thoughts on this topic are the following:
1) The ISO and NQA verification and validation procedures are good in concept. But, they can be really expensive. I think this is overkill for most small to medium sized engineering companies.

2) I should also point out that these procedures don't guarantee quality. Rather, they mostly involve a company's ability to document and audit its procedures. I know of one company that used to tout (back in the 1990s) how their program could be trusted because of their ISO or NQA certification. But, at the time, they were the most buggy and least trustworthy of the major structural software in the US. They followed their internal procedures, but the problem was that their management didn't really care about quality. And they didn't really respect their customers either.

3) When I first started working for a structural software company (rhymes with VISA), we didn't have much of a formalized verification process, but the company released great, reliable programs with relatively few bugs. Why? Because the company was very small and everyone who worked there was a very conscientious engineer who worked very hard to make sure whatever they worked on was right. Not just for the cases that you see for 90% of solutions, but for the rare cases as well.

4) It is impossible for an engineer at a small to medium size company to comprehensively validate the software they are using. However, that doesn't mean they should skip the basics.

5) Testing one black box program vs. another is actually (IMO) a very good idea. As long as they were developed independently of each other, the chances that they'd make the same mistakes are very low. This shouldn't be the only testing that is done, but it will simplify the testing of features that are difficult to validate (see the sketch at the end of this post). Also, it frequently teaches the user to understand the differences in the software itself. Why is program A's code check so much larger than program B's even though the forces are the same? Because the programs' default settings for unbraced length may be different, or because the user entered some values incorrectly in one program.

6) Personally, when I have time, I really enjoy putting together some simple textbook problems to test out a program, then walking through all the numbers to see where they match or don't match. To me, this is the best way to thoroughly understand a new program or new feature. But it takes a lot of time, and most engineering companies driven by tight schedules just don't have it. Most of the time, this doesn't fully validate the program / feature. However, it validates that I am using the program correctly, and incorrect use is the source of most errors that come out of a program.
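As a concrete (and entirely hypothetical) illustration of the cross-check in point 5, the comparison can be as simple as lining up exported member results from the two programs and flagging anything outside a tolerance; the member labels, values, and 2% threshold below are all placeholders:

```python
# Cross-check of member-end results exported from two independently
# developed programs. Labels and values are hypothetical.
program_a = {"B1_Mz": 152.4, "B1_Vy": 88.1, "C1_P": -412.0}   # kN*m, kN, kN
program_b = {"B1_Mz": 158.9, "B1_Vy": 87.8, "C1_P": -410.5}

TOL = 0.02   # flag differences above 2% (a judgment call, not a standard)
for key in sorted(program_a):
    a, b = program_a[key], program_b[key]
    rel = abs(a - b) / max(abs(a), abs(b))
    note = "  <-- investigate: different defaults or an input error?" if rel > TOL else ""
    print(f"{key}: A = {a:9.2f}, B = {b:9.2f}, diff = {rel:5.2%}{note}")
```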
 
I've used a structural analysis program for 30+ years (rhymes with LISA :)) and feel that by this time it is dependable.

That said, the more important thing by far is verifying the model itself, each and every time, very carefully.
The engineer's method of modeling reality, through the model itself and the boundary conditions, has a much greater impact than the internal matrices of a well-developed and mature program.

JoshPlum - ever seen this screen plot before?

[Screenshot: RISA plot]


 
JAE -

If that's RISA-2D, it predates my time with the software. I started using RISA-3D in 1996 around version 2.0. I should note, I was also using SAP90 around the same time.

You probably make the best point. The key is validating the model you created. Most errors in analysis are caused by the user, not the programs themselves! So, create some simple load cases to verify that the vertical and gravity loads follow the desired load path, and review deflected shapes, reactions, and the like. Very wise things to do for any model.
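In that spirit, even a blunt global-equilibrium check on a gravity case catches a surprising number of modeling errors. A minimal sketch (all loads and reactions below are hypothetical placeholders):

```python
# Global statics check: the vertical reactions must sum to the total applied
# gravity load, or the load path / supports are wrong. Numbers are hypothetical.
applied_loads = [120.0, 120.0, 85.0, 85.0, 240.0]   # kN, applied gravity loads
reactions = [162.2, 325.9, 161.9]                   # kN, vertical reactions from the model

total_load, total_reaction = sum(applied_loads), sum(reactions)
imbalance = abs(total_load - total_reaction) / total_load
print(f"applied = {total_load:.1f} kN, reactions = {total_reaction:.1f} kN, "
      f"imbalance = {imbalance:.3%}")
assert imbalance < 0.005, "equilibrium check failed - inspect supports and load input"
```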

Note: One point of disagreement. While the main parts of a program you've used for years can be considered tried and true by experience, that doesn't really apply to NEW features of the program. Over the years, I've seen (from both RISA and STAAD) bugs in new features that are shockingly bad. So, while I have a lot of trust in features that have been around for years and don't usually feel the need to validate them, that trust doesn't extend to features that have just been released.

That applies even more so to companies like RISA (which was recently acquired by Nemetschek), or STAAD / RAM back when they had just been acquired by Bentley; really, any company that has undergone a major restructuring. Are the people who had been there for years, and whom you really trusted, still around? If not, there are likely to be more bumps in the road with new features than you typically had before the restructuring.
 