The dangers of software and code changes

Eng16080 (Structural)
I use WoodWorks Sizer for sizing most wood members. I must have installed an update recently or inadvertently changed a program setting, because I just noticed the default code is set to ASCE 7-22 rather than ASCE 7-16, which means it was using the snow load combination D + 0.7S rather than D + S. Fortunately this came to light while I was manually checking a beam calc and noticed the end reactions were off.
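For a sense of how big the swing can be, here is a minimal sketch (the loads and span are assumed, illustrative values only) comparing the two combinations on a simple-span beam:

```python
# Illustrative only: assumed dead and snow line loads on a simple-span beam,
# comparing the ASD snow combinations D + S (ASCE 7-16) and D + 0.7S (ASCE 7-22).
# If the snow load itself is NOT updated to the newer strength-level map,
# the 0.7 factor silently drops the design reaction.

D = 200.0      # plf, assumed dead load
S = 400.0      # plf, assumed (service-level) snow load
span = 20.0    # ft, assumed simple span

w_716 = D + S          # D + S    -> 600 plf
w_722 = D + 0.7 * S    # D + 0.7S -> 480 plf

R_716 = w_716 * span / 2.0   # end reaction, lb
R_722 = w_722 * span / 2.0

print(f"D + S    reaction: {R_716:.0f} lb")
print(f"D + 0.7S reaction: {R_722:.0f} lb")
print(f"Unconservative by: {(1 - R_722 / R_716) * 100:.0f}%")
```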

I realize this is ultimately my error, but it makes me wonder how often errors like this occur, and if the code writers realize the potential problems caused by messing with these loads seemingly every other code cycle. I'm sure there are software users who wouldn't suspect any great harm in using the newest code in the analysis. (I'm not necessarily defending them.)

Sometimes I feel like it would be safer to write my own software for some of this stuff and just lock it to the codes I'm currently using (ASCE 7-16, etc.) and then use these same codes for the next 20 years or so (until I retire). Maybe it's not a perfect approach, but I doubt I'd ever be more incorrect than I was today due to the rather large difference between 0.7S and S.

I don't really have a question here, but wanted to mention today's screw up in the hopes that somebody else might avoid the same error. I always try to be careful but this one certainly caught me off guard.
 

Are you guys actually getting significantly different designs with the new combinations?
 
Are you guys actually getting significantly different designs with the new combinations?

The loads are supposed to go up and the load combination factor goes down. In my area (a special snow region) we increased the loads by 1/0.7....
There is no effect on the design due to that. I think the 'danger' is using the old snow maps with new load combinations.
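To make that explicit, here is a small check (assumed ground snow value) showing that a map scaled up by 1/0.7 paired with the 0.7 combination factor is a wash, while pairing the old map with the new combination is where the shortfall comes from:

```python
# Illustrative check (assumed ground snow value): the scaled-up map with the
# 0.7 combination factor nets out, while the OLD map with the NEW combination
# leaves the snow term roughly 30% low.

S_old_map = 40.0              # psf, assumed service-level snow from the old map
S_new_map = S_old_map / 0.7   # psf, same site if the map is scaled up by 1/0.7

design_old = 1.0 * S_old_map   # old combo: D + S (snow term only)
design_new = 0.7 * S_new_map   # new combo: D + 0.7S with the new map
design_mix = 0.7 * S_old_map   # the error case: new combo, old map

print(f"{design_old:.1f}  {design_new:.1f}")   # 40.0  40.0 -> no net change
print(f"{design_mix / design_old:.2f}")        # 0.70 -> 30% low on the snow term
```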
 
Really? You know a layperson who understands reliability theory? Really?
No theory required. Just, “15 in 10k code-compliant bldgs fail over a 50-year period.” It’s enough to get people thinking in terms of probabilities.

We already have the Beaufort scale and the Enhanced Fujita scale to relate wind speeds to building damage and “how it feels” to someone standing outside.
 
@JoshPlumSE I like the restaurant analogy. I might try to find a way to work that in. I think RWW0002's concern was more with communication through documentation: an owner gets a set of drawings saying the building was designed for a 120 mph wind load and a 57 psf ground snow load, then looks back at the building he had built 15 years ago, which was designed for a 100 mph wind load and a 20 psf ground snow load. I can see where that could get confusing, especially if they choose not to ask for clarification or just go with what their contractor tells them: "just those dumb engineers changing things again. Not that it matters. Look at this! Doesn't even move!" (as the contractor pushes with his hand on a frame designed to take a 10k lateral load).

The more I've thought about it, the more I agree with @RWW0002 that this is certainly a problem, but I don't think we agree on the solution. I don't want to go back to oversimplified factoring when we have the data to tailor our loading determinations in a more probabilistic way, but we do need to find a better way to communicate to the public what it is we're doing. Granted, I'm not in earthquake territory, but a lot of people assume we design all buildings to be fully functional after an earthquake. That typically is the outcome here, because ultimate EQ loads are less than service wind loads in most cases, but they're shocked when I tell them that anything less important than a hospital or fire station is likely to suffer damage and may have to be torn down - the overriding design philosophy for most buildings is that they stand long enough for people to escape and can then be demolished safely.

@ANE91 That is useful, for sure...but I think the question is more of what can I expect it to do, not what are the odds of it not doing it.
 
I don't want to go back to oversimplified factoring when we have the data to tailor our loading determinations in a more probabilistic way
Agreed entirely - but there seemed to be no reason not to just keep the 1.6 factor to scale up from an allowable/service wind pressure.

South Florida has an NOA approval system for exterior enclosures. Windows, wall assemblies, etc. are required to go through a pretty rigorous approval process to allow for installation in Miami-Dade and Broward County. These NOAs list the ASD/service pressures within their notice, even for products approved under an ASCE 7-10-and-beyond code cycle. Nothing within the approval process makes clear that these pressures are allowable - and that is a pretty significant hole in the entire process. Needlessly complicated.

When I was designing my first project out of school, I ended up sizing my lateral system by double-counting the 1.6 factor. ASCE 7-10 had just become the governing standard, but the typical load combination in the accompanying ACI code still used 1.6W. ETABS automatically created load combinations based on the ACI code chosen. I learned a big lesson in trusting computer software outputs. Luckily, this was a conservative mistake; I caught it before the project was submitted and was able to walk back some of the additional robustness of my lateral system. I am familiar with a high-rise project in South Florida that was submitted for permitting, and during a structural peer review it was determined that the building had been designed using what was effectively an allowable/service wind pressure as if it were ultimate. This occurred during a similar time period and led to a really hard time for the EOR in providing a resolution. All of it, once again, needlessly complicated code changes that impact practicing engineers.
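As a rough illustration of what that double count does (assumed pressure, not the actual project numbers): applying a legacy 1.6 factor to wind that is already at ultimate level inflates the strength-level demand by 60%:

```python
# Illustrative only (assumed pressure): ASCE 7-10 mapped wind is already at
# strength (ultimate) level, so the LRFD combination uses 1.0W. Letting
# software tack on a legacy 1.6 factor double-counts the load factor.

p_ult = 30.0                   # psf, assumed ultimate-level design wind pressure

demand_correct = 1.0 * p_ult   # 1.0W with ultimate-level wind
demand_doubled = 1.6 * p_ult   # legacy 1.6W applied on top of ultimate wind

print(f"Overdesign factor: {demand_doubled / demand_correct:.2f}")  # 1.60

# The opposite mix-up (treating an ultimate pressure as if it were
# allowable/service level) is unconservative by the same ratio.
```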

For those who haven't yet been impacted by ACI 318-19 - good luck. Every renovation project that increases loads beyond the IEBC allowances now likely has some kind of concrete shear deficiency.

Generally, I think the genie is out of the bottle when it comes to codes getting more complex. I do wish the different standards, building codes, and AHJs would come together and provide us with some consistency across the country. Let's get the whole country and all standards on a 6-year cycle. That would allow for better planning for practicing engineers.
 
EZBuilding: Agreed entirely - but there seemed to be no reason not to just keep the 1.6 factor to scale up from an allowable/service wind pressure.
Yeah, I never really understood this either. We did the same thing with Earthquake loads back in the 90's. That made a little more sense because we were factoring an "event" level force down to a service level force then scaling it back up to a strength level force.

Maybe the wind guys wanted to be consistent with the strength level seismic forces. That way it is easier to compare wind and seismic forces and see which one is going to control the design. I feel like this may be more important for wood structures when you're trying to compare nailing requirements and such.
 
@JoshPlumSE: yes, it's about getting to strength level, but it's also about consistent reliability and probabilities of failure across structures for various events. Unlike earthquakes, which are driven by (mostly) known fault lines with reasonable expectations of maximum energy release, hurricanes and wind events are far more varied in location and strength.

So for wind design, the mean recurrence interval of the predicted wind speed changes for each risk category in order to establish the reliability ratings that ANE91 quoted above. But the wind maps aren't the same for each one. Here's an overlay of the 4 to show that the contours don't align. So no simple factor - load or importance - will capture these variations across the different risk categories. Saying "we'll design this fire station to resist 50% more wind force" doesn't tell you anything about how reliable that building is going to be compared with the 'likely' wind loads.

[Image: overlay of the ASCE 7 wind speed maps for the four risk categories, showing that the contours don't align]
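As a purely hypothetical illustration of why a single flat factor can't reproduce the separate maps: velocity pressure goes with V², so the implied "importance factor" between two risk categories is the square of the ratio of the mapped speeds at that site, and that ratio changes from contour to contour. The speeds below are assumed, not taken from the maps:

```python
# Hypothetical mapped speeds (mph) at two assumed sites, Risk Category II vs IV.
# Velocity pressure scales with V**2, so the implied "importance factor"
# is (V_IV / V_II)**2 -- and it differs site to site because the maps'
# contours don't track each other.

sites = {
    "assumed inland site":  {"V_II": 105.0, "V_IV": 119.0},
    "assumed coastal site": {"V_II": 150.0, "V_IV": 165.0},
}

for name, v in sites.items():
    implied_factor = (v["V_IV"] / v["V_II"]) ** 2
    print(f"{name}: implied pressure factor = {implied_factor:.2f}")

# The two ratios come out different, which is exactly why one flat factor
# (e.g., "design for 50% more wind force") can't match the maps everywhere.
```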
 
I don't want to go back to oversimplified factoring when we have the data to tailor our loading determinations in a more probabilistic way, but we do need to find a better way to communicate to the public what it is we're doing.

Agreed entirely - but there seemed to be no reason not to just keep the 1.6 factor to scale up from an allowable/service wind pressure.
I wish I knew enough about the probabilistic determination to know whether it is feasible/practical to "factor down" the design criteria/maps to service level, as I am proposing, without sacrificing the overall method (similar to the way we currently factor down loads for ASD designs) while maintaining the "old" load factors.

I have to admit it has been a (long) while since I have dug into reliability theory or probabilistic analysis, and it is not an area I enjoy. I do see the merits of the approach and understand the need/desire to evaluate loading at the ultimate level. However, I maintain that loading should be communicated at service level. The easiest way to accomplish this, it seems to me, is to present service-level design criteria and factor loading up as appropriate for ultimate-level events, not the other way around.

FWIW, I would consider seismic to be an exception to this as there is not really as much of a general understanding in the public of seismic event scale, mechanisms, or general seismic load theory like there seems to be with gravity and wind loading.
 
First, I am in the ASD camp for the best reason I know of: it was what I learned first and what I feel the most comfortable with. The LRFD/USD methodology is understandable to me and I know how to apply it to concrete and steel, but I never learned to apply it to wood. I have never understood why ASD could not have been slightly modified to give similar results. Concrete somewhat started this, but that was understandable because concrete has a high DL in terms of beam or column weight (not in terms of density). It is also a less reliable material than steel and probably less reliable than wood. The explanation I heard 45 years ago was the concept that DL was more predictable than LL, which led to the 1.4 versus 1.7 factors. They were originally called "overload factors". If this was true, why not just keep ASD and change the LLs to more than 40 psf (say 50 psf) for residential floor live load and more than 100 psf (say 130 psf) for churches/restaurants, etc.?

Over the years, it has become a lot more statistics, probability, reliability and for me, also confusability. I have given up trying to understand wind and am thankful I don't live in a high seismic area.

Here is where I gave up on understanding wind. The current combos have 1.0 factors. I was told the wind map now factors up the wind to account for the older 1.6 factor. To me that is a mistake: factor everything or factor nothing, but don't jump back and forth. In 2020, Hurricane Laura, and in 2021, Hurricane Ida, came ashore at 150 mph (1-minute sustained wind speed). I am told 3-second gusts are higher than 1-minute sustained values, so I imagine the 3-second speed was greater than 150 mph. The ASCE 7-22 Risk Category II map shows those locations to be between 140 and 160 mph as a 3-second gust. Where is the "overload factor"? That map value is close to the actual wind speed, not one that produces a pressure 1.6 times higher. And the factor should be reflected in the pressure, not the speed, since the relationship is not linear. Where was I steered wrong in my beliefs about wind? Where am I incorrect in my thinking, if you are familiar with wind? I would really like to know. I can follow the calculations, not the logic.
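Since pressure varies roughly with V², here is a rough sketch (ASCE 7-style velocity-pressure form with assumed exposure coefficients; gust and pressure coefficients omitted) of why a 1.6 factor on pressure corresponds to only about a √1.6 ≈ 1.27 bump in speed - which is part of why the factor is hard to spot in the mapped speeds:

```python
import math

# Rough sketch, assumed coefficients: ASCE 7-style velocity pressure
# q = 0.00256 * Kz * Kzt * Kd * V**2 (psf, V in mph), with gust and pressure
# coefficients omitted. The point: pressure goes with V**2, so folding the
# old 1.6 load factor into the MAP only raises the mapped SPEED by about
# sqrt(1.6) ~ 1.27, not by 1.6.

Kz, Kzt, Kd = 0.85, 1.0, 0.85   # assumed, illustrative exposure/directionality values

def velocity_pressure(V_mph: float) -> float:
    return 0.00256 * Kz * Kzt * Kd * V_mph ** 2

V_service = 115.0                        # mph, assumed "old style" mapped speed
V_ultimate = V_service * math.sqrt(1.6)  # ~145 mph gives 1.6x the pressure

print(f"q at {V_service:.0f} mph:  {velocity_pressure(V_service):.1f} psf")
print(f"q at {V_ultimate:.0f} mph: {velocity_pressure(V_ultimate):.1f} psf")
print(f"pressure ratio: {velocity_pressure(V_ultimate) / velocity_pressure(V_service):.2f}")  # 1.60
```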

As far as explaining to the average person, you have to first clear up "sales misconceptions". I have heard numerous times something like "the contractor told me their building was designed to withstand 170 mph wind". I have to explain that it may not mean, "safely withstand", it could mean it is about to go.

I do apply LRFD to lifting weights: I lift 100 lbs and tell everybody I lifted 170.
 
First, I am in the ASD camp for the best reason I know of...
It seems to me every discussion around this mostly includes people that fall into 2 camps
1. More accurate/consistent is better
-Probabilistic analysis the best.
-Generally these are LRFD-folks (after all it is based on more of a probabilistic approach) and trend towards the academics - or at least come across that way
-Point out inconsistency in past reliability
-Push new more rigorous and more complex loading models in order to get more reliable and consistent designs and probability of failures
-ok with "new" methods if they are better/more accurate

2. ASD - simpler is better guys
-Generally ASD and likely "Green book" folks
-KISS and simple to apply and communicate design provisions are better
-Point to lack of significant difference in most designs due to the ever-complex codes and models
-Point out (as I have above) that confusing and ever changing loads can be problematic
-Would rather stick with "old" methods unless they are unsafe or result in significantly more efficient designs

What I have been attempting to do is split the difference between these two camps a bit without digging up too many old battles between ASD and LRFD - acknowledging the merits of new load determination while keeping it as practical as possible. It seems like most of the ASD folks, including myself, bow out of the discussion once the "probabilistic analysis" buzzwords get thrown out there and feel like LRFD and code complexity are being shoved down their throats.
Likewise, those in camp 1 seem to have determined that ultimate-level loading is the only way to go (in load development, analysis, and communication) and there is very little "give" when it comes to anything at "service level".
 
I have never understood why ASD could not have been slightly modified to give similar results.
Steel did this nicely. There's no allowable stress design in steel anymore - just allowable strength design. It's based on the LRFD but modified down to something engineers who have been doing it forever are more accustomed to. It also makes it easier when using it within wood structures. I've never tried to use LRFD with wood. As a natural and highly variable material, this is an area where allowable stress design makes sense. As I understand it, the LRFD design for wood is the opposite of ASD for steel - it's just allowable stress modified to bring it up to work with LRFD load combinations.
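A small numerical sketch of that relationship (typical AISC flexure factors shown for illustration, using the commonly cited pairing Ω ≈ 1.5/φ; the nominal strength value is assumed):

```python
# Sketch of how AISC's "allowable strength design" falls out of the same
# nominal strength as LRFD. Typical flexure factors shown for illustration:
# phi = 0.90 (LRFD resistance factor), Omega = 1.67 (ASD safety factor),
# with Omega commonly taken as about 1.5 / phi.

Mn = 100.0     # kip-ft, assumed nominal flexural strength
phi = 0.90     # LRFD resistance factor (flexure)
Omega = 1.67   # ASD safety factor (flexure), ~1.5 / phi

design_strength_lrfd = phi * Mn       # compare against factored (ultimate) moments
allowable_strength_asd = Mn / Omega   # compare against unfactored (ASD) moments

print(f"phi*Mn   = {design_strength_lrfd:.1f} kip-ft (LRFD)")
print(f"Mn/Omega = {allowable_strength_asd:.1f} kip-ft (ASD)")
print(f"ratio    = {design_strength_lrfd / allowable_strength_asd:.2f}")  # ~1.5
```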

Hurricanes throw another curveball into the confusion tangle. There are actually two sets of maps: one is based on recorded data, the other on predictive hurricane models. They are overlaid, and you essentially get the worse of the two for each location. Based strictly on recorded data, a lot of areas along the coast would have pretty low wind speeds. But just because we haven't recorded the wind speeds of a hurricane hitting a particular town doesn't mean it won't happen.

A quick search shows that the highest gust recorded in Ida was 172 mph. So risk category 2 buildings that saw that would have been cutting into their safety factors to stay standing and likely would have seen some damage. Risk Category 3 or 4 structures, on the other hand, should have been designed to remain operational during a storm and would have used a higher wind speed in their design - because while the odds of a storm that strong are low enough not to worry about directly for a single-family house, they are not low enough to gamble with access to medical care, first responders' ability to serve the community, or a chemical plant's ability not to explode.
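To put a rough number on how much of the margin a gust like that eats up (design speed assumed, pressure taken as proportional to V²):

```python
# Illustrative only: comparing a reported peak gust against an assumed
# Risk Category II mapped design speed, with pressure ~ V**2.

V_design = 150.0   # mph, assumed mapped ultimate design speed at the site
V_gust   = 172.0   # mph, reported peak gust in Ida (value from the post above)

pressure_ratio = (V_gust / V_design) ** 2
print(f"Demand vs. design pressure: {pressure_ratio:.2f}x")   # ~1.31x

# A Risk Category III/IV structure at the same site uses a higher mapped
# speed (longer MRI), so the same gust consumes a smaller share of the margin.
```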
 
It seems to me every discussion around this mostly includes people that fall into 2 camps...
I would assume many in Camp 1 likely use expensive software to do complete designs so the computer does the code crunching. I'm in Camp 2 and am here to make money and my projects don't require anything like that - so the simpler the better. Also, in light framed construction, serviceability tends to control most things so might as well keep the numbers consistent.
 
I'm in the split camp. Like XR, I want to make money, but I would prefer to do it using the best data and methods we can devise.
 
So risk category 2 buildings that saw that would have been cutting into their safety factors to stay standing and likely would have seen some damage.
That is the part that confuses me. Rather than ASD - use real loads and design to an allowable stress that is less than the failure-level stress - we are upping the loads 40% to 70% and designing to theoretical failure. Where is the 60% increase on wind, if the wind map says 150 and the hurricane was 150 or more? I could understand the map saying 210 mph while never seeing more than about 170, but that is not what I am seeing. I am not seeing the factored-up wind load. What am I missing? I do not see the factor of safety at all for wind. Is the factor of safety in the 33' above-ground measurement, is it in the coefficients, or in some combination I am not aware of? It's a game of "Where's Waldo" I can't seem to win.

The only thing left is that the "safe design" is intended to be for a wind load that is MUCH less than 150 mph, like 115 mph on the hurricane coast, and the 150 mph is the 60% overload. If that is the reason, then the probability concept really sucks unless you live inland.
 
I am not seeing the factored-up wind load. What am I missing?
The previously factored-up wind load was based on a 50-year mean recurrence interval. So, statistically, we designed the building for the wind we expected to see once during the service life of the structure. Remember, when we did that, wind was 1.0 in the ASD load combinations; 1.6 was for LRFD. And that's fine, as long as God gets the memo to only throw that storm at a building once in its life, and never anything stronger. That's not quite how it works, though.

So now we look at a 300-year MRI for risk category 1 buildings, a 700-year MRI for risk category 2, a 1700-year MRI for 3, and a 3000-year MRI for 4 (as of 7-16, anyway). Since the statistically averaged time between events has gone up by anywhere from 6x to 60x (and the storm intensity along with it, though not linearly), these are the "ultimate" loads for design. So now, rather than factoring up to LRFD, we factor down to ASD.

And calling the old method "service" level loads is misleading. Service-level wind loads are 10-year MRI, not 50, unless you're looking at a critical or sensitive structure that needs stricter limits or better performance over a wider range of conditions. In the past, this was approximated by multiplying the 50-year MRI wind load by 0.75; with a 700-year MRI it can be approximated by multiplying by 0.42. Or you can use the 10-year MRI maps and get a more accurate look at it.
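Putting rough numbers to those conversions (the ultimate-level pressure is assumed; the 0.6 and 0.42 multipliers are the ones described above):

```python
# Rough sketch with an assumed ultimate-level wind pressure, using the
# approximate multipliers discussed above: 0.6 to get from the ~700-year MRI
# (ultimate) load down to the ASD level, and ~0.42 to approximate a
# 10-year MRI serviceability-level load.

p_ultimate = 40.0   # psf, assumed 700-year MRI (strength-level) pressure

p_asd = 0.6 * p_ultimate              # ~24 psf, ASD-level pressure
p_serviceability = 0.42 * p_ultimate  # ~17 psf, approximate 10-year MRI pressure

print(f"Ultimate (700-yr MRI):   {p_ultimate:.1f} psf")
print(f"ASD (0.6W):              {p_asd:.1f} psf")
print(f"Serviceability (~10-yr): {p_serviceability:.1f} psf")
```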
 
What I have done to avoid the confusion: in my mind, the load combinations become D+0.6W, etc., and D+0.7E, etc.
 
And that's fine, as long as God gets the memo
Thanks for the clarity. Unfortunately, your clarity adds to the confusion, but I do understand your explanation. Now we factor up sometimes, but factor down other times. Hmmmm, ASD is looking better and better. ASD goes one direction and there are not many "factors for other factors".

And God definitely did not get the memo on hurricanes, just like he did not get the memo that all the 1-for-1 bets in roulette are supposed to hit 47% of the time - practically every other spin - but NOOOOOOOOO, Black hits 20 times in a row when I am betting Red, and Odd 20 times in a row when I am betting Even.
 
I'll admit I don't have a deep understanding of LRFD (like some here). However, if LRFD is truly the more accurate/consistent method, I assume the difference we're talking about compared to ASD is still minuscule. I don't recall ever seeing a design example using both approaches that resulted in appreciably different solutions. If ASD were an inferior method, then I assume there simply wouldn't be an ASD option, yet every material except concrete seems to allow for both methods.

If LRFD were more accurate by, let's say, 5% (I don't really know, just making this up), then based on reading some of the posts here, it seems this small benefit would be more than outweighed by the overall code complexity/confusion and the resulting human error (which I doubt the LRFD folks are factoring into their statistical models). Also, is anybody outside the PEMB and prefab truss industries optimizing their structures to within 5% of utilization anyway? I've seen designs like that which are a total mess, where floor plans have 30 different steel beam sections and zero repeatability.

For me, I just want simple methods that are consistent. If something more confusing is only slightly more accurate, then it's not better. Even NASA engineers have screwed up rather badly from mixing different unit systems.
 
Now we factor up sometimes, but factor down other times.
But that's how it was before, too. We factored down for serviceability (0.75), didn't factor for ASD, and factored up for LRFD (1.6).

Now we factor down a lot for serviceability (0.42, or do a separate wind calc if it makes sense to do so), factor down for ASD (0.6), and don't factor for LRFD. So...actually...we only factor down now ;)

We're really not doing any more or less fiddling with the numbers, we're just fiddling a little differently.
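Laid out side by side (a sketch, normalized to each era's mapped load and using the approximate factors described above), the two schemes look like this:

```python
# Side-by-side sketch of the two factoring schemes described above, each
# normalized to an assumed base pressure of 1.0 from that era's map.
# Old maps were ~50-year MRI; current maps are ultimate level (e.g. 700-year
# MRI for Risk Category II).

old_scheme = {"serviceability": 0.75, "ASD": 1.0, "LRFD": 1.6}   # factors on the 50-yr map
new_scheme = {"serviceability": 0.42, "ASD": 0.6, "LRFD": 1.0}   # factors on the ultimate map

print(f"{'check':<16}{'old factor':>12}{'new factor':>12}")
for check in old_scheme:
    print(f"{check:<16}{old_scheme[check]:>12.2f}{new_scheme[check]:>12.2f}")

# Every "new" factor is <= 1.0: the fiddling didn't go away, it all just
# became factoring DOWN from an ultimate-level map.
```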
 
It seems to me every discussion around this mostly includes people that fall into 2 camps

I see LRFD as dressed up ASD. Strip off the statistical costume and it’s largely the same safety buffer for most structures. ASD was less overt about the reliability stats, but the reliability was baked in all the same. People forget LRFD was designed to mirror ASD reliability for most cases. They’re not fundamentally different animals.
 
