StrEng007
Structural
- Aug 22, 2014
There is a theoretical concept I'm trying to straighten out, which came up when I was looking at the stress vs. strain curves for concrete cylinder compression tests. Before I get to that question, I'd like to briefly touch upon the diagrams we typically see for tensile tests.
First off, I understand the reasoning behind engineers using an "engineering" stress vs. strain approach to the diagrams as opposed to true stress vs. strain. It greatly simplifies the analysis when we use the original areas and original lengths in our calculations. However, the part of these typical "engineering" curves that is counterintuitive to me is the region where necking and fracture occur. For instance, consider ductile steel. Many texts state, with engineering stress/strain in mind, that the breaking strength of a material is less than the ultimate strength. However, when you consider that the actual cross-sectional area of the material is decreasing, the stress at the point of fracture will indeed be much higher, as indicated in a "true" stress vs. strain diagram.
The reasoning I've come to understand is that the engineering curves always represent the behavior in terms of the original area and length. Therefore, even as the true area decreases, the length increases, and the true stress increases, we compensate by always dividing by the original geometry. Doing so gives the graphical impression that the stress decreases prior to fracture, when it actually doesn't. *Unless the test unloads the specimen while holding the strain constant, in which case that would make sense.*
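To put that compensation in symbols (just the textbook identities, with $A_0$ the original area; the conversions assume constant volume and a uniform cross section, which only holds up to the onset of necking):

$$\sigma_{eng} = \frac{F}{A_0}, \qquad \sigma_{true} = \frac{F}{A_{actual}} \approx \sigma_{eng}\,(1 + \varepsilon_{eng}), \qquad \varepsilon_{true} = \ln(1 + \varepsilon_{eng})$$

So once the actual area shrinks faster than the load $F$ grows, $\sigma_{eng}$ falls even though $\sigma_{true}$ keeps rising, which is the picture described above.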
I've read many resources online but haven't found one yet that states simply and elegantly why the engineering stress-strain curve drops off before fracture. I suppose for tensile applications it doesn't even matter, because we use values back at the ultimate strength point of the curve, so there's no use getting all bothered about it... But here's the situation that came up for me:
If you look at a compression stress vs. strain diagram for concrete, you'll notice the same drop-off at a strain of roughly 0.003. That said, the inverse of the tension test is happening here: the concrete is shortening, and by Poisson's ratio the area is increasing. Yet we see the same drop-off? If the same idea as in the tensile test is applied, wouldn't adjusting the areas spike the stress? I'm not sure whether these tests, tension or compression, adjust the test force to control the strain through the process, meaning they actually decrease the force as the material enters its breaking/rupture state. That would explain why we see the drop-off for concrete.
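As a rough sketch of the area argument above (assuming, purely for illustration, elastic lateral expansion with a representative Poisson's ratio of about 0.2 for concrete):

$$A_{actual} \approx A_0\,(1 + \nu\,\varepsilon)^2 \approx A_0\,(1 + 0.2 \times 0.003)^2 \approx 1.0012\,A_0$$

On that arithmetic the cross section only grows by about 0.1% at a strain of 0.003, so correcting the area one way or the other barely moves the curve, which is partly why I suspect the drop-off has to come from the force itself.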
The reason I bring this up is that we design with a concrete strain of 0.003, and not 0.002, which is where the ultimate strength of concrete is typically reached. In order to use the 0.003 strain, I want to clearly understand what the stress is doing there. Why not use 0.002 for the strain of concrete, since it more closely corresponds to the ultimate strength?