Artificial Intelligence in Structural Engineering 5

I've successfully used it to quickly generate complex, multi-layered IF formulae in Excel. Just kept feeding it a word salad of what I wanted and we eventually got it tailored to work perfectly. Done in 5 minutes what would have taken me an hour to organize the logic in Excel myself.
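
For a sense of the kind of formula involved, here is a minimal sketch in Python of a hypothetical multi-branch rule of the sort a multi-layered Excel IF typically encodes. The thresholds, labels, and cell references are invented purely for illustration and are not taken from the post above.

```python
# Hypothetical example only: a tiered classification of the kind a
# multi-layered Excel IF formula typically encodes. Thresholds and labels
# are invented for illustration.

def classify_load_ratio(demand: float, capacity: float) -> str:
    """Return a rating based on the demand/capacity ratio."""
    if capacity <= 0:
        return "CHECK INPUT"  # guard against a zero or negative divisor
    ratio = demand / capacity
    if ratio <= 0.5:
        return "OK"
    elif ratio <= 0.9:
        return "REVIEW"
    elif ratio <= 1.0:
        return "NEAR LIMIT"
    else:
        return "OVERSTRESSED"

# The equivalent nested Excel IF, as one formula (A2 = demand, B2 = capacity):
# =IF(B2<=0,"CHECK INPUT",IF(A2/B2<=0.5,"OK",IF(A2/B2<=0.9,"REVIEW",
#     IF(A2/B2<=1,"NEAR LIMIT","OVERSTRESSED"))))

if __name__ == "__main__":
    print(classify_load_ratio(45.0, 100.0))  # OK
    print(classify_load_ratio(95.0, 100.0))  # NEAR LIMIT
```

Writing the branches out like this and then collapsing them into the nested IF is exactly the kind of logic-organizing bookkeeping described above.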
What's "it"? Several programs have been mentioned. LOL
 
"If it sounds too good to be true - it probably is." I think we'll be seeing a lot of advertisements touting that AI software will make structural engineering a profession where designers need to do little more that push a few buttons to design a structure. (Hopefully it won't be that easy. Because if it is, then structural engineers will be making salaries on par the money made by burger flippers at MacDonald's.) AI software will be powerful in the hands of experienced engineers. It will dangerous giving it to inexperienced engineers. Call me cynical, but I expect AI software will be given primarily to inexperienced engineers. If you can't manually design a structure without a computer, you should not be designing one with a computer. (Although computers greatly enhance the productivity of experienced engineers.)
 
My concern is for the long term state of technical knowledge in the profession. For decades, firms have employed and developed a lot of new guys. That's a big part of how knowledge is spread down the line. There's probably a critical amount of new blood that's required to keep the profession going. If these tools cause a 5x drop in the number of required new guys, that might cause us to dip below that critical amount.

Currently, some percentage of knowledge is held by engineers, some by academics, and some by programmers. I could see that balance changing, so almost all technical knowledge is held by academics, programmers, and AGI. I don't know what terrible thing would happen, but that doesn't sound good.

This isn't like manual drafters going extinct. Manual drafters were never the keepers of the technical knowledge.
 
On the Biology of a Large Language Model was recently published, wherein the authors from Anthropic were able to track the internal processing of Claude 3.5, a large language model (LLM) similar to ChatGPT, etc. The bottom line is that LLMs, which are the current state of the art in AI, do no reasoning, although the process is more complicated than simple prediction of the most likely answer based on the tokens processed. Therefore, the LLM paradigm does no "thinking" and is solely dependent on what it was trained on.

As for the technical knowledge question, if the tool is sufficiently effective and reliable, then, sure, some knowledge will be lost, but that's true of a lot of tools that have come into existence in various disciplines. When circuit simulators became acceptable as tools for designing integrated circuits back in the '70s, old-school engineers lamented the loss of technical knowledge in future engineers, and yet, 50 years later, we're designing integrated circuits with billions and billions of transistors, which would not be possible if every subcircuit had to be analyzed by hand. For that matter, the physical layout of each subcircuit is automatically designed, and several generations of circuit layout designers have lost their professions and had to find other work.
 
The bottom line is that LLMs, which are the current state of the art in AI, do no reasoning

Saying LLMs “do no reasoning” isn’t accurate. They don’t use simple logical operations, but they consistently handle tasks that clearly require what we’d call reasoning in any other context, e.g. logic problems, multi-step calculations, complex code, passing medical exams. It’s not doing it via symbolic logic; it’s statistical inference that produces similar outcomes, often with a level of reliability that surpasses our cherished “logical reasoning.”

If anything, the real takeaway from articles like that is that the internals are alien and messy; we don’t yet understand what’s going on inside these models. We struggle to map them to our familiar tools of understanding. That doesn’t make them dumb. Assuming they’re not reasoning just because we can’t fit their processes into familiar logical frameworks isn’t a useful conclusion. It simply reflects a narrow, parochial view of what constitutes intelligent reasoning.
 
That's a pretty big stretch of the term reasoning - by that definition, any algorithm engages in reasoning.
 
What exactly do you think “reasoning” is? What is it that humans are supposedly doing, that LLMs are not?

What’s the actual distinction you’re drawing?
 
Again, it predicts what the most likely answer is supposed to be, so it essentially lies when you ask it how it did the addition problem.

Moreover, it can easily lie with any math problem given even the most minute misdirection. In the example below, it is asked to verify the answer given in the prompt. On the left, its prediction of what the answer might be gets it correct; on the right, since it can't actually do math, it fakes the arithmetic and comes up with the wrong answer. So, if it's reasoning, it's doing a lot of BS.

[Image: side-by-side screenshots of the verification example described above, with the correct answer on the left and the incorrect answer on the right]
 
Again, it predicts what the most likely answer is supposed to be

Sure, it predicts the most likely answer, but to do that effectively, it has to build internal structure that reflects the underlying relationships in the problem. That’s the part the “it just predicts the next token” criticism always glosses over. It’s not random guessing; it’s generalisation and abstraction, refined to the point where it can predict the next token. In practice, that’s a form of understanding.

Saying “it’s just predicting the most likely answer” (which is a tired line by now) is like saying a kid who gets 100% on a test doesn’t really understand anything, they’re just “predicting” the right answers. At some point, if the predictions are consistently accurate across novel problems, you have to admit the kid, or the LLM, has learned something meaningful, even if they're not doing it in a way we particularly like.

Does it get some maths problems wrong? Are the answers sometimes biased or “motivated”? Absolutely—just like humans. Reasoning isn’t always clean, even for us. If anything, we’re worse. BS is rife in human reasoning in my experience.
 
What exactly do you think “reasoning” is? What is it that humans are supposedly doing, that LLMs are not?

What’s the actual distinction you’re drawing?
That's a philosophical question. But by your definition, even a calculator does reasoning.
 
There is a concurrent session on AI in structural engineering happening at the ASCE Structures Congress, I believe today or yesterday. I wonder what has been said on the topic there.

I've heard through the grapevine about the use of AI to identify deficiencies in bridge infrastructure: the inspector takes a photo and feeds it into a model, and it locates areas of concern, kind of like how AI is being used to identify cancer in X-rays. There is also the use of drones to map out these large structures digitally.

It does not seem that far-fetched to believe that in the next decade, an engineer could take a camera into a damaged structure, map it out, and have a model locate (even in real time) areas the engineer should pay close attention to, and come up with a crude engineering plan like an Xactimate sketch. How much further could it go? Could a model, working from a digital map, search for or generate relevant repair details or a coherent damage assessment report?
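
To make the photo-in, flag-out workflow concrete, here is a minimal sketch of the inference step, assuming a hypothetical pretrained defect classifier exported to TorchScript. The file name defect_classifier.pt, the label set, and the 0.5 confidence threshold are placeholder assumptions, not references to any real inspection product.

```python
# Minimal sketch of the inspection-photo workflow described above.
# Assumes a hypothetical pretrained classifier saved as "defect_classifier.pt";
# the labels and threshold are invented for illustration.

from PIL import Image
import torch
from torchvision import transforms

LABELS = ["no defect", "crack", "spalling", "corrosion"]  # placeholder classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def flag_photo(path: str, model_path: str = "defect_classifier.pt") -> str:
    """Classify a single inspection photo and flag it for engineer review."""
    model = torch.jit.load(model_path)
    model.eval()
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: [1, 3, 224, 224]
    with torch.no_grad():
        scores = torch.softmax(model(batch), dim=1).squeeze(0)
    conf, idx = torch.max(scores, dim=0)
    label = LABELS[idx.item()]
    # Anything that isn't clearly "no defect" goes to the engineer for review.
    if label != "no defect" and conf.item() > 0.5:
        return f"REVIEW: possible {label} ({conf.item():.0%} confidence)"
    return f"No areas of concern flagged ({label}, {conf.item():.0%})"

# Example usage (file name is a placeholder):
# print(flag_photo("deck_underside_037.jpg"))
```

A deployed system of the kind described would localise the damage within each image (object detection or segmentation) and tie the flags back to the digital map of the structure, but the basic loop is the same: photo in, areas of concern out, engineer decides.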
 