Use of AI for learning and practicing structural design

strawhats2000 · Industrial · Jan 23, 2012
Hi All,
I would like to get your feedback on the best AI tools, or applications with AI integrated, that can be used to learn and practice structural detailing of elements like slabs, beams, columns, staircases and foundations to BS or EN codes in reinforced concrete. From some preliminary chatting with an AI bot, I was told that Civils.ai, Autodesk Robot Structural Analysis Professional and Tekla Structural Designer have BS 8110 and AI integrated, and contain some comprehensive training materials. I would like your advice as to which software is most user-friendly and best for a beginner, with the least time-consuming learning curve and the best adaptability. Perhaps bordering on high imagination, I was thinking of a program that can import an AutoCAD layout and elevations and calculate loads, position columns, beams, etc., and even calculate and detail the rebar. Is that possible, or nearly possible? Thanks.
 

I have just spent two hours trying to get ChatGPT 4.0 to correctly analyse a pinned beam. No. Run away, get your textbooks.
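If nothing else, the textbook hand-check is only a few lines of script. Here is a minimal sketch, with made-up illustration numbers (not from any real problem), of the closed-form results for a simply supported beam under a uniform load that you would verify any AI output against:

```python
# Closed-form textbook checks for a simply supported ("pinned") beam
# under a uniform load. All input numbers are made-up illustration values.
w = 2.0e3    # uniform load, N/m
L = 6.0      # span, m
E = 200e9    # Young's modulus, Pa (steel)
I = 8.0e-5   # second moment of area, m^4

delta_mid = 5 * w * L**4 / (384 * E * I)   # midspan deflection
M_max = w * L**2 / 8                       # maximum (midspan) bending moment
R = w * L / 2                              # reaction at each support

print(f"midspan deflection = {delta_mid * 1000:.2f} mm")
print(f"max moment = {M_max / 1000:.1f} kN*m")
print(f"reactions = {R / 1000:.1f} kN each")
```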
 
How to Use ChatGPT to Ruin Your Legal Career

I am an aircraft technology geek, and I have just asked ChatGPT about the Bristol Centaurus engine.

The Bristol Centaurus was a British radial aircraft engine designed and produced by the Bristol Engine Company in the late 1940s and early 1950s. It was part of the radial engine family and was intended for use in large, high-performance aircraft, primarily military planes.

Key Features:
Type: 18-cylinder, air-cooled radial engine.
Power Output: It produced around 2,000 to 2,300 horsepower (depending on the version).
Displacement: It had a displacement of 49.2 liters (about 3,000 cubic inches).
Configuration: It was an 18-cylinder, two-row radial engine with three banks of six cylinders, arranged in a double-row radial pattern.
Fuel: It used aviation gasoline (avgas) as fuel.
History and Use:
The Centaurus engine was developed to power large aircraft, especially bombers and transport aircraft, during and after World War II.
Its main applications were in post-war aircraft, including the Bristol Beaufighter, Avro Shackleton, and Supermarine Seafang, as well as some experimental aircraft.
Notable Aircraft:
Avro Shackleton: A maritime patrol aircraft, which used Centaurus engines for long-range surveillance missions.
Supermarine Seafang: A naval fighter aircraft that was powered by the Centaurus engine.
Performance and Legacy:
The Bristol Centaurus was known for its smooth operation and reliability. However, by the mid-1950s, it began to be superseded by more modern jet engines.
Despite its relatively short operational lifespan, it was a significant achievement in piston-engine technology and was one of the last successful large radial engines used in military aircraft.
In Summary:
The Bristol Centaurus engine was a powerful and reliable radial engine used during the mid-20th century, most notably in military aircraft. Its large displacement and horsepower made it well-suited for heavy-duty applications like bombers and transport planes, though it eventually gave way to the rise of jet engines in the post-WWII aviation era.
It is fun to go through this thing and pick out mistakes. The engine's displacement was 56 litres. The Avro Shackleton and Supermarine Seafang both used versions of the Rolls-Royce Griffon engine. There is no mention of the Centaurus' controversial use of sleeve valves. Also, there is no mention of the awesome Hawker Tempest II and Sea Fury single-engined fighters.

Would I trust ChatGPT to write a technical article? No way.
 
When it comes to technical problems, these AI* systems are like the annoying guy that won't stop talking at a party.

You give them a base input, and they regurgitate a stream of text based on other examples that semi-sorta-kinda sound similar to what you are doing. But they don't actually know what you are trying to achieve, and even if they did, they wouldn't actually know how to solve it.

It's like that guy at the party who thinks he knows everything about a topic, while an ACTUAL expert sits quietly off to the side, spotting one thing after another that is wrong, even if the other people standing there don't realize it.

These "AI" systems will always give you a response, so if they don't know, they start filling in the gaps with random trash.

As the engineer, it would be up to you to verify every single piece of info that it gives you. That said, even if you are trying to verify, it still wouldn't be that hard for it to lead you astray without you realizing it. You are better off starting from scratch on your own. Mechanics always talk about hating having to fix a vehicle after someone else already tried. If it comes to you first, you know what you have and what you don't have. If someone else touched it first, it's hard to tell what you are actually working with and what is the left over mess from the last guy.

*The issue with these "artificial intelligence" systems is that they aren't actually intelligent. They don't *know* anything. They are just refined algorithms combined with a web browser that go and pull data points and compress them into what the system thinks is the right answer. You can ask me a very specific, technical question about a topic I know literally nothing about, and after 15 minutes and a couple of articles found on Google, I can regurgitate some plausible-sounding response to you. That doesn't mean I actually grasp your issue, or that my advice is anything more than an educated guess. I'm just throwing some jargon back at you from what I have seen based on some similar keywords. "AI" is doing literally that exact same thing, just faster and with a few more data points.
 
I have used an AI called POE, which I came to know through Quora. I have lost countless hours trying to track down some hard-to-find books with it. It is always a "YES", with countless positive answers and lots of references. Fortunately I have the books in PDF form to check. At one time I was looking for published information on RC pitched-roof detailing, and it gave me list after list of published books, even page numbers, clauses, etc., and yet none of them contained any reference whatsoever to pitched roofs. I believe the root problem is that the AI does not have access to the book content, but relies on what is written on the net about those books, and people say everything all the time. This is crucial, critical information, and the AI keeps spreading false information over and over. I also thought that perhaps this is limited to the free versions of these AIs, and perhaps the paid versions are way better. Perhaps there are engineers and architects who really use these programs on a day-to-day basis.
 
Don't ask AI about stuff you are trying to figure out. Ask about things you know about. Work out how accurate it is and to what extent you should trust it.

AI does a good job of recognising sentence structure and jargon. It does not get the facts right, and I wonder about the comprehension.
 
Do you mean AI doesn't know stuff, just aligns words? As bad as it may be, I reckon it is still more knowledgeable than a good chunk of the human population.
 
I get the strong impression it mostly just strings commonly linked statements together and then rewrites them in consistent grammar and spelling, which in itself is quite a feat. It doesn't 'know' anything, or use logic. https://en.wikipedia.org/wiki/Chinese_room is a very badly explained version of one argument about AIs.

The way it does beam bending in Matlab is by repeatedly applying formulae, then adjusting them in response to increasingly brusque corrections from the user. I did get it to simulate a bouncing ball correctly; that took 12 iterations.
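For reference, the core of that kind of time-stepping script is tiny; here is a minimal sketch of a bouncing-ball simulation (the timestep, restitution coefficient and drop height are assumed illustration values, not what ChatGPT produced):

```python
# Naive explicit time-stepping of a ball bouncing in 1D.
# g is standard gravity; dt, e and the initial height are assumed values.
g = 9.81     # gravity, m/s^2
e = 0.8      # coefficient of restitution (energy lost per bounce)
dt = 1e-3    # timestep, s

h, v, t = 10.0, 0.0, 0.0         # height (m), velocity (m/s), time (s)
while t < 10.0:
    v -= g * dt                  # gravity accelerates the ball downward
    h += v * dt
    if h < 0.0:                  # ground contact: reflect and damp velocity
        h = 0.0
        v = -e * v
    t += dt

print(f"after {t:.1f} s: h = {h:.3f} m, v = {v:.3f} m/s")
```

The hard part is not typing this in; it is recognising that the first version the machine hands you is wrong, and being able to say why.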
 
But presumably, having solved it once, it could solve a similar beam much more quickly. But could it solve a redundant beam any quicker?
 
I can't even get it to solve the original problem, never mind a redundant beam.

Here's the problem; I won't bore you with the many iterations of the script that ChatGPT created to solve it.

[Attached image: the beam problem]


Here's a link to the actual solved problem, https://engineeringpaper.xyz/9r6KzVsGjtrKvaYWQj5dw2, and I've included a PDF of the entire thing.
 

Attachments

  • Deflection-Simple beam w_overhang-non-uniform load.pdf
I have both "used" ChatGPT and, more often, "played" with it. The use has mostly been translations. For that it is very good, but it usually requires that you know both languages. ChatGPT can be very helpful in choosing the exact wording, but you need to check the end result. It does speed up the process, though.

I have also tried "discussing" things with it. If it is a subject that I don't really know, the answers are usually convincing and seem correct. But in those cases I can't fact-check; it just seems acceptable. In situations where I have in-depth knowledge, the conclusion is often the opposite: ChatGPT offers a lot of words but is often wrong. I have to ask repeated questions to steer it towards the correct answers. Sometimes I have to tell it the answer to my own question, and then it agrees.

I would consider it a very advanced web browser. Google gives us a list of references; ChatGPT "reads" them and summarizes. In some cases that helps; in other cases you need the reference itself.
 
When I read an interesting article on a web site, I check out the home page of the site, and I explore a bit to see what the site is all about. Lots of authors have interesting agendas. Does ChatGPT understand that nothing published on Infowars.com can be trusted?

I like Wikipedia as a source of information. It is not perfect, but there is a hard core of people trying to get the information correct. They provide references, and the talk pages can be very interesting. In university, you are not supposed to use any encyclopedia as a reference, but Wikipedia gives you all sorts of links to research.
 
Quote: "You give them a base input, and they regurgitate a stream of text based on other examples that semi-sorta-kinda sound similar to what you are doing. But they don't actually know what you are trying to achieve, and even if they did, they wouldn't actually know how to solve it."

That is precisely what an LLM is supposed to do; it takes your input and tries to predict the most likely answer based on what it has "learned", but not necessarily the most correct one. As such, there's no guarantee that the answer is correct or isn't a hallucination. Few LLMs have actual calculation capabilities, so asking for a precise and correct math answer is fraught; moreover, even within a session, something like ChatGPT doesn't learn from its mistakes, even when you point them out. It acknowledges your correction, but will continue to make the same mistakes. And even a simple concept like "north" has no meaning to an LLM; it's just a word in a sea of words.
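To make that concrete, the whole mechanism is, schematically, just repeated next-token prediction over learned word statistics. A toy sketch (vocabulary and probabilities entirely invented) of why the output is only ever "likely", never guaranteed correct:

```python
import random

# Toy next-token predictor. The "model" maps a context to the relative
# frequencies of continuations seen in training. All numbers are invented.
model = {
    ("the", "sun", "rises", "in", "the"): {"east": 0.7, "west": 0.2, "morning": 0.1},
}

def next_token(context, greedy=False):
    probs = model[tuple(context)]
    if greedy:
        return max(probs, key=probs.get)   # always the single most likely word
    # Sampling: the model can, and sometimes will, answer "west".
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(next_token("the sun rises in the".split()))
```

There is no fact about sunrise anywhere in there, only statistics about which word tends to follow which.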

The previous iterations of AI, in the form of "expert" systems, had slightly more promise, since those were supposed to be "trained" by subject matter experts (SMEs), but the algorithmic implementations were overly complex and prone to "breaking". What Greg did above is essentially an SME training an LLM, but that approach is not readily repeatable for every problem and every subject, since you have to be the SME, expert enough to see the problems and guide the LLM to the correct solution.
 
Am I right in thinking the currently available LLMs don't actually incorporate what they have learned from user interaction back into their model? For example if ChatGPT told you the sun rose in the West, and you corrected it, and asked the same question in a different session, what would it say?
 
That's my understanding; I think there are great risks in allowing unsupervised, uncontrolled training by the general public, since that's obviously rife with possibilities for contamination and suborning of the AI. I can well imagine that if the AI could be trained through public interactions, someone would spawn an army of accounts to sway the training a particular way.
 
2 ft of deflection on a 10 ft span?? I have no issue with the direction of deflection; positive or negative means nothing, it's a simple sign convention. But the magnitude of deflection is tremendous. Are you certain you have your unit conversions correct, or more specifically, are you certain ChatGPT has the unit conversion and parsing correct?
 
Deflection is in "mils" (not ft). The moment curve looks off at the RH end (looks slightly positive when it should be zero), but that may just be the plotting routine?
Deflection +ve down (hogging +ve) is a convention, not the common one, sure, but ... shouldn't the overhang deflect down too, like the midspan?
 
Yes, it doesn't go exactly to zero, because the dx value was 1 inch, which is far too coarse unless you interpolate, which I didn't. You can see the lack of resolution in the SF diagram at the RH support. That's a missing area of about 1/2 × 247 lbf × 1", about 10 lbf·ft, and the residual moment at x = 14 ft is -10.556 lbf·ft.

The RH end is relatively lightly loaded, as can be seen from the SF diagram, so the bulk of the tip deflection is due to the slope at the RH support.

I've sorted the sign conventions out. Overall, I'd say it took longer to get ChatGPT to get it mostly right using FEA than if I'd done it myself, because I had to check every line and then remove a bunch of bad logic, but it did a reasonable job with the plots and I didn't need to look anything up. The biggest correction I had to make was adding the integration constant in to get zero deflection at the RH support, a boundary condition ChatGPT blithely ignored. You can argue the toss about whether it is easier to debug a well-written albeit inaccurate script than to write it from scratch; I think the fact that the ChatGPT scripts run properly from the word go, i.e. without syntax errors due to missing brackets etc., is actually quite a big deal for a lousy typist like myself.
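For anyone who wants to experiment, the scheme being described (integrate load to shear to moment to slope to deflection, then add the integration-constant term so deflection is zero at both supports) fits in a short script. This is a minimal sketch, not the actual script from the thread: the supports at 0 and 10 ft with a 4 ft overhang echo the problem above, but the load and EI are invented illustration values.

```python
import numpy as np

def cumtrapz(f, x):
    """Cumulative trapezoidal integral of f over x, starting at zero."""
    return np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(x))))

ft = 12.0                        # work in inches
a, L = 10 * ft, 14 * ft          # supports at x = 0 and x = a; free tip at x = L
EI = 29e6 * 100.0                # E (psi) * I (in^4): assumed values
x = np.linspace(0.0, L, 14001)   # dx ~ 0.01 in (cf. the too-coarse 1 in grid)
w = 20.0 * x / L                 # non-uniform (triangular) load, lbf/in, assumed

R2 = cumtrapz(w * x, x)[-1] / a  # statics: sum of moments about the LH support
R1 = cumtrapz(w, x)[-1] - R2     # statics: sum of vertical forces
V = R1 - cumtrapz(w, x)          # shear diagram
V[x >= a] += R2                  # the RH reaction steps the shear diagram
M = cumtrapz(V, x)               # moment; should return to ~0 at the free tip
theta = cumtrapz(M / EI, x)      # slope, taking the integration constant as 0
y = cumtrapz(theta, x)           # deflection; y(0) = 0 by construction
i_a = np.argmin(np.abs(x - a))   # grid index of the RH support
y -= y[i_a] * x / a              # add the rigid-rotation term C1*x so y(a) = 0
# Convention here: EI*y'' = M with sagging M positive, so y is positive upward.

print(f"tip deflection = {y[-1]:.4f} in, tip moment residual = {M[-1]:.4f} lbf*in")
```

Rerunning with a much coarser grid shows the same sort of residual moment at the free tip that is described above.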

OTOH, if I hadn't known about FEA as an approach, ChatGPT would have kept leading me up the garden path, using canned equations and then bodging them. So IF I know how to solve a problem, then I'll get ChatGPT to rough out a script and fine-tune it myself - getting ChatGPT to fine-tune things is very time consuming and ends up in repeated bad solutions.

[Attached image: output plots from the script]
 
