Firm Recruiting Engineers to Train Generative AI Models

NOLAscience (Structural)
I have a LinkedIn account but only log in about twice a year. While searching for info about another person, I found that a link to the person's LinkedIn profile required me to log in. While I was there, I noticed the latest PM to me, from someone who did not provide his last name, only the initial "A." He was promoting an offer to apply for a gig providing human feedback to a generative AI model. Key phrases included "experienced ... Engineering expert", "providing human feedback", "lend your expertise", and "ranking domain-specific responses". The offer was for $30-$50 an hour, remote. For the record, I am a licensed PE in Louisiana.

I can't imagine any scenario in which helping a generative AI model to improve its output would be ethical. Do you agree? Thoughts?

Is there any way to stop this sort of dangerous model from being used by the public, even if the AI model provides copious disclaimers stating, "Not to be used for..."? Certainly no permit office would accept an AI model as EOR, but what about smaller projects trying to build without permits?
 
I wouldn't do it. That being said, most engineering fields are over-ripe for the kind of disruption that tech has brought to other industries like taxis and hotels.
 
geotechguy1 said:
most engineering fields are over-ripe for the kind of disruption that tech has brought to other industries like taxis and hotels

I don't see how it is possible. Everything I design is bespoke -- a one-off design.

And I didn't say I was considering the gig. What I am asking is whether it is ethical to assist AI in modeling the practice of engineering.
 
Is anything unethical until the AI's results/calculations are used without an upfront disclaimer that a non-human analysis and design was involved? AI is going to touch any and all disciplines, regardless of how bespoke. Major firms and industries are using FEA modeling and analysis as the front-line technique rather than an edge-case analysis. Certainly, many engineers will pass on training AI models/algorithms, but there are many others who may need the money and will work on the training even though, in the long run, it may lead to a loss of opportunities. The AI Pandora's box has been opened...
 
NOLAscience said:
What I am asking is whether it is ethical to assist AI in modeling the practice of engineering.

It's not even clear there's any AI that can do meaningful design calculations on its own; generative AIs still can't figure out how many fingers to draw on a human hand half the time.

I don't see an ethical issue; is it any different from helping someone with a design textbook? People misuse/abuse knowledge from books all the time; we get lots of posters who think they've developed perpetual motion just because they read some article about motors and weights.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
@NOLAscience: Perhaps you need to cut back on the caffeine. You read a random solicitation from an anonymous source on the internet, applied your own opinions and biases, took several unwarranted leaps of logic and concluded that something must be done to stop this!

Asked: Is there any way to stop this sort of dangerous model being used by the public, even if the AI model provides copious disclaimers stating, "Not to be used for..."?

Answered: Certainly no permit office would accept an AI model as EOR, but what about smaller projects trying to build without permits?

Existing regulations already define (or they should) what needs to be provided to the AHJ for review and approval.

Existing regulations already address (or they should) unpermitted construction.
 
I agree with NOLAscience that there is a discussion to be had here about the ethics of "AI" in engineering, but more specifically, in this case, there's a discussion to be had around the ethics of an engineer using his/her expertise to train this AI model. I disagree with MintJulep's characterization of the OP - I don't get a panicked tone bent on stopping this at all. A disagreement with it, sure, but it doesn't warrant a bold exclamation that was never made.

AI, as it stands now, is not much more than a glorified (and, in my experience, largely inaccurate) search engine. But that will quickly change. New models, new methods, new ideas will be coming faster and faster and it will evolve - not in the Skynet sense of evolving, but through the determined advance of technology. In that it is no different than most critical technologies that have developed in the last 100 or so years.

The thing that will likely differentiate this from the other examples given - such as a textbook or FEA - will be barriers to entry. A book must be read and, on at least some level, understood. It takes a commitment of time to find an example, swap out the numbers, and recreate it. But then, what do you do with it? You have some chicken scratch on a sheet of paper.

I don't think FEA really fits the argument that NOLAscience is trying to make. That software is quite expensive; I wouldn't expect many people to drop $3k+ on even a simple 3D modeling program, much less the tens of thousands some of the advanced FEA packages cost, and then learn a program that many professionals can't even figure out, just to try to do their own engineering. But... right now, there are free programs available online to design wood beams, columns, and other framing members. Anyone can access one, make a free account, and start designing. Even so, you have to have some clue about what you're doing. The software has some 'guardrails' to help ensure people aren't being overly stupid - it asks a bunch of questions to set up the project and then automatically applies typical loads to members. You can remove them, of course, but at least they're there. This creates a professional-looking output that can be submitted to an AHJ. And many will accept it.
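To put the 'guardrails' idea in concrete terms, here's a toy sketch (Python; the function name and defaults are invented for illustration, though 40 psf live / 10 psf dead are typical residential floor values):

TYPICAL_FLOOR_LIVE_PSF = 40.0  # common residential floor live load
TYPICAL_FLOOR_DEAD_PSF = 10.0  # common light-frame dead load allowance

def default_line_load_plf(tributary_width_ft,
                          live_psf=TYPICAL_FLOOR_LIVE_PSF,
                          dead_psf=TYPICAL_FLOOR_DEAD_PSF):
    """Apply typical area loads over a tributary width unless overridden."""
    return (live_psf + dead_psf) * tributary_width_ft

print(default_line_load_plf(6.0))  # 300.0 plf for a 6 ft tributary width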

I have no real concerns about LLMs as they exist now. They can't figure out math problems - they struggle with the language they are supposed to imitate. But I can see where somebody could put together a software package like the one I'm talking about, and then apply an LLM interface trained by engineers to understand how to interact with the program. So all that time spent learning how to use the program is gone. "BeamAI, design a beam that's 6' long to hold up my attic." BOOM. It spits out a calc sheet that may actually be correct for the most generic of situations and that some AHJs may actually accept. It doesn't know that you have a water heater sitting above it or that there's a post supporting your ridge beam that lands in the middle of it.
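A toy sketch of that architecture (Python; parse_prompt and size_beam are hypothetical stand-ins, not any real product) - the language layer only extracts parameters, and a deterministic routine does the arithmetic with generic defaults:

import re

def parse_prompt(prompt):
    """Stand-in for the LLM layer: pull a span length out of free text."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*(?:'|ft|feet)", prompt)
    if not match:
        raise ValueError("could not find a span in the prompt")
    return {"span_ft": float(match.group(1))}

def size_beam(span_ft, load_plf=300.0):
    """Stand-in for the deterministic engine, with a 'generic' default load."""
    max_moment_ftlb = load_plf * span_ft**2 / 8  # simple span, uniform load
    return f"M = {max_moment_ftlb:.0f} ft-lb; pick a section for that"

params = parse_prompt("BeamAI, design a beam that's 6' long to hold up my attic")
print(size_beam(**params))
# Nothing here knows about the water heater or the ridge post --
# exactly the conditions a generic default can't see.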

So I think the biggest ethical question is this: what will this company do with the model they generate? As geotechguy1 says, our industry is ripe for disruption of some kind. I'd love it if I could feed an architectural drawing along with some basic instruction into an AI-based system that could then spit out a Revit model or AutoCAD drawing of an initial structural layout. That has the potential to save even a small company tens if not hundreds of thousands of dollars per year. So if the focus of this company is to create a productivity tool for engineering firms to use, then I could get on board. Not for $50/hr, but I could get behind the ethics. I could also understand an educational angle - a sort of tutor for engineering students. But if this is meant to be an open engineering-for-all platform that can be freely used by anyone with no regard for the consequences, then I would tend to agree that there is an ethical concern.

Regarding the comments about permitted construction - keep in mind that in many (if not most) places in the US, permits are not required for houses or agricultural structures. On the coasts and in large cities, yes - there is generally a strong AHJ that controls permits and will punish people for building without one. But even so, plenty of people still do it. A big factor in whether or not somebody is going to do something like that is their perception of the consequences. If I need to take out a bearing wall and put in a beam, but I know nothing about how to pick one out, I'm likely to think twice and maybe go get somebody who knows what they're doing. But if I can ask AI, which tells me with such confidence that a single 2x8 is sure to do the job, then maybe I'll feel okay doing it. And don't dismiss this out of hand - if lawyers can stake their reputation and license on AI by not checking the output and presenting an AI hallucination as actual case law, with quotes from fake judges, in a real court, your average DIYer could certainly find themselves in way over their head.

As engineers, I don't think we have a duty to protect people from themselves. But I do think we have a duty to ensure our knowledge and expertise is used in a responsible fashion. If I know there's a good chance my work is going to be used in a way that could enable a dangerous or hazardous condition, I have a responsibility to either not get involved, or get involved in a way that prevents that outcome.

 
phamENG said:
As engineers, I don't think we have a duty to protect people from themselves. But I do think we have a duty to ensure our knowledge and expertise is used in a responsible fashion. If I know there's a good chance my work is going to be used in a way that could enable a dangerous or hazardous condition, I have a responsibility to either not get involved, or get involved in a way that prevents that outcome.

Are the AI people willing to take the responsibility that a professional engineer does?

If you are a professional engineer, are you willing to stamp and seal work done by an AI program?

--
JHG
 
drawoh said:
Are the AI people willing to take the responsibility that a professional engineer does?

No.

drawoh said:
If you are a professional engineer, are you willing to stamp and seal work done by an AI program?

I am a PE, and the answer is: it depends. Are the results auditable and verifiable with a known underlying methodology? If so, then it's not much different than applying my seal to a design where I used any number of engineering software packages (so long as I'm in the "driver's seat" from the inception of the prompt to the verification of the output). If it's a black box with unpredictable variability and unreliable results, then no.
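As a minimal sketch of what 'auditable and verifiable' means to me in practice (Python; the section modulus and allowable stress below are placeholder numbers, not from any catalog or code): whatever the tool proposes, the governing check gets re-derived independently.

def verify_bending(span_ft, w_plf, section_modulus_in3, allowable_psi):
    """Independent closed-form check of a proposed simply supported beam."""
    m_ftlb = w_plf * span_ft**2 / 8             # max moment, uniform load
    fb_psi = m_ftlb * 12 / section_modulus_in3  # extreme-fiber bending stress
    return fb_psi <= allowable_psi

# Placeholder proposal from the 'black box'; if it fails the hand check,
# the seal stays in the drawer.
print(verify_bending(span_ft=12.0, w_plf=300.0,
                     section_modulus_in3=21.4, allowable_psi=1000.0))  # False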

 
I think there's some over-reacting here. AI is a tool, just like your calculators or your FEA software; as such, the EoR is still wholly and fully responsible for any design/calculation, so you'll need to be confident in the AI design to the same degree you're confident in your normal analytical tools' output if you don't want to risk design failure.

That level of confidence should be a high threshold to meet, if you're a diligent engineer.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
IRstuff - the concern isn't about tools to be used by engineers; it's the creation of tools to be used by non-engineers that give them a false sense of confidence.

RenHen - if that question is directed at me, then the answer is no. I'm not aware of anything public that claims to be "AI for structural engineering."
 
phamENG said:
it's the creation of tools to be used by non-engineers that give them a false sense of confidence.

I get you, but that's hardly a new phenomenon; I was a junior engineer 40 years ago when we got new simulation tools that made me think I could engineer certain circuits. LOL!!

It's no different than people going to doctors and telling them exactly what's wrong with them because they Googled their symptoms and found their disease. Should we have stopped the development of Google because of that misuse? If not Google, there were tons of others that could have filled the niche: AltaVista, WebCrawler, Yahoo, Bing, to name a few.

If the effects of AI are, on the whole, beneficial, then there's nothing to be done except to continually warn people that using AI does not make them engineers. It's bad enough that there are bad engineers who think they can engineer things, and "inventors" who have invented some wonderful thing but didn't actually engineer the product and remain blissfully unaware of how stupid their idea was.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
I think the most immediate danger is that of reports, where AI can write grammatically correct and apparently logical summaries which are nonetheless incorrect. Because they are well written, there is a tendency for the reviewer to fail to notice they are being led up the garden path. Here's Mozilla predictably sticking the boot into Google's AI summaries.
"Are the AI people willing to take the responsibility that a professional engineer does?"

The software you trust has an EULA. I suggest as a PE you read it. The software I use to model potentially lethal events says basically "we've written this stuff and are not liable in any way shape or form for the results you get, even if the error is due to an error in the code or the assumptions behind the physics of the program", but in a more lawyerly fashion. Because this stuff can end up in court, we have a rather long series of tests and calculations demonstrating that a particular model is behaving sufficiently like the real world to be useful, and have a suite of standard tests to make sure the new version of the software gives the same answers as the old one (or at least we think we understand why they are different).
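A stripped-down sketch of what such a regression suite looks like (Python; the case names, golden values, and tolerance are invented for illustration):

import math

GOLDEN = {"frontal_56kph_peak_g": 42.7, "side_50kph_peak_g": 38.1}

def check_against_golden(new_results, rel_tol=0.01):
    """List the cases where the new build drifts from the golden answers."""
    return [case for case, old in GOLDEN.items()
            if not math.isclose(new_results[case], old, rel_tol=rel_tol)]

drifted = check_against_golden({"frontal_56kph_peak_g": 42.8,
                                "side_50kph_peak_g": 39.9})
print(drifted)  # ['side_50kph_peak_g'] -- understand why before trusting it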



Cheers

Greg Locock


 
IRstuff said:
but that's hardly a new phenomenon

This feels somewhat shortsighted. While aspects are certainly familiar, this is starting to break new ground. There will be new issues or new takes on issues that haven't been previously considered or encountered.

I never said AI development should be stopped. That's foolish. I'm simply participating in an argument over the ethics of a professional engineer training a program that has a high probability of being used by non-engineers. I think it hinges on what exactly is being developed. If it's an educational tool, that's fine. If it's a program for generating real world designs, I'd want to know what sort of safeguards there will be. If the stated goal is to somehow "democratize engineering", then I'll give it a hard pass, because that's not something that I think can be safely done with AI.
 
phamENG said:
I never said AI development should be stopped. That's foolish. I'm simply participating in an argument over the ethics of a professional engineer training a program that has a high probability of being used by non-engineers. I think it hinges on what exactly is being developed. If it's an educational tool, that's fine. If it's a program for generating real world designs, I'd want to know what sort of safeguards there will be. If the stated goal is to somehow "democratize engineering", then I'll give it a hard pass, because that's not something that I think can be safely done with AI.

What is the difference between this AI and the FEA that is attached to 3D CAD software? This is not quite the same problem, as I have not seen discussions about the ethics of writing analysis software.

There is an issue of who is allowed to use the resulting software. I have worked in the past in a fairly unprofessional environment where people seemed to regard the FEA as some sort of magic box that knows the answers to our questions. You people with advanced degrees don't get into these discussions, or at least you can close them off easily. I am a technologist with a three year diploma.

--
JHG
 
I would say the difference, ostensibly, is ease of access and use. Put something even as "simple" as Risa3D in front of somebody with no experience and it'll be a month before they can do anything with it. Compare that to being able to give a plain-language prompt to an AI program that can then give you a black-box answer.

Garbage in, garbage out will always be a problem. What I'm concerned about is a system that gives its answers in a way that seems confident in their accuracy. My analysis programs don't do that. They just give an answer. The AI systems out there now can lull a user into thinking they are talking to a real person. I'm no psychologist, but that seems to impact the way people interact with them. At least from my casual observations.

An AI-based tool to help automate tasks would be great. But I'm not sold on letting AI do any real engineering.
 
There is a massive difference between FEA, which solves closed-form equations based on user input, and a generative AI model such as an LLM, which won't even return the same answer twice and has no closed-form solution, but instead runs the input through a neural network to guess at what the answer is.
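A toy illustration of the distinction (Python; the beam numbers are placeholders, and random sampling merely stands in for an LLM's token sampling; it is not an actual LLM):

import random

def closed_form_deflection_in(w_plf, span_ft, E_psi, I_in4):
    """Max deflection of a simply supported beam: 5wL^4 / (384EI)."""
    w_pli = w_plf / 12.0   # load per inch
    L_in = span_ft * 12.0  # span in inches
    return 5 * w_pli * L_in**4 / (384 * E_psi * I_in4)

def sampled_answer(candidates, weights):
    """Stand-in for generative sampling: same input, possibly different output."""
    return random.choices(candidates, weights=weights, k=1)[0]

args = (300.0, 12.0, 1.6e6, 111.0)
print(closed_form_deflection_in(*args) == closed_form_deflection_in(*args))  # always True
print(sampled_answer(["2x8", "2x10", "2x12"], [0.2, 0.5, 0.3]))  # varies run to run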
 