
Eng-Tips is the largest engineering community on the Internet



Firm Recruiting Engineers to Train Generative AI Models 4

NOLAscience

Structural
Nov 20, 2017
224
I have a LinkedIn account but only log in about twice a year. In searching for info about another person, a link to the person's LinkedIn profile required me to log in. While I was there, I noticed the latest PM to me from someone who did not provide his last name, only the initial "A.". He was promoting an offer to apply for a gig providing human feedback to a generative AI model. Key phrases included "experienced ... Engineering expert", "providing human feedback", "lend your expertise", and "ranking domain-specific responses". Offer was for $30-$50 an hour, remote. For the record, I am a licensed PE in Louisiana.
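The gig described above sounds like RLHF-style preference labeling: an expert compares candidate model answers and records which is better. As a minimal sketch (all names and the example prompt are hypothetical, not from the actual offer), "ranking domain-specific responses" might produce records like this:

```python
# Hypothetical sketch of expert preference labeling for a generative AI model.
# One record captures a prompt, two candidate responses, the expert's ranking,
# and a rationale -- the kind of data reward-model training typically consumes.

from dataclasses import dataclass, asdict

@dataclass
class PreferenceRecord:
    prompt: str       # the engineering question posed to the model
    response_a: str   # first candidate answer
    response_b: str   # second candidate answer
    preferred: str    # "a" or "b", chosen by the human expert
    rationale: str    # free-text justification from the reviewer

def label(prompt, response_a, response_b, preferred, rationale):
    """Package one expert judgment as a plain-dict training record."""
    if preferred not in ("a", "b"):
        raise ValueError("preferred must be 'a' or 'b'")
    return asdict(PreferenceRecord(prompt, response_a, response_b,
                                   preferred, rationale))

record = label(
    prompt="Size a simply supported beam for a 20 ft span, 1 kip/ft.",
    response_a="Use any W8; deflection rarely governs.",
    response_b="M = wL^2/8 = 50 kip-ft; pick a section with adequate Zx, "
               "then check deflection.",
    preferred="b",
    rationale="Response A skips the required moment and serviceability checks.",
)
```

Whether producing such records is ethical is exactly the question the thread debates; the mechanics themselves are just structured expert review.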

I can't imagine any scenario in which helping a generative AI model to improve its output would be ethical. Do you agree? Thoughts?

Is there any way to stop this sort of dangerous model being used by the public, even if the AI model provides copious disclaimers stating, "Not to be used for..."? Certainly no permit office would accept an AI model as EOR, but what about smaller projects trying to build without permits?
 

What I'm concerned about is a system that gives answers in a way that seems confident in their accuracy. My analysis programs don't do that; they just give an answer.

That's already OBE (overtaken by events); last year, a lawyer got severely sanctioned for a legal brief they submitted that was completely written by ChatGPT. Not only were there serious errors, but ChatGPT even invented case law citations that were completely fictitious.

So, abuse of AI, even by supposedly experienced professionals, is completely within the realm of possibility; one can only hope that AIs won't start abusing humans in the same way.


TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
Tempting as it is, I'll refrain from posting the "Old man yells at cloud" meme.

Various levels of generative design have been common for 20+ years in everything from home to vehicle design. CAD packages can convert 3D solids into dimensioned 2D prints, or, in the case of architectural packages, convert 2D floor plans and snow/wind loads into 2D/3D framing prints/models and optimized materials lists. GUIs are much friendlier; today you can do a lot of design simply by imprecisely pulling on surfaces, and AI will adapt surrounding parts/surfaces to standard dimensions. CAE can optimize endless design variants based on pass/fail and unrelated better/worse criteria. It's not uncommon for the industrial artist's tablet to be linked to parts standards/catalogs and CAE, so their sketches convert into 3D solid parts that are not only standard sizes but also meet engineering requirements. In many niches, design today is heavily AI, with engineering roles primarily being signoff, testing, and resolving the inevitable post-production issues.

The issue isn't whether creating and using AI tools is ethical; the issue is that many are unethically billing themselves as engineering professionals without using (never mind understanding) modern technology.
 
For the fee they are offering, I can imagine we will have less than stellar engineers training it - yay!
 
It is coming. I already get it to produce reports for me based on very rough drafts. You need to check it, of course, but it will get better and better, and do more and more, and need less and less oversight.

Like the military AI that is used to pick targets, the engineering AI will eventually become better at "guessing the answer" than those driving it.

 
Imagine if you had all the engineering designs and reports from a British consultancy that's been around since the 1800s, or from a big American firm like AECOM or Jacobs or Fluor over its whole history, and you set out to train an AI on exactly that data.

Current models are not really trained on engineering specific data and they're still at least somewhat useful.

Just wait till there's a general-purpose civil engineering AI that knows which consulting firms are *ahem* 'Linked up' with which other consultants / contractors / suppliers and seem to *ahem* bend the laws of physics to make sure their preferred "friends" get the job.
 
IRstuff said:
It's no different than people going to doctors and telling them exactly what's wrong with them because they Googled their symptoms and found their disease. Should we have stopped the development of Google because of the misuse? If not Google, there would have been tons of others that could have filled the niche, Altavista, Webcrawler, Yahoo, Bing, to name a few.

This situation IS different from Googling symptoms. I cannot purchase prescription medicines or perform surgery to treat myself (though I can buy supplements to attempt to treat or choose to ignore the symptoms based on what Dr Google says).

And, "Google" in this sense is just a general term for all search engines.
 
That's perhaps not a great example, because you can synthesise your own meds (good luck), and yes, people have done surgery on themselves and survived (good luck squared). Bear in mind that for all but the last 220 years of recorded history, medicine and surgery were entirely experimental sciences, making your average shade tree mechanic look like Christiaan Barnard and Alexander Fleming rolled into one.




Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
 
I don't understand the concern.

The public can already purchase and use/misuse engineering tools.

Regulators already exist as a safeguard to ensure a minimum, safe level of quality.

Ethical engineering already requires that human factors be eliminated via process and testing. A proven design is proven regardless of who or what designed it, AI or human. The real danger isn't AI, it's the many "engineers" who don't test anything, don't complete FMEAs, design by blindly following "standards" as if they were gospel or how-to manuals, rely on garbage-in garbage-out analyses, or otherwise employ hubris. They can't prove that their design meets requirements, nor which failure modes are most likely. Until prison sentences become normal for that nonsense, we have no room to criticize, because an AI following a standard process is far safer.
 
I can't imagine any scenario in which helping a generative AI model to improve its output would be ethical.

So you would shun any usage of generative AI in your own work, even if it makes you more productive and appear smarter?

Do you use CAD in your work?

How is making CAD better any more ethical?



TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! faq731-376 forum1529 Entire Forum list
 
I don't see anything inherently unethical in verifying an AI model as long as its training data was provided with consent and the verifiers are qualified.

I think people are right to question the use of the AI model after the fact, though. It is up to the end user to act ethically, but that's not always something you can rely on, so I suppose some barrier to entry/use is valid. If the product is a mass release that anyone anywhere can use, and is advertised as an "engineered solution" delivery tool, there are certainly going to be people taking undue confidence in its output and putting lives at risk. If it's something that needs a costly license and some know-how to get running, then I fail to see how it's any different than a standard design program like Risa3D or COMPRESS. Somebody could just as easily pick one of those up and put together a faulty but "verified" design by not considering all loadings (and users should be validating the output of such programs regardless).

It's a similar dilemma in the design of firearms. Are the manufacturers/designers responsible for damages caused by, and unethical actions perpetrated using, their weapon designs? Unless the firearm is exploding in the operator's hands or discharging on its own, I personally think no, but that's another can of worms. It goes to show why the engineering profession is regulated, though. It is up to the engineer to act professionally, hence the need to verify that they are qualified and can indeed act ethically, and to discipline them should they fail to do so.

Who knows, if AI makes it too easy to make a proper looking design/calculation maybe everything will require an engineering stamp to ensure an engineer actually looked at it.
 
