I agree with NOLAscience that there is a discussion to be had here about the ethics of "AI" in engineering, but more specifically, in this case, there's a discussion to be had around the ethics of an engineer using his/her expertise to train this AI model. I disagree with MintJulep's characterization of the OP - I don't get a panicked tone bent on stopping this at all. A disagreement with it, sure, but it doesn't warrant a bold exclamation that was never made.
AI, as it stands now, is not much more than a glorified (and, in my experience, largely inaccurate) search engine. But that will quickly change. New models, new methods, and new ideas will come faster and faster, and it will evolve - not in the Skynet sense of evolving, but through the determined advance of technology. In that sense, it is no different from most critical technologies developed in the last 100 or so years.
The thing that will likely differentiate this from the other examples given - such as a textbook or FEA - will be the barriers to entry. A book must be read and, on at least some level, understood. It takes a commitment of time to find an example, swap out the numbers, and recreate it. But then, what do you do with it? You have some chicken scratch on a sheet of paper.
I don't think FEA really fits the argument NOLAscience is trying to make. That software is quite expensive - I wouldn't expect many people to drop $3k+ on even a simple 3D modeling program, much less the tens of thousands that some of the advanced FEA packages cost, just to learn a program many professionals can't even figure out and try to do their own engineering. But right now, there are free programs available online to design wood beams, columns, and other framing members. Anyone can access one, make a free account, and start designing. Even with these, though, you have to have some clue about what you're doing. The software has some 'guardrails' to help ensure people aren't being overly stupid - it asks a bunch of questions to set up the project and then automatically applies typical loads to members. You can remove them, of course, but at least they're there. The result is a professional-looking output that can be submitted to an AHJ. And many will accept it.
I have no real concerns about LLMs as they exist now. They can't figure out math problems - they struggle even with the language they're supposed to imitate. But I can see somebody putting together a software package like the one I'm describing and then applying an LLM interface, trained by engineers, that understands how to interact with the program. Suddenly, all that time needed to learn how to use the program is gone. "BeamAI, design a beam that's 6' long to hold up my attic." BOOM. It spits out a calc sheet that may actually be correct for the most generic of situations and that some AHJs may actually accept. But it doesn't know that you have a water heater sitting above the beam, or that there's a post supporting your ridge beam that lands in the middle of it.
So I think the biggest ethical question is this: what will this company do with the model they generate? As geotechguy1 says, our industry is ripe for disruption of some kind. I'd love it if I could feed an architectural drawing, along with some basic instruction, into an AI-based system that could then spit out a Revit model or AutoCAD drawing of an initial structural layout. That has the potential to save even a small company tens if not hundreds of thousands of dollars per year. So if the focus of this company is to create a productivity tool for engineering firms to use, then I could get on board. Not for $50/hr, but I could get behind the ethics. I could also understand an educational angle - a sort of tutor for engineering students. But if this is meant to be an open, engineering-for-all platform that can be freely used by anyone with no regard to the consequences, then I would tend to agree that there is an ethical concern.
Regarding the comments about permitted construction - keep in mind that in many (if not most) places in the US, permits are not required for houses or agricultural structures. On the coasts and in large cities, yes - there is generally a strong AHJ that controls permits and will punish people for building without one. But even so, plenty of people still do it. A big factor in whether or not somebody will do something like that is their perception of the consequences. If I need to take out a bearing wall and put in a beam, but I know nothing about how to pick one out, I'm likely to think twice and maybe go get somebody who knows what they're doing. But if I can ask AI, which tells me with such confidence that a single 2x8 is sure to do the job, then maybe I'll feel okay doing it. And don't dismiss this out of hand - if lawyers can stake their reputation and license on AI by not checking the output and presenting an AI hallucination as actual case law, with quotes from fake judges, in a real court, your average DIYer could certainly find themselves in way over their head.
As engineers, I don't think we have a duty to protect people from themselves. But I do think we have a duty to ensure our knowledge and expertise are used in a responsible fashion. If I know there's a good chance my work will be used in a way that could enable a dangerous or hazardous condition, I have a responsibility to either not get involved, or to get involved in a way that prevents that outcome.