Is AI risky for engineering work?

I have already seen this happen. For instance, I gave students a multiple-choice exam. A couple of them got in touch afterwards: they thought my answer key was wrong because an AI was giving them different answers (and, of course, they believed the AI).

5 Likes

You should teach them how to verify their answers.

Evidently. My point is: they believed the LLM even after seeing the instructor-supplied correct answer.

2 Likes

IMO, the problem might be intellectual laziness and not really the LLMs. There may be a tendency to convince ourselves that someone who sounds or acts authoritative is probably correct, because it saves us energy.

I use LLMs a good deal as efficiency tools, but the amount of nonsense they spout on tasks that are (by human standards) simple, coupled with their trademark authoritative tone, makes me wonder how many serious inaccuracies are making it into production systems because we are too lazy to verify everything 100%.

I think there are some similarities with blindly trusting answers from Stack Exchange, or, in the most extreme scenario, trusting everything we see in papers without trying to replicate it at even a very basic level. Unfortunately, I have had many instances where weeks of work were wasted because we trusted results from a paper without trying to verify them (analytically or numerically) before applying them.
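
To make "replicating at a very basic level" concrete, here is a minimal Python sketch of the kind of sanity check I mean. The two formulas are hypothetical examples (not from any actual paper or LLM answer): before trusting a quoted closed-form identity, cross-check it against a brute-force computation on small inputs.

```python
# Minimal sketch: cross-check a claimed closed-form formula against
# brute force before trusting it. Both formulas below are hypothetical
# examples chosen for illustration.

def sum_of_squares_bruteforce(n: int) -> int:
    """Direct computation: 1^2 + 2^2 + ... + n^2."""
    return sum(k * k for k in range(1, n + 1))

def claimed_formula(n: int) -> int:
    # As an LLM (or a misread paper) might state it -- subtly wrong.
    return n * (n + 1) * (2 * n + 2) // 6

def correct_formula(n: int) -> int:
    # The standard identity: n(n+1)(2n+1)/6.
    return n * (n + 1) * (2 * n + 1) // 6

for n in range(1, 50):
    assert correct_formula(n) == sum_of_squares_bruteforce(n)
    if claimed_formula(n) != sum_of_squares_bruteforce(n):
        print(f"claimed formula fails at n={n}: "
              f"{claimed_formula(n)} != {sum_of_squares_bruteforce(n)}")
        break
```

Note that the wrong formula happens to agree at n=1 (integer division masks the error), which is exactly why checking a single case is not enough.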

However, the fact that many non-experts would rather believe the LLM's results over demonstrable ones is disconcerting. I would say this is a matter of emotional trust above everything else. Something is making people feel more inclined to trust a result from an LLM. It might be the fallacy of "this is the super-advanced AI company that has taken the world by storm; there's no way their chatbot can be wrong".

4 Likes