Good article on ethical aspects of AI

Pay attention to the emerging confluence of quantum computing and ML. “Models mere mortals cannot possibly understand” are coming…


Very good article! I like it. Thanks for sharing; still reading…

The tools of yesterday might have required malicious intent by those who wielded them to wreak havoc, but today’s tools require no such thing.

👆 👍


Those models have been around for decades. A lot of reduced form or structural models that have become possible to estimate in the past few decades have 10³–10⁵ parameters, not something that is possible to “understand” in detail. We try to cope.

hard not to worry

Or people running a consultancy about these things would like everyone to think so. The word “nightmare” occurs 40 times in the article.

At least he is very upfront about it, cf:

businesses need to explicitly identify the risks posed by these new technologies as ethical risks or, better still, as potential ethical nightmares […] the likelihood of realizing ethical and reputational risks has massively increased […] business leaders are ultimately responsible

So, effectively: you need to hire consultants. Note that the author

has been a senior adviser to the Deloitte AI Institute, served on Ernst & Young’s AI Advisory Board

so this is not surprising.

I remain unconvinced about the nightmare scenario. But I am very certain that, eventually, some of my tax money will end up with companies who will kindly consult with governments about nightmares for an hourly rate I don’t even want to imagine.


Consultants or employees. It’s nothing new: we have data protection officers now, we have colleagues responsible for environmental impact, we have people who look after discrimination, and so on.

All these positions had to be fought for.

The author is quite concrete: he defines his term “nightmare”, or rather demands that each specific nightmare be defined as concretely as possible, on the premise that we all typically agree on what a nightmare is. Here the author is much better than others, in my opinion. I read the article as the author deliberately not writing about imminent doom, as others tend to do, which is where I quickly stop reading.

Just out…


I don’t like high-price consultants either, but: what is the alternative? Pretend the problems don’t exist? That those things cannot happen? And, that if there is an issue, some of the engineers will simply come up with a “technology” solution?


While there may be no true analogy to what is currently happening in the AI world, I’ve thought about it in terms of the invention of the automobile. It literally changed the world over time. But then the technology got better, cars went faster, crashed more, and killed/injured more people. Then came speed limits, drunk driving laws, seat belts, airbags, (skip ahead) rearview cameras, lane departure warnings, etc. It’s not always obvious what will go wrong with new technology, but eventually a solution will be found. So personally I think the nightmare scenarios are overblown, especially when self-driving cars can be defeated by small stickers. (and yes, I know that is an old article)


This is already happening in Europe for GDPR as most big companies (if not all) are not compliant. Then the business question is what is the level of risk that one faces.


Careful monitoring, and assessment of risks and mitigating strategies based on cost/benefit analysis, not fearmongering, hyperbole and drama.

All that talking about the AI apocalypse will achieve is people becoming desensitized to the whole issue in a few years, and then it will be difficult to address actual problems.

I am not sure I see the connection. GDPR is a relatively clear-cut set of rules: if you follow them, you are safe; if you don’t, you will get fined. Yes, initially companies (and people) were overreacting because the practice was not clear, but that was years ago.


That’s exactly what the article demands.

Yes, but the cost/benefit analysis should consider not only the company but society as well.

Well, admittedly, there is some

in the article, but still, I think it’s quite reasonable.

The question for me is: at what stage does it become a failure not to argue in drastic images? Should we still rely on cost/benefit analysis when it comes to the climate? For the AI complex, now may be the right time for appropriate action; I can imagine that someday in the future it may be too late.


Could what looks like hyperbole and drama today turn out to be a sizeable underestimate tomorrow?
I remember when Facebook started, some people were wondering about societal impacts and were being laughed at. Today we see mental illness, self-harm, and suicide rates edging up. Were those “fearmongers” of yesteryear right after all?


You are effectively asking if something is a zero-probability event. No, of course we cannot guarantee it.

But from a decision-theoretic point of view, a society needs to allocate its resources. Focusing on events just because they “could” happen will inevitably divert resources from other things. A reasonable decision-maker would rank these based on available evidence.

The standard move then is to increase the doom level, because even an improbable event is important if it could lead to something really, really bad. Hence the hyperbole. I understand the motivation, I just don’t find it convincing.
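The ranking logic, and the rhetorical move it enables, can be sketched as a toy expected-loss calculation. All risk names, probabilities, and loss figures below are hypothetical numbers invented purely for illustration:

```python
# Toy sketch: rank hypothetical risks by expected loss (probability * impact).
# Every number here is made up for illustration only.
risks = {
    "algorithmic bias in hiring": (0.30, 10),      # (probability, relative loss)
    "large-scale disinformation": (0.10, 50),
    "AI-driven human extinction": (0.0001, 10**6),
}

def expected_loss(risk):
    p, loss = risks[risk]
    return p * loss

# Inflating the loss term ("raising the doom level") lets a tiny-probability
# event dominate the ranking -- the rhetorical move described above.
ranked = sorted(risks, key=expected_loss, reverse=True)
for name in ranked:
    print(f"{name}: expected loss {expected_loss(name):g}")
```

With these invented numbers the extinction scenario tops the ranking despite its 0.01% probability, which is exactly why the hyperbole is tempting, and why the ranking is only as good as the evidence behind the probability estimates.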

If you truly care about these issues, there is a lot of low-hanging fruit in the policy space one could implement. Most of it consists of “traditional” public health measures (e.g. for suicide prevention programs, you can find summaries on the CDC website for the US, but the countries of northern Europe are considered to have the most effective programs).


I do not think it is that clear-cut once one factors in cloud computing, side projects, corporate opacity, and other less-than-ethical yet monetizable practices. I have a limited practical view of such matters, but given human nature and my observations, ensuring throughout an organization that the GDPR rules are clear in the first place can be challenging, as it can become difficult to maintain a consistent view of the data management side. Hence the GDPR consultant business model.

Secondly, a dangerous effect of regulation is that regulators choose (on what basis exactly is unclear) whom to investigate (and potentially fine, or not). My implication is that regulation has the potential to stifle innovation and protect monopolies (even while fining them). This applies in the AI case as well.


This is the key point. I typically ignore AI articles that discuss “extinction”, “apocalypse”, or similar because I don’t trust authors that use that kind of sensationalist language. And the population at large will absolutely tune out these kinds of articles eventually, even if they make valid points and have specific examples.

A good comparison is the discussion on climate change. How many “we’re all doomed” articles can one person really read on the subject before they just stop? A better tack would be to stop the “extinction” talk, tone down the rhetoric, and lay out a well-reasoned case. Otherwise it all just reads as click-bait that is best ignored.


I don’t know if this can be generalized. It sounds reasonable, but perhaps it is wrong anyway.

The Cold War and the threat of atomic warfare, for example, were not easy to tone down. And had that been done before 1989, are we sure it would have been better and would have prevented escalation? Nobody can tell.

The difference here is that we all agree an atomic strike is likely to bomb us, or what is left of us, back to the stone age.

And that is what this article is about: those who brought us these LLM AIs should invest some time and deep thought (which boils down to spending money) to spell out possible “nightmare” scenarios as concretely as possible. Risk analysis can start from there. The author argues for exactly what you want: away from general doom toward concrete “nightmares”, where a “nightmare” is “defined” as something we can all agree on, like the atomic scenario above, not something that stays vague. And the author specifies who is, and who is not, in charge of doing that. He demands these well-reasoned cases, which he calls “nightmares” for a reason, from the managers of those companies, now.

If that has been done and the result is that we cannot find anything “nightmarish” and likely enough, well, so much the better. But we cannot say “those doomsayers are to be ignored because of hyperbole” without any decent analysis of the possible impact. That has happened a bit too often in recent history.

Lastly, there seems to be some kind of gold rush in the AI business. I don’t know for sure, but it looks like one. Demanding a decent risk analysis in a field like that really does not stifle innovation; I cannot believe it would. These emerging technologies tend to produce billionaires and the highest-valued tech corporations. The lousy millions to be invested now in creating the author’s “nightmare” scenarios are peanuts.

All in all, the article is not as hyperbolic as you imply.


The OP article is indeed well-reasoned and largely avoids hyperbole, but the second one is pretty much the opposite, declaring a “risk of extinction” in 22 words. I get the idea of making a blunt statement and getting right to the point, but it again seems to take the wrong tack. For many, I would guess, “one bad apple spoils the bunch”, such that even reasoned arguments get lumped in with ones that verge on conspiracy.

To be quite literal, do the authors of that statement really think that AI could cause human extinction, i.e., every single human being on the planet will die because of AI? If so, I don’t buy it. If not and they just mean, “a lot of bad stuff could happen”, then don’t use the word “extinction” because it doesn’t help their argument, in my view anyway. And further, it just makes me ignore the rest of their argument because they’ve started with a false (in my view) hypothesis.


Ah, yes, good point. Actually I ignored the second one completely (because of hyperbole? 😉), so I didn’t realize that some statements might refer to it. Oh well, it’s always so difficult. With this last answer I will stop defending the OP article, simply because I believe we are all right in some way, from some point of view. There is no right or wrong yet.


Yeah, I totally agree that AI/ML can be used in (unintentionally) nefarious ways, such as the facial recognition software that was shown to be very biased. But balancing the seriousness of issues without sensationalizing them seems to be hard for some authors.


An article on ethics that doesn’t once mention normative ethical theories (which are virtue ethics, consequentialism, and deontology). The author refers to “ethics” as one thing, with universal understanding and agreement.

Yet many political conflicts revolve around which ethical theory is applied. Consequentialists will argue, “We should do (or allow) X, as this will make the majority of people happier.” The Deontologists reply, “You have no right to do (or allow) X; in fact, you have a duty to prohibit it.”