8 Comments
ArnoldF:

Non-cyber guy here. The most resonant scenario I have read (perhaps it is one of Dakara's articles?) is that AI will dramatically improve life for humanity. We are comfortable giving more and more authority to it. Daunting medical issues get resolved, all within short spans of each other; mysteries of the universe in mathematics are resolved and come quickly into focus. Then the next year mankind becomes enslaved by the very technology we gave ourselves over to. This is a very dark projection. It somewhat reminds me of Colossus: The Forbin Project. Colossus speaks: "You will learn to love me."

Dakara:

Yes, a possible outcome. Likely nothing but plot twists going forward.

Hartmut Straub:

All common risk assessment frameworks use some kind of measure of probability. In the NIST Guide for Conducting Risk Assessments (https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-30r1.pdf), this is called likelihood.

Your equation risk = unpredictability * capability corresponds to risk = likelihood * impact in NIST.

The unpredictability part of your equation is unclear in any case. Is it a measure of how likely the risk is, or is it the uncertainty/variance of the likelihood?

In mathematics, the risk function is the expected value of a loss function:

https://en.wikipedia.org/wiki/Loss_function#Expected_loss

Here we also have a probability measure (the NIST formula from above is a special case of a Bernoulli distribution with p as the probability of a loss/impact).
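A quick sketch of that reduction, with hypothetical numbers (not from the thread): under a Bernoulli outcome, the loss `impact` occurs with probability `likelihood` and is zero otherwise, so the expected loss collapses to the NIST product.

```python
# Expected loss over a discrete distribution of (probability, loss) pairs.
# For a Bernoulli outcome:
#   E[loss] = likelihood * impact + (1 - likelihood) * 0 = likelihood * impact,
# i.e. exactly the NIST risk = likelihood * impact formula.

def expected_loss(outcomes):
    """Expected value of a loss over (probability, loss) pairs."""
    return sum(p * loss for p, loss in outcomes)

likelihood = 0.1   # hypothetical probability the threat event occurs
impact = 500_000   # hypothetical loss if it does occur

bernoulli_risk = expected_loss([(likelihood, impact), (1 - likelihood, 0.0)])
nist_risk = likelihood * impact

assert bernoulli_risk == nist_risk  # both are 50_000.0
```

The numbers are placeholders; the point is only that the NIST formula is the two-outcome special case of the expected-loss definition.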

This relates to the Bayesian interpretation of probability:

https://en.wikipedia.org/wiki/Bayesian_probability

Quote: "Broadly speaking, there are two interpretations of Bayesian probability. […] For subjectivists, probability corresponds to a personal belief."

Dakara:

There is no risk assessment framework for p(doom) currently being applied using any measurable data. Nobody is using any kind of standardized process to evaluate this specific AI risk. Likelihood is not currently derivable from any data; at this moment, it is nothing but a guess.

The 'unpredictability' is the degree of nondeterministic behavior in AI models, which is at least conceptually measurable.

Hartmut Straub:

The likelihoods in NIST are estimations; only in the case of non-adversarial threat events is the likelihood of occurrence estimated using historical evidence (see section 2.3.1, Risk Models).

Unpredictability is not the most important factor for assessing the threat level. If a threat event is almost certain to happen (low unpredictability), that still means it is a high risk.

Dakara:

There is no historical evidence for AI Risk.

I'm not defining unpredictability as the likelihood of a threat. Unpredictability relates only to the behavior of AI models: are they deterministic or nondeterministic in their output?

If models have deterministic behavior, then we can predict that behavior, which also means we can control it. If we can control the behavior, we have a method to address the risk of unwanted behaviors.

Unpredictability is the fundamental problem of AI.
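The deterministic/nondeterministic distinction can be illustrated with a toy softmax sampler (an assumption for illustration, not any specific model): greedy decoding always returns the same token for the same input, while temperature sampling need not.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits, temperature, rng=None):
    """Pick a token index: greedy (temperature=0) is deterministic,
    temperature sampling is nondeterministic."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    probs = softmax([x / temperature for x in logits])
    return (rng or random).choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits

# Deterministic: repeated greedy decoding always picks the same token.
greedy = {decode(logits, temperature=0) for _ in range(100)}
assert greedy == {0}

# Nondeterministic: sampling at temperature > 0 picks varying tokens.
rng = random.Random(42)
sampled = {decode(logits, temperature=1.0, rng=rng) for _ in range(100)}
assert len(sampled) > 1
```

In this toy sense, "unpredictability" is the spread of outputs for a fixed input: zero for the greedy decoder, nonzero for the sampler.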

sean pan:

AI is a disaster for humanity.

John Reed:

The antichrist will be an AI.
