Existential risk from AI and orthogonality: Can we have it both ways?

Müller, V. C., & Cannon, M. (2022)

The reconstruction shows that the argument for existential risk from AI has two premises: the singularity claim and the orthogonality thesis.
  • Taken together, the singularity claim and the orthogonality thesis may amount to an existential threat from AI.
We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’.
Our concern is with the validity of the argument: for each of the two premises to be charitably interpreted as true, each must be read as using the term ‘intelligence’ in a different way. If that is the case, the argument equivocates on ‘intelligence’, and the premises cannot be combined into a valid argument for existential risk from AI.
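To make the validity worry explicit, here is a minimal semi-formal sketch, using I_g and I_i as placeholder predicates for ‘general intelligence’ and ‘instrumental intelligence’ (my notation and reconstruction, not the authors’):

```latex
% Hedged reconstruction sketch; I_g, I_i, Super, Compat, Catastrophic are
% placeholder predicates introduced here, not notation from Müller & Cannon.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\begin{align*}
&\text{P1 (singularity claim):} && \forall x\,\bigl(I_g(x) \rightarrow \lozenge\,\mathit{Super}(x)\bigr)\\
&\text{P2 (orthogonality thesis):} && \forall x\,\forall g\,\bigl(I_i(x) \rightarrow \mathit{Compat}(x,g)\bigr)\\
&\text{C (existential risk):} && \lozenge\,\exists x\,\bigl(\mathit{Super}(x) \wedge \mathit{Catastrophic}(x)\bigr)
\end{align*}
The inference from P1 and P2 to C goes through only if $I_g = I_i$;
if charity forces $I_g \neq I_i$, the argument equivocates on
`intelligence' and is invalid.
\end{document}
```

This is only a schematic rendering; the authors' point is that any reading of ‘intelligence’ that makes P1 true (general intelligence) differs from the reading that makes P2 true (instrumental intelligence).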

Argument map

[Figure: argument map for the ‘both ways’ claim]

The authors explore the question a bit further ...

Is there a single notion of intelligence that we can use in both premises and under which both come out true?

If the AI is capable of realising what is relevant, why would its realisations stop before it realises the relevance of reflecting on its own goals?

... a question we should come back to later.