The singularity claim

The claim traces back to I. J. Good's (1965) description of an ultraintelligent machine creating an 'intelligence explosion' and to Vernor Vinge's (1993) coining of the term 'technological singularity'.

Speculations Concerning the First Ultraintelligent Machine

Good, I. J. (1965):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind.
Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
  • If the machine is not 'docile enough' to be kept under control, then the creation of the 'first ultraintelligent machine' is the singularity.

The Coming Technological Singularity

Vinge, V. (1993):

When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities - on a still-shorter time scale.
From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control.
  • This is the singularity claim: superintelligent AI is a realistic prospect, and it would be out of human control. (A toy formalization of the runaway is sketched below.)
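
Vinge's 'exponential runaway' can be given a precise toy reading. Under two illustrative assumptions of our own (they appear in neither Good nor Vinge), suppose each design generation multiplies intelligence by a fixed factor k > 1 and takes a fraction r < 1 of the time of the one before. Then intelligence grows without bound while the total elapsed time converges to the finite geometric sum t0 / (1 - r): unbounded growth within finite time, which is the literal sense of a 'singularity'. A minimal Python sketch:

    # Toy model of the recursive-improvement runaway.
    # Assumptions (ours, for illustration only): each generation is k times more
    # intelligent than the last and arrives after r times the previous delay,
    # so the inter-generation delays form a convergent geometric series.
    k = 2.0     # intelligence growth factor per generation (k > 1)
    r = 0.5     # time-scale shrink factor per generation (0 < r < 1)
    t0 = 1.0    # delay before the first improved machine (arbitrary units)

    intelligence, delay, elapsed = 1.0, t0, 0.0
    for n in range(60):
        elapsed += delay       # wait for this generation to be designed
        intelligence *= k      # the new entity is k times more intelligent
        delay *= r             # and works 'on a still-shorter time scale'

    print(f"intelligence after 60 generations: {intelligence:.3g}")
    print(f"total elapsed time: {elapsed:.6f}  (limit t0 / (1 - r) = {t0 / (1 - r)})")

The sketch does not establish the claim: whether real design time scales would shrink geometrically is precisely what is disputed. It only shows that if they did, the process would run to completion in finite time, leaving the runaway 'beyond any hope of control'.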

Argument map

[Figure: argument map for the singularity claim]