Existential risk from AI

The Singularity Isn’t Near

Allen, P. G., & Greaves, M. (2011)

Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point.
While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we think it will be a very long time coming.
If the singularity is going to occur on anything like Kurzweil’s timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.
But history tells us that the process of original scientific discovery just doesn’t behave this way, especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations. Truly significant conceptual breakthroughs don’t arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to re-evaluate portions of what they thought they had settled. These kinds of fundamental shifts don’t support the overall Moore’s Law-style acceleration needed to get to the singularity on Kurzweil’s schedule.
We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways.
The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without a scientifically deep understanding of cognition, we can’t create the software that could spark the singularity. Rather than the ever-accelerating advancement predicted by Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the complexity brake.

Argument map

The complexity brake argument