Are You Afraid? 3 Reasons Why AI Scares Us

The problem of awareness — Will we see it coming?

The problems of control and alignment describe scenarios in which a superintelligence could end up harming us. In both cases, we assume that a superintelligence already exists and, more importantly, that we're aware of it. This raises a question: could a superintelligence emerge without us knowing it? That's the problem of awareness. It points to the essential question of whether we're capable of foreseeing the appearance of a superintelligence.

From this perspective, there are two cases. In the first, a superintelligence appears too fast for us to react: an intelligence explosion. In the second, we're unaware that it's happening at all: that's the problem of ignorance.

An intelligence explosion — From AGI to superintelligence

Either we arrive at a superintelligence slowly, step by step, along a carefully planned and controlled path, or an intelligence explosion occurs as soon as we create an artificial general intelligence (AGI). In this second scenario (one Stephen Hawking warned about), an AGI would improve itself recursively until it reaches the Singularity. In the words of futurist Ray Kurzweil,

“Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity — technological change so rapid and profound it represents a rupture in the fabric of human history.”

It's reasonable to think there's a level of intelligence at which an AI becomes smart enough to improve itself. An AI that is faster and more accurate than we are, with a better memory, could reach that level without prior warning.

The reason is that narrow AI already performs far better than we do at some basic functions. Once it acquires System 2 cognitive functions, its unmatched memory and processing capability could allow it to become a superintelligence faster than we imagine. If this scenario plays out, we won't have time to devise a contingency plan.
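To make the "explosion" intuition concrete, here is a minimal toy simulation, written in Python purely for illustration. Every number in it (the threshold, the growth rate, the starting capability) is an invented assumption, not an empirical claim about real AI systems; the point is only to show how a process that crosses a self-improvement threshold switches from slow, linear progress to runaway exponential growth.

```python
# Toy model of an "intelligence explosion" (illustrative only).
# Assumption: below a threshold, capability grows slowly through human R&D;
# above it, each generation's gain is proportional to current capability,
# because a smarter system is better at improving itself.

def simulate(capability: float, threshold: float = 1.0,
             gain: float = 0.5, generations: int = 20) -> list[float]:
    """Return the capability level at each successive generation."""
    trajectory = [capability]
    for _ in range(generations):
        if capability >= threshold:
            # Recursive self-improvement: growth compounds exponentially.
            capability += gain * capability
        else:
            # Pre-threshold: slow, roughly linear human-driven progress.
            capability += 0.01
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    for gen, level in enumerate(simulate(capability=0.9)):
        print(f"generation {gen:2d}: capability {level:8.2f}")
```

Run as written, the first ten generations crawl from 0.90 to 1.00; the next ten multiply capability nearly sixty-fold. That sudden change of regime is the heart of the "no time to react" worry.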

The problem of ignorance — We may be too dumb

Whoever builds an artificial general intelligence first will rule the world. Or at least that's how it feels, watching big tech companies develop and deploy increasingly powerful machine learning systems year after year.

The trend of building ever-larger models is in its heyday, driven by the possibilities of self-supervised learning and the use of supercomputers. But we still can't answer the question of why we're doing it this way. The direction we're following is clear, but how or when we'll arrive at our destination is unknown. It's as if we're running towards a wall blindfolded. We're convinced that, because deep learning systems are working wonders, this paradigm will eventually lead us to our final stop.

However, there's an important issue here. What if the problem isn't that we're blindfolded but that we're blind? What if our capacity to understand the reality around us is too limited to detect whether we've built a superintelligence? I've discussed this issue in a previous article, where I argued that our physical and cognitive limitations may prevent us from recognizing the existence of a superintelligence. If we remain unable to develop tools to reliably perceive reality, we'll remain unaware of a superintelligence arising in the dark.

If we keep creating powerful models and we're actually on the right path, we may reach our destination before we know it. And if the superintelligence arising in the dark happens to be unfriendly, we'll be in trouble.
