Unpopular Opinion: We’ll Abandon Machine Learning as the Main AI Paradigm


2. We’re approaching AGI by taking shots in the dark

But let’s give ML and DL the benefit of the doubt and assume we could build AGI by continuing down this path.

We’ve been building bigger models, trained on more data with more compute, ever since interest in DL skyrocketed in 2012. This idea of “bigger is better” has produced important successes across subfields such as natural language processing and computer vision, among others. As long as we’re able to develop larger models, this approach will probably keep giving us better results. The hope is that, sometime in the future, one of those models becomes so intelligent that it reaches the status of AGI; we aren’t even close now.
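To put the trend in rough numbers, here is a minimal Python sketch comparing widely reported parameter counts of a few milestone models (the figures are the commonly cited round numbers, not exact audits):

```python
# Widely cited parameter counts for a few milestone models;
# round figures, not exact audits.
models = [
    ("AlexNet (2012)", 60e6),
    ("GPT-2 (2019)", 1.5e9),
    ("GPT-3 (2020)", 175e9),
    ("Wu Dao 2.0 (2021)", 1.75e12),
]

prev = None
for name, params in models:
    growth = f" (~{params / prev:.0f}x the previous row)" if prev else ""
    print(f"{name}: {params / 1e9:,.2f}B parameters{growth}")
    prev = params
```

Each step up is an order of magnitude or more, which is exactly the trajectory the “bigger is better” bet depends on.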

GPT-3 is a good example of this attitude. It became the largest neural network ever created, at a whopping 175 billion parameters, roughly 100x bigger than its predecessor, GPT-2. It showed off, performing at the top level in many different language tasks and even tackling problems previously reserved for humans, like writing poetry or music, translating English into code, or pondering the meaning of life.
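For context on what using such a model looks like in practice: there is no task-specific training involved, just prompting over an API. Below is a minimal sketch, assuming the OpenAI Python client roughly as it existed around 2021 (the engine name and the key are placeholders, and the interface has since changed):

```python
import openai  # assumes the pre-1.0 OpenAI Python client, circa 2021

openai.api_key = "YOUR_API_KEY"  # placeholder

# Prompting GPT-3 to turn an English instruction into code; no
# fine-tuning involved, the prompt alone steers the model.
response = openai.Completion.create(
    engine="davinci",  # base GPT-3 engine name of that era
    prompt="# Python\n# A function that reverses a string.\ndef reverse_string(s):",
    max_tokens=64,
    temperature=0,
)
print(response["choices"][0]["text"])
```

The point is less the snippet itself than the fact that one general-purpose model handles tasks like this without any task-specific retraining.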

GPT-3 was so much more powerful than other models that we soon found ourselves unable to assess its limitations: the authors hadn’t anticipated many of the use cases people found for it. People kept trying to find its weaknesses, again and again crashing into the wall of their own limitations. GPT-3’s power lay outside the limits of our measurement tools; whatever its level of intelligence, what we were measuring fell below it.

Another example is Wu Dao 2.0, released a month ago. It now holds the record for the largest neural network ever created, a record it will surely lose in no time. This monster of 1.75 trillion parameters is 10x bigger than GPT-3. We couldn’t adequately measure GPT-3’s level of intelligence (although it’s generally accepted that it isn’t AGI-level), and yet we keep building larger and larger models.

Why are we approaching AGI this way? Where is this leading us?

We’re taking shots in the dark. Financial benefit is a tempting objective to go after, and that’s exactly what most of the companies and institutions behind these models are fighting for. What would happen if we kept building larger models whose intelligence we can’t assess?

Granting the assumption I described at the beginning of this section, we conclude that we’ll eventually build an AGI-level system using current techniques. We’ll be looking for it and we’ll find it. But we won’t recognize it, because the tools we use to define our reality will be telling us another story. And because we’re walking forward with the lights off, we won’t even pause for a second to ask whether it has already happened.

How dangerous is this scenario? We’re trying to build the most powerful entity ever. We’re doing it in the dark. We’re doing it mindlessly. We’re doing it for money and power. If, in the end, ML and DL can create AGI, we’d better find a way to avoid this scenario. We should shift both our mentality and our paradigms toward ones that are more interpretable and more responsible.
