Here’s Why We May Need to Rethink Artificial Neural Networks



Some questions arise from these results: Why hasn’t the AI community tried to remodel its foundations to adapt better to the reality they’re trying to simulate? Is AI destined to fail in its quest to achieve AGI until those foundations are overthrown and rebuilt from the ground up? What would be the consequences of changing AI at such an elemental level?

Let’s see how all this unfolds.

AI & neuroscience — Diverging pathways

The neurons in our brain (though not all of them) are far more complex than their artificial counterparts. It would be reasonable to approach this issue by checking whether the assumptions on which AI and deep learning were built still hold in light of recent (and not so recent) discoveries in neuroscience.

It may be that AI could still work perfectly fine without changing anything, carrying on along its path toward AGI despite the apparent differences between digital and biological neural structures. However, it seems almost no one in AI cares enough even to check.

The reason is that, from the very early days, neuroscience and AI parted ways, even though both fields are trying to answer tightly related questions. Neuroscience is concerned with intelligence, the brain, and the mind. Neuroscientists decided to look inwards, to the only instance of intelligence we know of: us. In contrast, AI is concerned with replicating intelligence by artificial means. AI researchers care about designing and building intelligent agents that can perceive the world and act on it accordingly.

Neuroscience is a pure science whose purpose is to find truth. It's driven by curiosity and a hunger for knowledge. AI — at least short-term AI — is largely driven by money and usefulness. People in the industry aren't concerned that the very basis of all deep learning could crumble to pieces if we analyzed it carefully. They care that AI keeps attracting funding and that their models seem to somehow work, even if unreasonably so.

Neuroscience keeps reviewing its foundations time and again, but artificial intelligence has chosen another way: it made its assumptions and moved forward without once looking back.

The levels at which the two fields work and develop aren't the same, but it's not fair to say that everyone in AI sees it through a technological, money-driven lens. There are people working very hard to advance the field as a science, people who still see it as a means to solve intelligence and fulfill the original mission of AI's founding fathers: artificial general intelligence.

They acknowledge the distinction between useful AI, which works fine for simple, narrow tasks and is being deployed everywhere, and challenging AI, which needs important breakthroughs to get to the next level. In the latter case, there's an ongoing debate about the best path to follow. Whereas some argue deep learning is the way (it may need some tweaks, but it will work eventually), others think it will never be enough by itself.

But is that what they should be debating about?

Is the AI community focused on the wrong problems?

This debate should be happening if and only if all the lower-level debates are closed and agreed upon. Yet nothing could be further from the truth. The most basic cornerstone on which deep learning's claim to be the path to AI's future rests remains in doubt: artificial neurons may be too dissimilar from biological neurons to ever give rise to complex cognitive processes and human-like intelligence.

We could compensate for the lack of complexity in artificial neurons with larger models, tons of computing power, and gigantic datasets, but that’s too inefficient to be the eventual last step of this quest.

Yet those are the priorities of the AI industry. How can they make chips that don't lose bandwidth while keeping efficiency? Either they stack GPUs or they make or buy specialized chips (only within reach of the richest players). How can they extract and curate ever larger datasets? Unsupervised learning and automatic labeling. How can they build and train ever larger models? Either they're a big tech company, or they'll need to ask one for funding.

They keep finding solutions, but is this trend sustainable? It doesn’t seem like it. We may need to go back to the basics. Not only because we won’t be able to build AGI like this, but because we’re starting to feel the collateral burden of denying the inefficiency of today’s AI.

But here's the catch: if they find they really do need to make a change, the whole field of AI as we know it would need a complete overhaul. And they're simply not willing to accept that. AI industry leaders may even know that AI's bottlenecks are impassable, but they may simply prefer to act as if it doesn't matter, so they don't have to face the cost of having built all this on top of the wrong assumptions.

There's an important clarification to make here, though. Some AI systems work well and don't pollute that much. AI is still an incredibly useful technological discipline that's bringing lots of innovation across many industries; I'm not denying that. But those systems are the exception to the rule. There's an ongoing race to create ever more powerful AIs, and every major player is there, fighting for a portion of the pie.

As I’ve argued before, progress shouldn’t come at any cost.

ANNs should be more neuroscience-based for two reasons, one that looks at the future and one that looks at the present: First, the difference in complexity between biological and artificial neurons will result in differences in outcome — AGI won’t come without a reform —, and second, the inefficiency with which we’re pursuing this goal is damaging our society and the planet.

Is it worth it?

The consequences — For AI and the world

Even if the AI community doesn't act on the facts I've outlined here, AI, fertile industry that it is, will keep producing a whole lot of new research projects and useful applications each year.

Narrow AI systems will still succeed at the simple tasks they’re made for despite AI not coming closer to neuroscience. Artificial neural networks will still be popular whether or not the AI community accepts that biological neurons are way more complex than artificial ones. The AI industry will still benefit greatly from pursuing the quest of AGI whether or not it’s achieved eventually — near-AGI AI can also be world-changing, for better or worse. And the desire to keep raising the standard of living for the privileged people of the developed world will remain, too.

But at what costs?

Ethical concerns in AI are having their heyday, and the models don't seem to be getting better. Just a few days ago, the New York Times reported that a Facebook AI system had labeled a group of Black men as primates. Another AI, made by Google, showed the same harmful bias in 2015. Are we going to ignore all this and put a band-aid on the problem, as Google did by removing gorillas from the training dataset?

Making AI explainable, interpretable, and accountable is key to solving these issues. Those are hot areas within AI, but they aim to solve the problem a posteriori. And how could we do that when there are no robust theoretical underpinnings behind ANNs? There aren't any neural models that can explain the behavior of neural nets. We use them to predict and forecast because they work, but we don't know why.

With half the planet burning up and the other half drowning in unexpected floods, climate catastrophe is around the corner. And AI isn't helping: its overall carbon footprint is unsustainable.

In 2019, researchers from the University of Massachusetts Amherst studied the environmental impact of large language models (LLMs) — increasingly popular nowadays, with GPT-3 as the spearhead — and found that training one of these big models generates around 300,000 kg of CO2 emissions: the same as 125 New York–Beijing round-trip flights, as Payal Dhar reports in Nature. Some big tech companies (Google, Facebook) are now working to reduce this impact and gradually shift to renewable energy.
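To put that figure in perspective, here is a minimal back-of-the-envelope check using only the numbers quoted above (the per-flight emission it implies is derived arithmetic, not a figure from the original study):

```python
# Sanity check of the comparison above, using only the article's quoted figures.
training_emissions_kg = 300_000   # CO2 from training one large model (quoted above)
equivalent_round_trips = 125      # New York-Beijing round-trip flights (quoted above)

co2_per_round_trip_kg = training_emissions_kg / equivalent_round_trips
print(f"Implied CO2 per round-trip flight: {co2_per_round_trip_kg:,.0f} kg")
# Implied CO2 per round-trip flight: 2,400 kg
```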

Related to this issue is that ANNs are extremely inefficient. To learn the simplest tasks they need immense amounts of computing power. That’s the reason why these systems generate such a large carbon footprint. Inefficiency leads to higher exploitation of resources, which generates more pollution.

Human brains emit just a fraction of that and don't consume anywhere near the same amount of energy to learn or do the same things. The brain is an extremely efficient organ. How can we do such complex things when the brain uses so little energy, not to mention that it is far slower than computers? Could the reason for this extreme difference be that the complexity of sub-neuronal structures is manifold higher than that of ANNs? Yes, it could.
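As a rough illustration of that efficiency gap, here is a sketch comparing the widely cited ~20 W power draw of the human brain with a purely hypothetical GPU training cluster (the cluster size, per-GPU power, and duration below are illustrative assumptions, not the setup of any particular model):

```python
# Illustrative energy comparison; all training-cluster numbers are hypothetical.
BRAIN_POWER_W = 20        # widely cited estimate of the brain's power draw
GPU_COUNT = 1_000         # hypothetical cluster size
GPU_POWER_W = 300         # hypothetical per-GPU power draw
TRAINING_DAYS = 14        # hypothetical training duration

hours = TRAINING_DAYS * 24
cluster_energy_kwh = GPU_COUNT * GPU_POWER_W * hours / 1_000
brain_energy_kwh = BRAIN_POWER_W * hours / 1_000

print(f"Hypothetical training run: {cluster_energy_kwh:,.0f} kWh")
print(f"Brain over the same period: {brain_energy_kwh:.2f} kWh")
print(f"Ratio: ~{cluster_energy_kwh / brain_energy_kwh:,.0f}x")
# Hypothetical training run: 100,800 kWh
# Brain over the same period: 6.72 kWh
# Ratio: ~15,000x
```

Even with far more charitable assumptions about the hardware, the gap spans several orders of magnitude, which is the point the comparison is meant to convey.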

LLMs, reserved for the biggest players, are the ones that attract the attention of investors and the mass media. The reason is that these models always come surrounded by overhype that's transmitted to the public: “Robots Can Now Read Better Than Humans,” “GPT-3 […] is freakishly good at sounding human,” “A robot wrote this entire article. Are you scared yet, human?”

Overpromising and underdelivering is the AI industry's trademark. But not all of the AI community participates in selling something it doesn't have for the sole purpose of generating publicity and attracting money.

In the words of Emily M. Bender, professor of computational linguistics at the University of Washington: “LLMs & associated overpromises suck the oxygen out of the room for all other kinds of research.” There’s crucial research being done besides LLMs that’s being neglected by funding institutions and the media.

“[B]ecause these big claims are out there, and because the LLMs succeed in bulldozing the benchmarks by manipulating form, other more carefully scoped work, likely grounded in very specific application contexts that doesn’t make wild overclaims is much harder to publish.”

— Emily M. Bender

Maybe some of that research that’s getting lost in oblivion is trying to alert those with eyes only for the shiny LLMs that we’re doing AI all wrong. Maybe some people are working to no avail on these exact problems I’m describing.

If ANNs are ill-founded, only LLMs seem to matter, and only big tech companies can build and deploy them, then there's a very real risk that the AI industry is effectively an oligopoly focused on the wrong goals. And there would be no one capable of raising their voice loudly enough for those in charge to hear how mistaken they are.


