AI’s Destiny Was Never Intelligence



Not all climbs are equally easy. Credit: Author via Midjourney/DALL·E

AI’s original purpose was to imbue machines with intelligence. It was doomed from the start. Not because of the sheer complexity of the quest, but because it was inevitable that we’d get distracted—and attracted—by a more enticing prize.

Writer Tiernan Ray has recently argued in an illuminating article on ZDNet that “the industrialization of AI is shifting the focus from intelligence to achievement.”

He writes:

“If AI increasingly gets stuff done, in biology, in physics, in business, in logistics, in marketing, and in warfare, and as society becomes comfortable with it, there may be fewer and fewer people who even care to ask, But is it intelligent?”

Since 2012 (and long before), any system associated with the “AI” label has matched the description of what Ray calls “industrial AI.”

Early deep learning (DL)-based computer vision systems like CNNs for object detection and recognition, the recommender systems that power YouTube and TikTok, NLP architectures like the transformer (and the GPT-like models that eventually followed), creativity-enhancing AI art models like Stable Diffusion, and even partially non-DL systems like AlphaZero or AlphaFold: all meet the criteria.

All can be seen as useful but not intelligent.

I fully agree with Ray that industrial AI dominates the field at the moment. Last week, I found myself writing that current AI systems “are perfectly fine intelligence-wise as they’re now.” I didn’t even question the implications of that sentence.

And everywhere I look, I see the same sentiment: usefulness has lulled the quest for intelligence to sleep.

Where I disagree with Ray is that I don’t think this reflects a genuine shift in the field, meaning that AI people have changed their interests from the profound to the practical.

I observe a different phenomenon: it’s not that most people no longer care about “true AI” because they changed their minds. It is that most never cared anyway.

AI wasn’t always popular.

When the field was young—fueled by many promises but few results—only those who really cared about the scientific and philosophical implications of building true AI (or AGI, or strong AI, or “human-level” AI) devoted their lives to it.

The McCarthys and Minskys of the world wanted it to be successful, but underestimated the difficulty of the challenge and only saw mild success with symbolic approaches and expert systems—nothing like what we have today.

Every winter, interest faded away, accompanied by unmet expectations born of AI’s rather limited viability. But they went on with determination. Their minds were fixed on a far future—even if they didn’t know just how distant.

Their long-term ambitions remained intact.

It was precisely a group of people with a comparable will of steel who laid the groundwork for the deep learning revolution that led to the boom of industrial AI (Geoffrey Hinton, Yoshua Bengio, and Yann LeCun are often mentioned as the “godfathers” of DL). Between 2012 and 2022, the world witnessed, for the first time in 70 years, just how much potential was hiding within AI.

People like Hinton and Bengio (or McCarthy and Minsky), who were initially interested in building human-level AI, still are (or would be). They’re more than happy that AI is succeeding but haven’t changed their long-term goals in the slightest.

However, as AI systems became truly useful—to enhance our abilities, perform hard tasks, and generate profit—people who wouldn’t normally have cared about pure scientific inquiry suddenly became interested. Now, this group vastly outnumbers the purists.

Most don’t care about AI’s purported long-term goals, but about its immediate usefulness (perfectly legitimate—not a criticism, just an observation). They jumped on the AI bandwagon once the potential was tangible, and did so only now precisely because ambitious-but-not-useful AI never mattered to them.

AI seems to be attractive insofar as it’s practical—which has deep implications.

People didn’t shift to building industrial AI after they got tired of failing to build intelligent machines (which I think is Ray’s point). Most never cared about the latter.

The sudden interest in practical AI stems from big (tech) companies’ tendency to appropriate everything that promises to be good business. Google, Microsoft, and Meta aren’t involved with AI to make the dream of AI’s founding fathers a reality. They saw a great opportunity once AI started to look promising and took it (again, legitimate).

Throughout the last decade, tech companies have attracted many university researchers with lavish salaries. They’ve effectively extracted most of the talent from academia, where the hunger for knowledge—and not profit—is the driver.

Right now, barely any newsworthy progress in AI comes from university research. I don’t claim tech companies have “tricked” people into working for them to guide AI toward practicality and away from profundity. But, to borrow a phrase from professor Emily M. Bender, they’ve been sucking “the oxygen out of the room.”

That’s partially the cause of the industrial AI phenomenon. The other part is that people truly care more about usefulness than interestingness.

Both factors align with companies’ unmatched capacity (i.e. money) to “make things happen.” A profitable business is always a good reason to shift from long-term ambitions of knowledge to near-term immediate utility.

Not all tech companies match this description, though. OpenAI and DeepMind—two of the most outstanding AI companies—still care mainly about AGI. But they’re the exception that proves the rule: they’ve largely been money drains.

They are living proof of this reality: if the direction of progress in AI (both scientific and technological) is decided by money’s ability to make more money, then the questions we’ll even consider worth answering are limited to a very specific type.

“Are machines intelligent?” isn’t one of them.

What’s the problem here? If people prefer to work for Meta and earn a nice salary rather than spend their careers in a low-budget lab that won’t produce any relevant breakthroughs, that’s totally legitimate (I’d do it, no doubt—maybe not for Meta, though).

But there’s a possibility that the success of good-enough industrial AI will kill the goal of human-level AI. As I see it, the main threat when trying to achieve something is being content with an easier, less ambitious form of that goal (more on this in the last section).

I don’t think this will happen. Even if money and attention are elsewhere, many people still work on the quest for AGI—the higher-purpose, highly ambitious group will always exist. For instance, earlier this year Yann LeCun published a proposal for building human-level AI (with ideas that aren’t necessarily new, but are important).

Even if killing the AGI goal is unlikely, useful AI may overshadow the endeavor. In the eyes of founders, investors, users, and outsiders, that’s the case for sure.

However, it’s mostly irrelevant: Sam Altman (OpenAI’s CEO) and Demis Hassabis (DeepMind’s CEO) aren’t changing their minds just because money flows in the opposite direction—at least while they retain the trust and funding of Microsoft and Alphabet, respectively.

There’s a more interesting third possibility: are utility-centered approaches hindering the appearance of AGI? Not in the sense of a distraction, but in making us believe useful AI is a necessary step on the path to human-level AI when it isn’t.

It’s in the answers to this question that we find most criticisms of AI’s current path. More than a few AI leaders argue that today’s efforts to advance AI are critically misaligned with the “ultimate goal” of the field—the pursuit of industrial AI is at odds with creating truly intelligent AI.

As ZDNet summarized after speaking with Yann LeCun, “the dominant work of deep learning today, if it simply pursues its present course, will not achieve what he refers to as ‘true’ intelligence.” Gary Marcus, a long-time critic of pure-DL approaches as a means to achieve AGI, agrees with LeCun here: “we don’t have machines that have a kind of high-enough level of understanding of the world, or comprehension of the world, to be able to deal with novelty.”

I also believe this last possibility best explains the current state of AI. Our focus on achievement and utility hinders the goal of human-level AI.

If we accept the disconnect between current AI approaches and AGI as a consequence of AI’s recent wins—and the inevitable popularity that entails—this could be a case of “dying of success.” I’m not sure the founding fathers would be hopeful.

The people who care about human-level AI may not just be a small minority; they may no longer be guiding the field toward their beloved goal.

From their perspective: we climbed higher than ever but chose to climb the wrong hill.

Of course, all this raises some questions: Why does anyone care about human-level AI in the first place? Is it a desirable pursuit? Should we challenge this seemingly untouchable premise?

The people who care about AI only insofar as it’s useful can’t answer those questions because they won’t even ask them. They don’t care enough about the purely scientific and philosophical side of AI. They deviate from AI’s original goal not because they reject the premise but because they don’t care enough to even consider it.

They exploited AI’s traction as a marketing force, and that’s how they gathered so much funding and interest. Marketing and publicity were critical factors in AI’s success, so the only way to transform the collective perception of what AI is versus what it could be is also through marketing and public perception.

However, the people who consider AGI a desirable pursuit don’t care enough about the means they’d need to make others care and accept those premises.

LeCun and Marcus are very vocal on social networks (LeCun coming from computer science, Marcus from cognitive science), but most others with similar opinions and equally respectable histories of contributions to AI are so focused on their work that they forget just how important it is to let the world know about it.

From where I stand (an outsider deeply interested in AI, watching from the sidelines), I see what looks like two different technoscientific fields with radically different goals and premises under one single name.

Maybe it’s time to redefine expectations and untangle differences.

Since its birth, AI has promised true intelligence as its ultimate goal. With the ascent of industrial AI, the goal has become achievement and usefulness rather than intelligence.

Depending on where we fall on the spectrum of answers to “what is AI for?” our responses to the question “is AI doing fine?” will vary.

The main issue with industrial AI overshadowing, and even hindering, AI’s original goal isn’t that we may lose sight of it, but that so many people still believe AI is going strong toward that goal (“AGI is near”) when it isn’t.

Disconnection between expectations and reality is AI’s long-time curse.

The insurmountable differences we find between the extremes of each group, and the confusion this generates in public perception, make me think the best option we have going forward is to split AI and explicitly distinguish the technological from the scientific. The industrial from the academic. The practical from the profound. The useful from the intelligent.

Maybe that’s the only way to leverage AI’s transversal applicability while ensuring the “higher goal” doesn’t die out.

This is, again, a matter of public awareness. But instead of having one group trying to convince people that their premise is the one worth following and the other luring people with quasi-fake marketing, in this case both sides have an incentive to make this happen.

Industrial AI people would ensure those who fund their projects don’t hold unreasonable expectations, while retaining the traction of being “something-AI.”

And those who pursue true AI would ensure that the buzzword recovers its original meaning (and we can stop calling it AGI/human-level AI/true AI and just say AI).

And people like you and me would have an easier time telling one from the other.

Such a shift wouldn’t be instantaneous (I’m talking about several years’ worth of gradual change). But if we don’t start it soon, I foresee one of two scenarios: either we’re eventually forced to separate industrial AI from intelligence-seeking AI anyway, or one of them dies out (even if only temporarily).

AI winters were just that: a disconnect between our expectations and the reality we managed to build. As long as these conflicting faces of AI remain entwined in the collective imagination, one of them will always feel too ambitious and money-draining, while the other will remain in the land of unreasonable expectations—forever at risk of spooking investors and founders once they realize they’ve been tricked.

This section is a digression from the main thesis, but worth reading.

Maybe you got the impression that I’m heavily criticizing industrial/useful AI, but that’s not at all my intention. I think it couldn’t have been any other way.

The field of AI was destined to go this way. Before achieving AGI it was inevitable that we’d go through a phase of pure utility-driven AI. If we wanted to create human-level AI we’d have to design the modules, systems, and elemental parts first—and those are by definition useful.

The core moral of this essay is that although most humans are curious, only very few are obsessively curious. Most are satisfied with very useful AI over truly intelligent AI—and that’s what is keeping us here.

This isn’t synonymous with killing the goal of human-level AI, as I considered above, but it creates an insurmountable abyss that feeds on the limitations of the human condition: contentment with what we have is a major barrier to change.

That’s what makes me think it’s possible AGI won’t happen and was never meant to happen—not because of the sheer complexity of the quest, but because contentment makes it effectively unreachable. If we climb a hill to a point where our satisfaction is a 9, why would we “unclimb” it only to climb another (unknown) hill just to get a 10?

That’s why the most reasonable solution is to separate AI into those who want to stay on the 9-hill and those willing to go find the 10-hill. But make it very clear who is who.

If investors (who have the money), founders (who are interested in building AI), and users (who would use it) think that what we have now is good enough (if it works, why ask why?) then it doesn’t matter if we could find the 10-hill and build AGI—we’ll never get there to try.
