The False Philosophy Plaguing AI

Erik J. Larson and The Myth of Artificial Intelligence

Source: Frank Chamaki via Unsplash.

The field of Artificial Intelligence (AI) is no stranger to prophecy. At the Ford Distinguished Lectures in 1960, the economist Herbert Simon declared that within 20 years machines would be capable of performing any task achievable by humans. In 1961, Claude Shannon, the founder of information theory, predicted that science-fiction-style robots would emerge within 15 years. The mathematician I.J. Good conceived of a runaway "intelligence explosion," a process whereby smarter-than-human machines iteratively improve their own intelligence. Writing in 1965, Good predicted that the explosion would arrive before the end of the twentieth century. In 1993, Vernor Vinge named the onset of this explosion "the singularity" and stated that it would arrive within 30 years. Ray Kurzweil later declared a law of history, the Law of Accelerating Returns, which predicts the singularity's arrival by 2045. More recently, Elon Musk has claimed that superintelligence is less than five years away, and academics from Stephen Hawking to Nick Bostrom have warned us of the dangers of rogue AI.

The hype is not limited to a handful of public figures. Every few years, surveys ask researchers working in the AI field for their predictions of when we'll achieve artificial general intelligence (AGI): machines as general-purpose and at least as intelligent as humans. Median estimates from these surveys give a 10% chance of AGI sometime in the 2020s, and a one-in-two chance of AGI between 2035 and 2050. Leading researchers in the field have also made startling predictions. The CEO of OpenAI writes that in the coming decades computers "will do almost everything, including making new scientific discoveries that will expand our concept of 'everything'," and the co-founder of Google DeepMind predicts that "Human level AI will be passed in the mid 2020's."

These predictions have consequences. Some have called the arrival of AGI an existential threat and wonder whether we should halt technological progress in order to avert catastrophe. Others are pouring millions in philanthropic funding into averting AI disaster. One of the focus areas of The Open Philanthropy Project, a multi-billion-dollar foundation, is risks from advanced AI, and over US$100 million has been directed to this cause. The Machine Intelligence Research Institute has received millions in funding for "ensuring smarter-than-human artificial intelligence has a positive impact."

The arguments for the imminent arrival of human-level AI typically appeal to the progress we've seen to date in machine learning and assume that it will inevitably lead to superintelligence. In other words, make the current models bigger, give them more data, and voilà: AGI. Other arguments simply cite the aforementioned expert surveys as evidence in and of themselves. In his book The Precipice, for instance, Toby Ord argues that AGI constitutes an existential threat to humanity (he gives it a 1-in-10 chance of destroying humanity in the next 100 years). Discussing how it will be created, he first cites the number of academic papers published on AI and AI conference attendance (both of which have skyrocketed in recent years), and then writes:

[T]he expert community, on average, doesn’t think of AGI as an impossible dream, so much as something that is plausible within a decade and more likely than not within a century. So let’s take this as our starting point in assessing the risks, and consider what would transpire were AGI created. (pg. 142)

What makes these researchers so confident that current approaches to AI are on the right track? Or that problem solving in narrow domains is a difference in degree, not in kind, from truly general-purpose intelligence? Melanie Mitchell, a professor at the Santa Fe Institute, recently called the idea that progress on narrow AI (well-defined tasks in structured environments, such as predicting tumours or playing chess) advances us towards AGI the foremost fallacy in AI research. Quoting Hubert Dreyfus, she notes that this is akin to claiming that a monkey climbing a tree is a first step towards landing on the moon. There are no arguments supporting this fallacy, only extrapolations of current trends. But there are arguments against it.

Enter Erik J. Larson, a machine learning engineer arguing against AI orthodoxy. In The Myth of Artificial Intelligence, Larson joins the small set of voices protesting that the field of AI is pursuing a path which cannot lead to generalized intelligence. He argues that the current approach is not only based on a fundamental misunderstanding of knowledge creation, but actively prohibits progress — both in AI and other disciplines.

Larson points out that current machine learning models are built on the principle of induction: inferring patterns from specific observations or, more generally, acquiring knowledge from experience. This partly explains the current focus on "big data": the more observations, the better the model. We feed an algorithm thousands of labelled pictures of cats, or have it play millions of games of chess, and it learns which relationships among the inputs yield the best prediction accuracy. Some models are faster than others, or more sophisticated in their pattern recognition, but at bottom they are all doing the same thing: statistical generalization from observations.
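
To make this concrete, here is a minimal sketch of that inductive loop, using scikit-learn and synthetic data (my illustration, not Larson's; the generated features stand in for labelled cat photos or chess positions):

```python
# A toy version of the inductive recipe: fit statistical regularities in
# labelled observations, then predict. The model's "knowledge" is nothing
# more than the parameters of that fit. (Illustrative sketch only.)
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Thousands of labelled observations (stand-ins for pictures of cats).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Induction: correlate inputs with labels to maximize prediction accuracy.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on data like the training data:", model.score(X_test, y_test))

# Observations from a different data-generating process. The model can only
# apply the correlations it extracted from the first dataset, so it typically
# does no better than chance here; it has no theory with which to extrapolate.
X_new, y_new = make_classification(n_samples=1000, n_features=20, random_state=1)
print("accuracy on data from a different process:", model.score(X_new, y_new))
```

The fitted coefficients are the model's entire "knowledge": a compressed summary of what it has already observed, nothing more.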

This inductive approach is useful for building tools for specific tasks on well-defined inputs: analyzing satellite imagery, recommending movies, and detecting cancerous cells, for example. But induction is incapable of the general-purpose knowledge creation exemplified by the human mind. Humans develop general theories about the world, often about things of which we've had no direct experience. Whereas induction implies that you can only know what you observe, many of our best ideas don't come from experience. Indeed, if they did, we could never solve novel problems or create novel things. Instead, we explain the inside of stars, bacteria, and electric fields; we create computers, build cities, and change nature. These are feats of human creativity and explanation, not mere statistical correlation and prediction. Discussing Copernicus, Larson writes:

Only by first ignoring all the data or reconceptualizing it could Copernicus reject the geocentric model and infer a radical new structure to the solar system. (And note that this raises a question: How would “big data” have helped? The data was all fit to the wrong model.)

In fact, most of science involves the search for theories that explain the observed in terms of the unobserved. We explain falling apples with gravitational fields, mountains with continental drift, disease transmission with germs. Meanwhile, current AI systems are constrained by what they observe, entirely unable to theorize about the unknown.
