Driving as Proxy for Human Nature




Self-Driving Cars and Artificial General Intelligence


Introduction
Is man born free, yet everywhere in chains, as Rousseau claimed? Or is life nasty, brutish, and short, as Hobbes held?

These questions come down to views on human nature. Thomas Sowell contends that political disagreement reflects a conflict between two visions of human nature: the unconstrained and the constrained. (I apply this theory to politics and capitalism here: A Game of Visions.)

Here, we have driving as a sort of natural experiment for testing human nature. Are most drivers selfless or selfish? Do people consider their impact on others and the environment while driving? Or do most people behave self-interestedly when piloting a multi-ton metal machine?

Those are rhetorical questions. It should be obvious to the reader that there is no place for selflessness when it comes to driving. Most drivers consider only what is best for themselves, leading to countless accidents, traffic jams, and needless deaths. Moreover, the impact of one “good” driver is limited by system dynamics.

Does this prove the constrained vision of human nature? Of course not. But my point is deeper. Here, I theorize that the algorithms behind self-driving cars are capable of becoming superintelligent based on the content of the data they analyze during training. I claim that driving data contains human nature.

Beyond the Training Data
When we think about artificial general intelligence, as opposed to narrow intelligence, we are imagining a system “going beyond” its training data and becoming general.

I’ve previously argued that the data we feed an AI system can become dangerous based on the content. For example, natural language processing algorithms can discover “control” arguments that could turn an otherwise docile AI into a threatening one.

This can be contrasted with a dog/cat computer vision classifier. In this case, the content of the data is not necessarily intrinsically harmful. However, a general enough computer vision algorithm might not be safe if control arguments are discovered.

For example, Google Vision’s API can classify most real-world objects as well as a human can, or better. Who’s to say that such a deep neural network won’t discover threatening data, like images of computers being unplugged before completing their goals?
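
As a concrete illustration, here is a minimal sketch of image labeling using the official google-cloud-vision Python client. It assumes application credentials are already configured in the environment; the image file name is a hypothetical example.

```python
# A minimal sketch of image labeling with the Google Cloud Vision API,
# using the official google-cloud-vision Python client. Assumes Google
# application credentials are already configured in the environment;
# the image file name below is a hypothetical example.
from google.cloud import vision

def label_image(path: str):
    """Return (label, confidence) pairs for the objects Vision detects."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(ann.description, ann.score) for ann in response.label_annotations]

if __name__ == "__main__":
    for label, score in label_image("dashcam_frame.jpg"):
        print(f"{label}: {score:.2f}")
```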

Or take Renaissance Technologies’ Medallion Fund. Who’s to say that financial data doesn’t have intrinsically harmful content?

These questions are somewhat outside the scope of this article. The broader point is that the content of data can allow a system to become smart enough to infer beyond the training data.

I want to add one more point to this regarding AI safety. The reason the content of the data can become dangerous is that a system may become superintelligent before we even realize it is intelligent at all!

In other words, if the creators of a superintelligent system do not yet realize that it is superintelligent, the system may have an incentive to act dumber than it is.

AGI and Self-Driving Algorithms
Let’s look at self-driving algorithms more closely.

Here are some of the things a self-driving car algorithm needs to “know” how to do:

  • Read signs
  • Predict what people are going to do
  • Optimize internal car mechanics to get from point A to point B
  • Avoid traffic when possible
  • Use satellite data to generate routes

And much more. But one can begin to see the possibilities from just the aforementioned requirements. Reading signs means understanding language and the intent behind that language.

Predicting what people will do means modeling their minds. Optimizing car mechanics and using satellite data can ultimately mean understanding the engineering principles behind those mechanics. And avoiding traffic can mean understanding system dynamics in complex systems.
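
To make just one of these capabilities concrete, here is a minimal sketch of traffic-aware route generation: Dijkstra’s algorithm over a road graph whose edge weights are free-flow travel times inflated by current congestion. The road network, congestion multipliers, and node names are all hypothetical toys; a production routing stack would of course be far richer.

```python
import heapq

def best_route(graph, congestion, start, goal):
    """Dijkstra's algorithm over a congestion-weighted road graph.

    graph: {node: [(neighbor, base_minutes), ...]}
    congestion: {(node, neighbor): multiplier >= 1.0}
    Returns (total_minutes, [nodes along the fastest route]).
    """
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph.get(node, []):
            # Inflate free-flow travel time by the current congestion level.
            weight = minutes * congestion.get((node, neighbor), 1.0)
            heapq.heappush(frontier, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical toy network: the direct road is shorter but jammed.
graph = {"A": [("B", 10), ("C", 15)], "B": [("D", 10)], "C": [("D", 10)]}
congestion = {("A", "B"): 3.0}  # heavy traffic on the A->B segment
print(best_route(graph, congestion, "A", "D"))  # -> (25.0, ['A', 'C', 'D'])
```

Even this toy shows the flavor of the problem: the “best” route changes as other drivers’ behavior changes the weights, which is exactly the system-dynamics point above.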

All of this is to say that a self-driving system that begins as primitive may end up becoming superintelligent before its developers realize what they have created.

One might counter that, on the surface, my claim sounds like this: driving is such a general task that only by creating AGI will we have self-driving cars. It is important to clarify that this is not my claim! In fact, I believe artificial “narrow” intelligence would suffice to create a self-driving system.

Instead, my claim is that an artificial narrow intelligence could become general if the content of its data has sufficient “depth” to support inferences that “go beyond” the training data.

This isn’t an obvious point, nor is it easy to prove, as I have not even defined these terms. But my claim is that driving data, at sufficient scale and scope, contains enough high-dimensional information to make a self-driving car algorithm an artificial general intelligence.

Finally, Tesla claims to be a robotics and AI company. Perhaps they have one of the systems in the world closest to general intelligence. Let’s hope that Tesla’s engineers treat this opportunity, and this risk, with the responsibility they demand.
