When Smart Cars make Bad Choices




Exploring unintended consequences with Autonomous Driving Software

It’s 2030, and an SUV driven by an Autonomous Driving System (ADS) is heading west on a highway. The SUV carries two parents in the front seats and two small children in the back. It is travelling at the speed limit of 100 km/h. As the SUV rounds the final bend of a tight corner, a large bull moose weighing over six hundred kilograms shambles onto the road.

Illustrations by Leah Hodge, Comox, B.C.

The autonomous driving system driving the SUV was trained to select the best alternative out of a set of possible outcomes, and so the SUV abruptly swerves into the left lane, currently occupied by a small sedan travelling at the same speed as the SUV.

The SUV ADS had determined that saving the lives of two adults and two children was the greater good, even though there was a significant risk that the small sedan would be forced into oncoming traffic travelling east, putting its two adult occupants at mortal risk.

This collision didn’t happen, because the ADS in the small sedan had determined that the SUV ADS was likely to come to exactly that decision, and so the sedan ADS sped up to avoid it. At this point it seems that a horrific crash has been averted, except that the vehicles in the oncoming eastbound lane are also being driven by ADSs, and they are making their own calculations.

A few milliseconds after the SUV started to move, the ADS in the pickup truck in the left lane of the eastbound highway determined that there was a greater than fifty percent probability that the westbound sedan would be forced from the left lane of the westbound highway into the left eastbound lane.

The pickup truck ADS determines what to do based on the probability of a head-on collision and the total human lives at risk: the three adults currently in the pickup truck and the two adults in the small sedan.

The pickup truck ADS, like every other ADS currently involved in the incident, is aware of the passengers in the other vehicles in the immediate vicinity. The decision to share information such as the sex and age of passengers was based on the assumption that it would allow each ADS to make better decisions.

Weighing the risk of five near-certain deaths, the pickup truck violently swerves into the right lane, despite the fact that the right lane is currently occupied by a van carrying a young mother and a three-year-old child.

The van ADS hasn’t reacted to the events unfolding because it is more than three steps removed from the original trigger; ADSs limit which events they react to in order to prevent a massive chain reaction.

The pickup truck impacts the side of the van, and the van is forced off the road. The van flips over, and the mother in the front passenger seat is mortally injured.

There are no other casualties: the small sedan in the westbound lane was able to speed up in time, and the SUV moved unimpeded into the left lane, avoiding the moose. Unlike driving accidents involving only human drivers, the outcome above was predetermined by the intention built right into the software at the core of every Autonomous Driving System.

This essay discusses the moral, ethical and societal issues with Autonomous Driving Systems programmed to determine a course of action based solely on intention.

What do I mean by intention? Software, whether it’s written from a rule-set or produced by machine learning, is intentional if it determines the same outcomes for the same inputs.

In the example above, each ADS determines its course of action by weighing the balance of probabilities and the calculated measure of harms to select the best course. This is true regardless of whether the course of action was calculated by following some sort of decision tree or by a trained neural network.

Let’s consider how the ADS in the SUV discussed above made its decision to move into the left lane, as opposed to staying in the right lane, aggressively braking, and so on. Each of these courses of action is considered and measured against the others using an objective measure that we are going to call a consequence score.

In actual ADSs the consequence score is likely calculated from thousands of inputs, but we are going to simplify things and say that a consequence score is calculated from four inputs (a simplified sketch in code follows the list):

1. The probability of injury if this course is taken,

2. The number of persons injured,

3. The severity of the injury based on a weighting, and finally,

4. A weighting on the value to society for each participant based on criteria such as the ages of the participants.
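
Here is a minimal sketch of such a score in Java, my own language of choice. Everything in it is hypothetical: the class and field names, the example probabilities, and the weights are invented for illustration, and a real ADS would fold in thousands of inputs rather than four.

    // A minimal, hypothetical sketch of a consequence score built from the
    // four inputs above. All names and numbers are invented for illustration.
    public class ConsequenceScoreSketch {

        // One candidate course of action, e.g. "stay in lane" or "swerve left".
        record CourseOfAction(String name,
                              double injuryProbability, // input 1: 0.0 to 1.0
                              int personsAtRisk,        // input 2
                              double severityWeight,    // input 3: 0.1 (minor) to 1.0 (fatal)
                              double societalWeight) {  // input 4: the controversial one

            // Higher score means a worse expected outcome.
            double score() {
                return injuryProbability * personsAtRisk * severityWeight * societalWeight;
            }
        }

        public static void main(String[] args) {
            var hitTheMoose = new CourseOfAction("stay in lane", 0.95, 4, 1.0, 1.0);
            var swerveLeft  = new CourseOfAction("force the sedan over", 0.40, 2, 1.0, 1.0);

            // A purely intentional ADS always picks the lowest score:
            // same inputs, same outcome, every time.
            var chosen = hitTheMoose.score() <= swerveLeft.score() ? hitTheMoose : swerveLeft;
            System.out.println("Chosen course: " + chosen.name());
        }
    }

With these invented numbers the SUV swerves left every single time, which is exactly the determinism the rest of this essay is concerned with.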

The fourth weighting is the most controversial: you might argue that all persons are equal in a modern, liberal society, and that the weight applied to an eighty-year-old should be identical to the weight applied to an eight-year-old.

As it turns out, the fourth weighting is not actually as important to the rest of this discussion as it might seem, since we could just as easily debate whether two persons should be weighted the same as three persons. The argument that numbers matter most is the central argument of utilitarianism, a moral philosophy based on achieving the greatest good (or the least harm) for the greatest number.

Let’s look at the original accident discussed above. The SUV moved into the left lane because the consequence score for that course of action was lower than for the other contemplated courses of action. The consequence score was weighted solely on the number of persons who might be injured and not at all on their ages: in other words, the probability of two parents and two children being injured in the SUV versus two persons being injured in the sedan.

Let’s change some aspects of the accident in the discussion above. The van, instead of carrying only a mother and child, is now carrying a mother and four children, all under the age of ten, and the right-hand lane borders a cliff. The pickup truck ADS calculates consequence scores for its two options: staying in the left lane, with five persons dying in a head-on collision, or moving to the right lane and forcing the van, with its five occupants, off the road. With our changed storyline the consequence scores will be the same if based solely on the number of persons mortally injured: five either way.

The course of action in the case of a tie could be determined by flipping a coin: sometimes the van is forced off the cliff, and sometimes the pickup truck ends up in a head-on collision with the sedan.
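
For what it’s worth, that coin flip is a one-line affair. The sketch below is hypothetical and reduces each course of action to the single number of lives at mortal risk, as in the scenario above:

    import java.util.concurrent.ThreadLocalRandom;

    // A hypothetical tie-break: both courses of action put five lives at
    // mortal risk, so the scores are equal and chance decides.
    public class TieBreakSketch {
        public static void main(String[] args) {
            double headOnScore = 5.0; // stay left: five persons at mortal risk
            double cliffScore  = 5.0; // move right: five persons at mortal risk

            boolean stayLeft;
            if (headOnScore != cliffScore) {
                stayLeft = headOnScore < cliffScore;                  // a clear winner
            } else {
                stayLeft = ThreadLocalRandom.current().nextBoolean(); // the coin flip
            }
            System.out.println(stayLeft ? "risk the head-on collision"
                                        : "force the van toward the cliff");
        }
    }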

Even if we weight the value of each person the same regardless of age, age will still be used to determine the severity of injuries. The severity of an injury depends on the forces involved and on the health of the person. A violent collision that throws a frail ninety-year-old against the restraining seatbelt straps will cause far more injury than if the victim were a typical twenty-year-old.

In other words, even if we treat all persons as having the same value to society, age will still be used to determine the course of action, with sometimes surprising results. The consequence score will be higher for the same course of action if the person who will be injured is older, because the severity of injury is greater for the very young and the very old.

That could mean that the ADS will determine that it is preferable to avoid what would otherwise be the possibility of a moderate injury to a senior, and select the course of action which results in serious injury to one or more teenagers.
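
To illustrate, a severity weighting by age might look something like the sketch below. The shape of the curve, higher at both ends of life, follows the reasoning above, but the specific age brackets and multipliers are entirely invented:

    // A hypothetical U-shaped severity multiplier: the very young and the
    // very old are assumed to be injured more severely by the same forces.
    public class SeverityByAgeSketch {
        static double severityMultiplier(int age) {
            if (age < 10) return 1.5;  // small children are fragile
            if (age < 65) return 1.0;  // baseline adult
            if (age < 80) return 1.4;
            return 1.8;                // the very old fare worst
        }

        public static void main(String[] args) {
            // The same collision forces, very different expected severity:
            System.out.println("90-year-old: " + severityMultiplier(90)); // 1.8
            System.out.println("20-year-old: " + severityMultiplier(20)); // 1.0
        }
    }

Note that the age bias appears even though the fourth input, the societal weight, never enters the calculation.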

At this point you might be thinking that each ADS should simply prioritize the passengers in its own vehicle, but I rather doubt that will happen. Autonomous Driving Systems are already being regulated, and the decision making discussed here will be regulated as well, for two obvious reasons:

The first is that the software developers, data analysts, and company CEOs are not going to want to shoulder the burden of determining who will have the best and worst outcomes in accidents like the one described above.

The second reason is that society at large will want decisions that affect life or death regulated for the greater good and not for what is good for the automobile manufacturer or car owner.

So far we have discussed writing software that is intentional: the ADS uses a predetermined set of algorithms, rule-sets, and weights to determine a course of action. It’s intentional because, given the same inputs, the ADS will always determine the same course of action.

So what could possibly be wrong with that? Well, actually, everything.

The second part of this essay discusses the downside of Autonomous Driving Systems selecting a course of action based solely on intention.

For discussion purposes I will assume that society will dictate that age be used in weighting the decision making, for the simple reason that historically we always have. It’s not a coincidence that in the event of a disaster, women and children were prioritized first. Alas, at least for women, we are unlikely to regulate a gender bias in the spirit of equality, but I am sure that most adults will continue to value the lives of children over their own.

It turns out that the argument made in this essay is still valid regardless of what is used for weighting. I’ll return to this point at the end.

What was responsible for the van crash? The chain of events started in the opposite lane, and yet the consequences were felt by a party that was not originally in harm’s way.

Autonomous driving systems, or AI driving software, are not general intelligence packages that can reason about outcomes. An ADS is programmed either directly, with a rule-set based on algorithms, indirectly, using various machine learning techniques, or, most likely, with a combination of both.

Regardless of how the software for an ADS is developed, it still represents a set of intentional decisions about how to select the best outcome. In the example above it’s not a stretch to say that the programmers and government regulators (assuming that ADSs will be regulated) are responsible for the ultimate accident described above. We are making the humans involved in the development of ADSs ultimately responsible for other human lives.

What will be the basis of the decision making? Will we weigh the lives of the young over the lives of the elderly? Is the life of a doctor worth more than that of a barista?

Who is more worthy?

This argument will be challenged by those developing ADSs in which a significant portion of the decision making is performed by machine learning algorithms instead of an explicitly stated decision tree. Machine learning, supervised or otherwise, is based on the input of millions of examples that, when processed, generate a decision-making model.

Since machine learning and its close cousins make decisions that are nearly impossible to back-chain, surely this gets the original developers off the hook? Except that this argument really makes little sense. The intention behind how a decision is made is still captured in the selection of the training and validation data that is essential to the development of any machine learning solution.

The right or wrong selection of data has already been shown to produce bias in decision making. Google’s image recognition software made the serious error of identifying darker-skinned persons as gorillas. This error was not random; it was a product of the data used to train the networks.

Not being able to understand how software makes its decisions at the “atomic” level is not an excuse either. I develop software primarily in the Java programming language, which is compiled into bytecode and then just-in-time compiled into machine code at execution time. I have no idea what the machine code looks like, but that doesn’t mean I don’t know what my software is intended to do.

So if we don’t want an ADS to make a strictly intentional decision then what is the alternative?

How are the decisions made by an ADS different from the decisions made by human drivers? If all the vehicles in the accident described above were being driven by humans, the outcome would be determined partly by chance, regardless of the attentiveness of the drivers. In other words, in ten similar accidents we would not expect the same outcome every time.

The most likely scenario is that the SUV driver would first hit the brakes and then veer right, off the highway. That may not be the best outcome for society, based on our discussion above, but it is the most likely one. It is not the only one, though: sometimes the driver will brake and then try to move into the left lane, sometimes succeeding; sometimes the driver will hit the moose; and sometimes the SUV will force the sedan out of the left lane.

In the accident scenario above there will always be an element of fate determining the outcome. The driver may sneeze, or turn to speak to the person beside them, or momentarily lose attention just as the moose decides to move onto the road.

Fate will play an infinitesimally small role in ADS decision making, barring a mechanical or electronic glitch, and that will lead to only intended consequences.

In my opinion we are in very real danger of stratifying society by weighing the worth of every member without an element of fate to balance things out. We are removing the moral hazard from the decision making of those whose lives are prioritized first.

A set of intentional rules will result in the unintentional creation of a worthiness score. A family with two young children is scored higher than two middle-aged couples. Knowing that you have a very high worthiness score may alter your behaviour in ways unintended by the original authors.

We are also placing a very real burden on society, which will ultimately be responsible for the decisions made by ADSs. There is a reason why most liberal democracies have moved away from the death penalty, and it’s not because we suddenly care more for the criminal; it’s because it diminishes us by forcing us to predetermine the fate of another.

In contrast, accidents caused by humans are a combination of intent and randomness. In the example above it’s not obvious what the outcome would have been if all the vehicles had been driven by humans. The SUV may have hit the moose. The sedan may have sped up. The moose may have turned back.

Regardless of what happens, the discussion will not be about the moral calculation made by the drivers but about an unfortunate accident that involved a large ungulate. I believe it is essential that we take this approach in the development of Autonomous Driving Systems, so that outcomes are not predetermined.

This is not the same as treating all outcomes as the same. At a high level we may develop a table that weights age and the probability of serious injury or death as determinants of the decision, and no doubt this will be reflected in the statistics at the aggregate level.

So how would this work at the level of an individual event? It would mean that an elderly couple choosing to go for a drive is not automatically de-prioritized by the surrounding ADSs, such that in the event of an incident the elderly couple will always be sacrificed for the greater good.

Adding a degree of randomness to the decision making also helps to ground our individual decision making. A young family may choose not to drive across the mountains, knowing that in the event of an incident their lives will not automatically be prioritized first.

Adding an element of randomness to the decision making has the added knock-on effect of sometimes improving the outcomes for all. For example, in the accident described above, if the truck ADS had decided not to change lanes, then no accident of any kind would have occurred.

Adding in an element of randomness makes it easier for all of us to accept that it was an accident and not an intentional act.

We differentiate between intentional acts and accidental acts all the time. For example, a bus that crashes due to a front tire blowout, killing twenty tourists, is an unforeseen tragedy. A terrorist attack that targets a bus and kills twenty tourists is far more upsetting. Why? The casualties are the same in number, and yet we treat an intentional attack differently from a tragic accident.

The most serious problem with an ADS programmed to intentionally weigh the outcomes and select the best is that there is no agreement on how we would make that determination. Is a Nobel Prize-winning scientist worth more than a teenager?

There is also the very real possibility that ADSs will be hacked on behalf of their owners to skew the outcomes.

So what is the alternative? The alternative is that we remove some of the intentionality by introducing an element of randomness into the decision making. We can still apply weightings to achieve what society agrees are the right trade-offs in aggregate, but at the individual level the outcome is not a foregone conclusion.

How would this work? Let’s assume that a course of action is determined by the probability of severe injury or death, the age of the participants, and finally a random element that will occasionally skew the outcome.

Given multiple courses of action with different participants, the ADS will usually select the course of action that results in the least amount of harm weighted by age (or potentially some other criteria), but not always. There will always be an element of fate skewing the course of action so that it is not a foregone conclusion. Sometimes Grandpa George and Great Aunt Dot are spared, and the twenty-somethings end up flying off the cliff.
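
One plausible way to implement this, sketched below under my own assumptions, is to stop taking the lowest-scoring option outright and instead sample among the options with probabilities tilted toward the lower scores. The softmax-style weighting and the temperature parameter are my choices for illustration, not anything drawn from a real ADS:

    import java.util.List;
    import java.util.concurrent.ThreadLocalRandom;

    // A hypothetical score-weighted random chooser: lower consequence scores
    // usually win, but never with certainty.
    public class FatefulChooserSketch {

        record Option(String name, double consequenceScore) {}

        static Option choose(List<Option> options, double temperature) {
            // Convert each score into a selection weight: lower score, higher weight.
            double[] weights = options.stream()
                    .mapToDouble(o -> Math.exp(-o.consequenceScore() / temperature))
                    .toArray();
            double total = 0;
            for (double w : weights) total += w;

            // Sample one option in proportion to its weight.
            double roll = ThreadLocalRandom.current().nextDouble(total);
            for (int i = 0; i < options.size(); i++) {
                roll -= weights[i];
                if (roll <= 0) return options.get(i);
            }
            return options.get(options.size() - 1); // floating-point safety net
        }

        public static void main(String[] args) {
            var options = List.of(
                    new Option("spare the twenty-somethings", 2.0),
                    new Option("spare Grandpa George and Great Aunt Dot", 3.0));
            // The lower-scoring option wins roughly three times out of four
            // at this temperature, but fate gets the rest.
            System.out.println(choose(options, 1.0).name());
        }
    }

A high temperature pushes the choice toward a pure coin flip; a temperature near zero recovers the strictly intentional ADS. Society could, in principle, regulate that one number.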

With an element of fate we reintroduce hope. We accept that the elderly are more likely to suffer a stroke or a heart attack, but we would not accept the elderly being intentionally targeted for disease.

I started part two of this discussion by saying that the choice of weighting doesn’t matter. Any weighting that doesn’t allow for a degree of randomness in the outcome will suffer from the same issues: whether we use age or the number of occupants in the car, it will still result in a worthiness score.

Dan 2.0, Optional feature

In the end the only way to make an Autonomous Driving System fairer is actually to make it more human.
