Research study on the legal liability of autonomous robotics




Image credit: Marco Verch, Flickr


I found really interesting a 2020 study titled “Legal liability for Autonomous Robotics” by Dr. Safaa Fatouh Gomaa, a member of the Faculty of Law at Mansoura University in Egypt. The study addresses legal questions of liability for Artificial Intelligence products and, more specifically, for autonomous robotics.

According to Gomaa, under the European resolutions of 2017 and 2018 the liability rules cover cases where the cause of the robot’s actions or missteps can be traced to a specific human agent, such as the manufacturer, the operator, the owner or the user, and where that agent could have foreseen and avoided the robot’s harmful conduct. He adds that, since digital technologies are constantly evolving through patches, updates and software extensions that influence the behaviour of every mechanism in the system, it is crucial to identify responsibilities among the different actors in the AI supply chain.

Given the complexity of the topic, the researcher divides the paper into three sections: Section 1 covers the historical, international and legal framework for robots; Section 2 identifies legal responsibility for autonomous industrial robotics; and Section 3 presents his conclusions.

History

Robot concepts began as legends. Many early myths featured artificial beings, such as the automated handmaidens built by the Greek god Hephaestus. In ancient Egypt, articulated religious statues made of stone, iron or wood played a vital role in religious ceremonies. In the New Kingdom, from the 16th century BC to the 11th century BC, the ancient Egyptians routinely consulted these statues for advice, and they answered with a movement of the head.

In ancient China, the idea of humanoid automata was discussed in the Liezi, a compilation of Taoist texts that became a classic. In Chapter 5, King Mu of Zhou, touring the West, asks the craftsman Master Yan, ‘What can you do?’, and an artificial man is presented to the royal court. The mechanical figure was indistinguishable from a human and performed various tricks for the king. However, when the artificial man apparently began to flirt with the women present, the craftsman cut the automaton to pieces and revealed its inner workings.

“In ancient Europe, Albertus Magnus supposedly built an entire android that could perform some domestic tasks, but it was destroyed by Albert’s pupil, Thomas Aquinas, because it disturbed his thinking. The best-known myth concerned a bronze head created by Roger Bacon, which was destroyed after its moment of operation was missed. Automata resembling humans or animals were common in the fantastic worlds of early literature.”

“One of the last Alexandrian engineers, Hero of Alexandria (c. AD 10–70), created a theatre of automata in which the figurines and the stage moved by mechanical means.

The Byzantines inherited the knowledge of automata from the Alexandrians and developed it further, building water clocks with gear mechanisms. The knowledge of how to make automata then passed to the Arabs: Harun al-Rashid built water clocks with complex hydraulic jacks and moving human figures.”

And again: “At the end of the 13th century, Robert II, Count of Artois, created a garden at his castle of Hesdin that incorporated a number of automata, humanoid and animal. Automatic bell-ringers, called jacquemarts, became popular in Europe in the 14th century along with mechanical clocks. Among the most notable early designs is a humanoid conceived by Leonardo da Vinci (1452–1519) around 1495. Leonardo’s notebooks, rediscovered in the 1950s, include complete drawings of a mechanical knight with a shield that could sit up, wave its arms, and move its head and jaw.”

“A later celebrated automaton was Wolfgang von Kempelen’s The Turk, a purported chess-playing machine that could play against a human being. When the machine was brought to the New World, it prompted Edgar Allan Poe to write an essay in which he concluded that it was impossible for mechanical devices to reason or think.”

“Japan’s most legendary robotic automaton was presented to the public in 1927. The Gakutensoku was built to play a diplomatic role. Powered by compressed air, it could write smoothly and raise its eyelids. Many robots were built before the dawn of computer-controlled servomechanisms for the public-relations purposes of large corporations.”

Many other historical episodes are recounted in this fine work, right up to recent times: in 2019, engineers at the University of Pennsylvania produced millions of nano-robots in a matter of weeks using technology borrowed from semiconductor manufacturing. These microscopic robots, small enough to be injected into the human body for medical purposes and coordinated wirelessly, could one day deliver medicine and perform surgical treatment!

The absence of a prior regulatory framework created a real need to apply existing legislation to the phenomenon by analogy. One such instrument concerns product liability: Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products. Another is Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety; both treat “robots” as products that must be safe and whose production defects entail liability. Other instruments treat robots as radio or electromagnetic equipment, such as Directive 2014/53/EU of the European Parliament on the harmonisation of the laws of the Member States relating to radio equipment, and Directive 2014/30/EU of the European Parliament on the harmonisation of the laws of the Member States relating to electromagnetic compatibility.

Automation

The degree of automation of a task, referred to as the level of automation (LOA), is detailed by Thomas B. Sheridan and W. L. Verplank, who developed the most comprehensive taxonomy. There are ten levels of automation, ranging from complete human control to complete computer control (a short code sketch follows the list):

1. The human operator performs the whole task up to the point of turning it over to the computer to implement.

2. The computer helps by determining the options.

3. The computer helps by determining the options and suggests one, which the human operator need not follow.

4. The computer selects the action, and the human operator decides whether or not to carry it out.

5. The computer selects the action and performs it if the human operator approves.

6. The computer selects the action and informs the human operator in time for the operator to cancel it.

7. The computer performs the action and then necessarily informs the human operator of what has been done.

8. The computer performs the action and informs the human operator only if the operator asks.

9. The computer performs the action and informs the human operator only if it, the computer, decides that the operator should be informed.

10. The computer performs the action if it decides that it should be done, and informs the human operator only if it decides that the operator should be informed.
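To make the taxonomy easier to work with, here is a minimal sketch in Python; the enum, its member names and the helper function are illustrative assumptions of mine, not constructs from Sheridan and Verplank’s work or from Gomaa’s study:

```python
from enum import IntEnum


class LevelOfAutomation(IntEnum):
    """Illustrative sketch (not from the study): Sheridan-Verplank levels
    of automation, 1 = complete human control, 10 = complete computer control."""
    HUMAN_PLANS_ALL = 1          # human prepares the task; computer merely implements it
    COMPUTER_OFFERS_OPTIONS = 2  # computer determines the options
    COMPUTER_SUGGESTS_ONE = 3    # computer suggests one option; human need not follow it
    HUMAN_DECIDES_EXECUTION = 4  # computer selects the action; human decides whether to act
    EXECUTES_IF_APPROVED = 5     # computer acts only after human approval
    EXECUTES_UNLESS_VETOED = 6   # human has a window in which to cancel
    EXECUTES_THEN_INFORMS = 7    # computer acts, then always informs the human
    INFORMS_IF_ASKED = 8         # computer acts; informs only on request
    INFORMS_IF_IT_DECIDES = 9    # computer acts; informs only if it sees fit
    FULLY_AUTONOMOUS = 10        # computer decides whether to act and whether to tell


def human_consent_required(loa: LevelOfAutomation) -> bool:
    """Levels 1-5 require an explicit human decision before execution;
    from level 6 upward the computer may act on its own initiative."""
    return loa <= LevelOfAutomation.EXECUTES_IF_APPROVED


print(human_consent_required(LevelOfAutomation.EXECUTES_THEN_INFORMS))  # False
```

Framing the levels as an ordered type makes the legal question concrete: the threshold at which the machine may act without prior human consent sits between levels 5 and 6, which is precisely where questions of foreseeability and causation begin to bite.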

The basis of legal liability for autonomous systems is to establish “a causal link between the harmful behaviour of the robot and the damage suffered by the injured party” in order to claim compensation from a company. This is designed to stop companies from shifting liability onto the autonomous systems themselves: the manufacturer of a self-driving car cannot claim it is not liable for a crash simply because the car was driving itself at the time. “However, the actions of autonomous robots can become so unpredictable that they break the causal link. Some scientists therefore suggest creating a legal system in which the robot’s responsibility is balanced against its autonomy; in other words, the more self-directed a system is, the more responsibility it assumes. However, this only raises more questions, such as how autonomy is measured in the first place: if a self-driving car is taught bad driving habits by its owner and crashes, is it still the manufacturer’s fault? As an answer to this question, some scholars recommend creating a compulsory insurance system for autonomous robots. The party responsible for a robot, or for the software that controls it, would pay into this system; if an accident occurs, the injured party receives compensation from the fund. This leaves companies with less incentive to try to escape liability.”

In summary, many liability regimes treat robots as products, with responsibility shared among the manufacturer, the designer and the user. To this is added, as we already know, the distinction between two categories of programmer: the designer, or programmer in the proper sense, whose mission is to design the robot for use by the customer, and the user-programmer, an operator who programs the robot within the limitations set by the manufacturer.

Customisation of the robot shifts part of the risk onto the user, but this shift does not unconditionally release the manufacturer from responsibility: in allowing customisation, the manufacturer should assume that the user is not an expert in robotics and merely operates machines built by others. Conversely, the user’s assumption of risk is limited to the risks the user can actually know about. The obligation to provide information about the conditions and manner of use therefore plays a key role in apportioning liability between manufacturer and user. Producer or user liability thus arises from unreasonable conduct in assuming risks, i.e. “negligence”, understood as a breach of the duty of care that causes damage or injury to another person.

Under Directive 2011/83/EU of the European Parliament on consumer rights, a consumer who has purchased a robot is covered by consumer protection law if the robot harms him and there is no fault or negligence in his own conduct. Negligence shapes liability regimes: injuries caused by robots through design, assembly or production errors are usually the manufacturer’s responsibility, whereas failures that occur because the robot has not been maintained in a sound environment, for example through negligent upkeep, are the responsibility of the owner or user.

Here the researcher identifies the key elements that he believes best model the liability regime:

1. Environmental conditions: the specific environment in which a robot operates affects how liability is distributed. The more sophisticated it is, the more attention it demands from the producer and from the owner.

2. Black box: a record of the robot’s operations that can be examined after an accident.

3. Measuring devices: relevant for assessing hazards, for example through the sensitivity of the sensors.

4. Mechanical configuration: the physical structure of the robot. Safety is required not only of the robot’s individual parts but of the whole package; hazardous design features, such as sharp edges, can influence the assessment of risks and injuries.

5. Learning capabilities: the ability to acquire data and process information in order to accomplish its missions.

6. Levels of automation.

7. Human intervention.

The TDRL (Template for the Distribution of Robots’ Liability) works on distinct, separate accident conditions, measuring the degree to which each element contributed to the accident. The accident is not an isolated fact: it is the combination of different factors whose consequence is the final accidental event. Each of these factual stages must be investigated and measured independently. Examining each fact in this way allows us to distinguish between tangible “hardware” failure, intangible “software” failure and human fault.
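As a minimal sketch of how such a template might be operationalised, consider the following Python fragment; the factor names, categories, weights and functions are hypothetical illustrations of mine, not constructs taken from the study:

```python
from dataclasses import dataclass

# The study distinguishes tangible "hardware" failure, intangible
# "software" failure and human fault; these labels mirror that split.
HARDWARE, SOFTWARE, HUMAN = "hardware", "software", "human"


@dataclass
class Factor:
    """One factual stage of the accident, investigated independently."""
    name: str      # e.g. "worn proximity sensor" (hypothetical)
    category: str  # HARDWARE, SOFTWARE or HUMAN
    weight: float  # assessed degree of contribution to the accident


def liability_shares(factors: list[Factor]) -> dict[str, float]:
    """Normalise each category's total contribution into a share of
    liability, in the spirit of measuring each element's association
    with the accident."""
    total = sum(f.weight for f in factors)
    shares = {HARDWARE: 0.0, SOFTWARE: 0.0, HUMAN: 0.0}
    for f in factors:
        shares[f.category] += f.weight / total
    return {k: round(v, 6) for k, v in shares.items()}


# Illustrative numbers only: a crash traced to a degraded sensor,
# a faulty software update and negligent maintenance by the owner.
print(liability_shares([
    Factor("degraded proximity sensor", HARDWARE, 0.2),
    Factor("faulty software update", SOFTWARE, 0.5),
    Factor("negligent maintenance", HUMAN, 0.3),
]))
# {'hardware': 0.2, 'software': 0.5, 'human': 0.3}
```

The point of the normalisation mirrors the template’s: no single “accidental” fact decides the outcome; each factual stage is weighed on its own, and the final distribution of liability falls out of their combination.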

Conclusion

According to this rather thorough study, the intersection of several key elements can help establish the responsibility of one or more parties involved in the programming or design of an automated system. Where these elements alone are not sufficient, this system of liability distribution shows how to investigate circumstances that appear accidental but are ultimately the product of a chain of causal links leading to the eventual damage, through careful investigation of failures and defects in the hardware, in the software, or indeed of human negligence!

All Rights Reserved

Raffaella Aghemo, Lawyer


