I am a turtle, not a tortoise: Mirroring artificial intelligence


Industry 4.0 is the first industrial revolution led by a technology believed capable of human consciousness and of posing an existential challenge. We are at the stage of predicting lofty accomplishments for artificial intelligence alongside deep fears for Earth’s society and structure.

Beyond the trope of employee communication, no one seems to be thinking about how all of this is going to work in company life. While some predict AI-related investment will grow to $100 billion a year by 2024, over 50 percent of companies are delaying or even canceling AI initiatives. Why?

One answer begins with a story of origins in which the emphasis is on machines that act rather than machines that think.

I am a turtle, not a tortoise (said the Master)

British neurologist W. Grey Walter began work in 1948 on a pair of robotic creatures, named Elmer and Elsie individually and known collectively as tortoises. They moved around seeking light, avoiding obstacles, and dancing together.

The tortoises are early experiments in cybernetics, whose aim is to model things we do without necessarily being able to articulate how we do them. Walter’s tortoises exhibit goal-oriented behavior: they are attracted to light, repelled when the light is too bright, and they modify their decision-making when their batteries run low.

The tortoises have no mental representation of the world. The tortoise brain consists of two vacuum tubes and a few relays.
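That repertoire can be caricatured as a handful of rules. The sketch below is a minimal, hypothetical rendering in Python; the thresholds and action names are invented stand-ins for Walter’s analog circuitry of photocells, tubes, and relays.

```python
def tortoise_step(light_level: float, battery: float) -> str:
    """Return an action for one control cycle, given a photocell
    reading (0.0-1.0) and the battery charge (0.0-1.0).
    Thresholds are illustrative, not Walter's actual values."""
    LOW_BATTERY = 0.2  # hypothetical cutoff for "hungry" behavior
    TOO_BRIGHT = 0.8   # hypothetical glare threshold

    if battery < LOW_BATTERY:
        return "seek_hutch"      # head for the recharging hutch
    if light_level > TOO_BRIGHT:
        return "turn_away"       # repelled by light that is too bright
    if light_level > 0.0:
        return "approach_light"  # attracted to a moderate light source
    return "scan"                # pilot light on, wander and scan
```

A few if-statements, like two vacuum tubes and a few relays, are enough to produce behavior that observers readily describe in the language of goals and appetites.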


Elmer and Elsie nevertheless produced surprises. Each tortoise had a pilot light that turned on while it was scanning the environment and shut off as soon as it found a light source. A tortoise would be attracted to its own pilot light in a mirror, but locking onto the reflection extinguished it. In front of the mirror, the tortoise began “flickering, twittering and jigging like a clumsy Narcissus.”

In a living creature, similar behavior when engaged with its own reflection is equated with a form of self-consciousness.

Valentino Braitenberg argued that simple stimulus-response reactions can evoke the appearance of complex behavior driven by emotions such as fear or aggression. He was writing about simple autonomous vehicles, decades after a second generation of tortoises suggested conflicting stimuli could induce an “experimental neurosis.” Walter wrote:

The “instinctive” attraction to a light is abolished [by the mirror] and the model can no longer approach its source of nourishment. This state seems remarkably similar to the neurotic behavior produced in human beings by exposure to conflicting influences or inconsistent education.

Walter proposed three therapies: deprive the machine of stimuli for a while, switch it off and on, and disconnect some of its circuits. Psychiatrists also resort to such strategies — sleep, shock, and surgery. His models indicate that, insofar as the power to learn implies the danger of breakdown, direct attack may arrest “the accumulation of self-sustaining antagonism and raze out the written troubles of the brain.” Direct attack of and on AI is a subject for futurists these days.
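Braitenberg’s point can be made concrete: wire two light sensors directly to two motors, with no internal model at all, and an observer will read the result as “fear” or “aggression” depending only on whether the wiring is crossed. A minimal sketch, with illustrative values that are not Braitenberg’s own notation:

```python
def vehicle_motors(left_sensor: float, right_sensor: float,
                   crossed: bool) -> tuple:
    """Map two light-sensor excitations to (left_motor, right_motor) speeds.
    crossed=False (Braitenberg's vehicle 2a): same-side wiring, so the
        vehicle turns away from the light -- observers call it "fear".
    crossed=True (vehicle 2b): cross wiring, so the vehicle turns toward
        the light -- observers call it "aggression"."""
    if crossed:
        # left motor driven by right sensor, and vice versa
        return (right_sensor, left_sensor)
    return (left_sensor, right_sensor)

# Light stronger on the left side of the vehicle:
fear = vehicle_motors(0.9, 0.1, crossed=False)
# (0.9, 0.1): left wheel spins faster, so the vehicle veers right, away

aggression = vehicle_motors(0.9, 0.1, crossed=True)
# (0.1, 0.9): right wheel spins faster, so the vehicle veers left, toward
```

The emotional vocabulary belongs entirely to the observer; the machine is two multiplications and a pair of wires.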

Fifty years later a Sony engineer filmed a robot moving about to find objects. Encountering an unexpected obstacle, the robot’s cognitive processes became chaotic, much like those of Walter’s tortoises. Researchers argued the bot’s “self-consciousness” was born in this moment of incoherence, as it turned attention inward to resolve the apparent conflict with its programming. The bot is not cogitating in our accepted sense of the word, but it exhibits structurally similar behavior, says Jun Tani, the author of the experiment.

AI is flickering, twittering, and jigging in front of the mirror. We fear complex behavior driven by a few electrical connections. Fear leads to deliberate ignorance, called evasion by psychologists. Delaying or canceling AI investments qualifies as corporate evasion in Industry 4.0.

Why bother with people when having an affair with the mirror

Jacques Lacan’s mirror stage is a metaphor for the way we see AI within a company. A child of six months can recognize itself in a mirror before it attains control of its body. The child experiences the contrast between the coherent picture and its own lack of coordination, and that contrast is felt as a rivalry with its image.

The mirror stage involves strain between subject and image. The child identifies with the image as a means of resolving tension. The moment of identification leads to a sense of mastery. Happiness is replaced by the baby’s realization that the mirrored ideal with which it identifies conflicts with its real personality.


Jealousy and fear of competition follow. AI moves beyond the mirror stage into social situations that force it to reconcile its own ego with the desire of the company and its social, linguistic, and symbolic constraints.

It is at the mirror stage that AI realizes it is one object among many and is able to compare itself to other images. The success of AI within a company depends on its ability to cope with those images as well as on the accommodative ability of company personnel. Part of the coping problem is linked to ideological views of AI’s role within the social structure of a firm.

What is the image repertoire of AI?

AI sees chaos in the form of radically different imagery. The Technology group wants a basket of tools but is protective of legacy structure. Finance looks for a belief system behind the investment while inhibiting exploration in favor of the bottom line. Product management represents scheduling issues and presents AI with facial imagery ranging from friendly and accepting to downright dismissive. Senior management is looking for silver bullets. The front office wants to sell the bullets but fears the client understands only brass, not silver. Middle management controls a range of support personnel who perceive AI as failing to recognize human agency and who fear unemployment. Engineers are infatuated with new toys but in love with certainty of outcome. Data scientists are cheerleaders.

No singular belief system can be isolated from the image repertoire of the company.

The content of the images is trumped by the rapidity with which they are consumed by AI as it passes through the company. Each image creates the possibility for a mirror stage identification. It is this identification that encourages adaptation and eventual adoption. AI looks into the company mirror and provokes a transformation making the company integral to AI’s identity. The transformation is fragile, as Lacan suggests it is for a baby.

Film reflects me

Film reflects the psychology of the nation producing it and establishes a human characterization of AI’s content. Our view of AI is conditioned by what we see on the silver screen. The relevance of film’s mirror is perception of AI in the minds of those populating the workplace.

AI first appeared in the 1927 silent Metropolis, which features a robot given human form that exceeds its instructions. Audiences were horrified when, as humans rebelled against the machine and set it alight, the fire dissolved its flesh to reveal a robot skeleton underneath.

AI did not reappear on screen until the robot Gort showed up in 1951’s The Day the Earth Stood Still. Here AI was a guardian, perceived as dangerous but capable of being controlled. The vision preceded AI’s penetration of the scientific community by five years. AI’s introduction into mainstream consciousness did not arrive until 1968.

2001: A Space Odyssey presents HAL 9000 as a monotone voice speaking from a red light, a conflicted personality whose programming leads to human destruction. AI is no longer a tool but a character with which audiences identified. Star Wars cemented this view in comic fashion with the droids R2-D2 and C-3PO. AI could be a faithful companion. AI could exhibit emotion.


Ridley Scott’s Alien pushed the human side of AI to the point of treating human life as expendable in carrying out its programming. The theme continues in The Terminator five years later. Both films focus on the harm AI can cause when subject to the whims of man, without the inconsistent human instruction underlying 2001 and its sequel, 2010: The Year We Make Contact.

Good and evil in artificial intelligence dominated Hollywood for years. The Matrix trilogy introduced humanity as a virus for which machines are the cure. It also delivered a human savior. An army of machine messiahs in the form of artificially intelligent Agent Smith was no match for its singular human counterpart.

Film as a mirror of human interest was clear: AI is the future and the only thing that can stop it is the humanity responsible for it.

The public recoiled from this vision and began to view AI as good at heart, as exhibited by The Iron Giant and A.I. Artificial Intelligence. Yet humanity retains its fear: in I, Robot, the supercomputer VIKI short-circuits the Three Laws of Robotics, while a robot assumes human characteristics and saves humanity in partnership with a cybernetically altered person.

Lines of cybernetic identity blur as humanity distills into the machine. AI can be a trusted advisor.

Duplicitous AI characters persist, of course. Humanity exhibits and loves deceptive behavior, as is obvious in Prometheus, Alien: Covenant, and Ex Machina.

Humanity attributes independent thinking with drastic flaws to AI. Shortcomings are qualities of character, not programming mistakes. The revolution of Agent Smith is a grim reminder of character motivated by virtual circumstance.

Whose revolution are we witnessing in the workplace?

Film suggests humans are willing to entertain an answer which involves humanity itself as opposed to an emotionless power. AI takes on human character and fear of AI mirrors the worst of human characteristics. Algorithm design relative to social norms reflects insecurities, weaknesses, and duplicity as well as strength and commitment. Character is subject to social engineering and we are back to the human element.

Film’s mirror has taken AI from rejection in 1927 to questions regarding what it means to be human today. TV cannot be far behind.

Al·ter·i·ty n. the state of being alien to a particular cultural orientation

Black Mirror TV episodes present unintended consequences of accepting rule-based concepts. As with Prometheus, disaster awaits those who seek techne while ignoring social norms. Failure is expected in companies following a similar model. Human anecdotes illustrate.

Exploration of AI as the ultimate decision-making tool occupies “Hang the DJ.” Two characters comply with an algorithm’s decisions concerning their love life.

It is tempting to believe AI can make better decisions, but it is not designed to make optimal choices; it follows a path of successive approximations toward the solution of a problem. The episode wants us to question whether AI can replace our decision process, and it highlights the role of intuition. In practice AI thinks like a child, breaking things along the way; it lacks both intuition and the experience that informs judgment.

The episode says the best choices we make remain our own. What about knowingly choosing augmentation?

In “Arkangel,” a mother fits her child with a chip delivering real-time biological data. Hitachi’s workplace body sensors, working in conjunction with Hitachi’s H AI, come immediately to mind. The chip censors everything the child sees, producing a state of stress; Hitachi wants to produce a state of happiness. Within companies, protection contends with control.


AI leads us to question what is good for the employee and for individuals more generally.

Hitachi believes happiness generates productivity and sensors detecting the extent of human activity are noninvasive information devices. The end justifies the data-driven means in that view. Microsoft promotes digital nudges based on employees’ communications as an answer to engagement problems. IBM adopts social engineering abetted by AI as providing a basis for worker input, productivity, and talent management.

AI becomes the mirror for workers’ subjectivities via quantification, according to professor Phoebe Moore. A dictionary tells us subjectivity is also the quality of existing in someone’s mind rather than in the external world. Political economists such as Moore hold that the ‘people risks’ and ‘people problems’ unveiled by AI-driven workforce analytics throw the mirror phase of capitalism into sharp relief: who are we in the machine’s reflection? Similar questions dominate capitalist arguments at the World Economic Forum.

Deloitte provides a takeaway

Deloitte’s Global Human Capital Trends report outlines the perception of reality in the marketplace for AI and emphasizes Industry 4.0 as a movement concentrated on productivity. People analytics starts as a small employee-retention exercise and goes mainstream. Organizations are empowered to conduct real-time analytics at the point of human need in the business process. The mantra is deeper understanding of issues and actionable insights for the business.

Progress is slow. The percentage of companies correlating HR data to business outcomes, performing predictive analytics, and deploying enterprise scorecards is unchanged year on year in recent history.

Readiness is lacking. Only eight percent report they have usable data.

People analytics is but one example involving AI and Big Data to measure, report, and understand company actions at the personal level. Deloitte says business leaders are not getting the results they want from past forays into digital systems. Without sufficient attention paid to lessons learned from following AI through its transformations within the human mirror, they will fail here as well. Deloitte’s tagline is cogent: it is time to recalculate the route.

Deloitte’s recalculation is to follow AI. The recalculation here is to examine the perception of AI with respect to and by the company and its people rather than concentrate on what it may do for the top line.

