Building Bigots


Ever met Microsoft’s friend Tay? For your sake, I hope not.

In 2016, Microsoft released a chatbot named Tay on Twitter. Tay stands for “Thinking About You,” and the bot was designed to represent a 19-year-old American girl who would learn how to interact with humans through Twitter posts.

The result? Some users taught the bot to interact with others using racist and sexually-charged tweets. In other words, Tay learned to become a bigot.

And Tay is not alone. Tay is part of a class of models known as generative models, models that generate novel outputs from learned inputs. Generative models trained on existing internet data of one flavor or another are demonstrating that they learn bias too. Indeed, GPT-3, the massive language model released by OpenAI, has more recently been caught saying some pretty negative things about Muslims.

And generative models that use the internet are not alone in their bias. Indeed, an increasing number of examples abound that demonstrate how machine learning models are biased in their outputs. Terence Shin covers a few examples that show AI bias in healthcare, crime, and hiring practices.

But understanding how bias leaks into our data science practices is just one side of the coin. In addition, AI itself can further bias us. Indeed, we see information attacks where AI uses people’s own confirmation biases against them to further spread conspiracy theories and misinformation.

And these examples of machines learning to be bigots or further stoking the bigots within us wouldn’t be such a big deal if they were only affecting a few. But they’re not. Andrew Ng once noted that AI was the new electricity, and we are currently witnessing its ubiquity being realized.

What all this means is that AI is scaling our human biases in ways that are becoming increasingly problematic. It is for these reasons that we need to inspire a new generation of data scientists who can address these biases and search for ways to overcome them.

Luckily, the idea that people are biased is not new. In fact, there is an entire academic discipline dedicated to the study of human bias: psychology. A paper by WomeninAI starts the conversation by championing the role of psychology in understanding the full breadth of AI’s potential impacts.

I want to go a step further: social psychologists, specifically, have spent decades researching and understanding human bias. Theories abound that help us to explain how bias arises, what factors exacerbate it, and what the consequences are when it is applied.

And it doesn’t just stop at understanding human bias.

To that end, I propose a Social Psychology of AI and explore potential topics that work to bridge the theories of social psychology with the reality of AI’s applications. It’s time to break down the building of bigots to build up stronger, more ethical machines than the ones our current trajectory is propelling us towards.

Why Do Data Science & AI Need Social Psychology?

When most people think about social psychology, they think about a field that studies the ways in which people influence us in social situations. But social psychology has a much broader definition:

Social psychology is the scientific attempt to understand and explain how the thought, feeling, and behavior of individuals are influenced by the actual, imagined, or implied presence of others (Allport, 1954)

Notice how the definition includes both the imagined and implied presence of others. Armed with this definition, it is difficult to consider how our psychology is not inherently social in the first place. Even when we find ourselves alone, we are still influenced by others.

But AI is not an “other” in the literal sense of what we often mean. It does not live, breathe, or feel. So why then should we assume that similar principles apply when dealing with AI?

Not only do people influence us when they are not physically present, “they” also don’t need to be people. More to the point, we have a strong tendency to attribute agency to inanimate things. How many of you have yelled at your computer for taking too long to load an application? Or maybe you use gendered pronouns to describe your car? Attributing social characteristics to inanimate objects is so commonplace that many psychologists argue it is natural.

There is a pretty good explanation for this tendency too. From an evolutionary perspective, it is far safer for us to assume that the rustling in the bushes is due to the presence of a bear or some other threatening, agentic being, than to assume it’s the wind. The former assumption ensures our survival: mistaking the wind for a bear costs us little, while mistaking a bear for the wind can cost us everything.

Because the whole point behind AI technology is to build human-like intelligence into applications and physical objects, those same technologies are even more likely to be doing things that “fool” us into thinking they are alive. Thus, we need a social psychology of AI because AI is expanding our social world in new and complex ways.

What Should a Social Psychology of AI Cover?

If you’re still not convinced, perhaps laying out some ideas on content will help to further demonstrate how social psychology can contribute to our understanding of evolving AI technologies. In what follows, I lay out some initial ideas organized around two major themes.

The first theme is the more obvious theme and focuses on topics addressing how our social psychology impacts the way in which we build AI technologies. This theme specifically tackles the problems of human bias and how that bias is being scaled by AI technologies. It also includes addressing how social psychological theory may inform how we go about building new capabilities into our AI.

The second theme is less obvious but still just as important given the increasing integration of AI into our daily lives. This theme focuses on core topics in social psychology that tackle how AI may affect our own thoughts, feelings, and behaviors. In other words, it is not just that we are building bigots as we fail to understand how our human biases enter AI solutions, but those same solutions interact with other humans and should thus be expected to also affect those people.

Image by author

Theme 1: How Our Social Psychology Affects AI

Topic 1: Model Bias

The examples that launched this article make it clear that our AI models are learning our human biases. Social psychologists are trained to understand how human activities, like data collection, are rife with bias. Simply deciding that a problem needs a model in the first place is a form of human bias. Thus, understanding those biases and their potential sources requires more diversity in the field of data science. Social psychologists can help data scientists to understand how our human tendencies can bias the very data that we collect and may also be able to identify strategies for overcoming those biases.

Model bias is a very hot topic in the data science industry right now, and the work of computer scientists like David Sontag at MIT is helping us to understand how to detect such bias mathematically. But these approaches cannot address the fundamental biases that go into the very collection of that data, and so social psychological theory is also needed to capture the severity of the issue more fully.
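To make the idea of detecting bias mathematically concrete, here is a minimal sketch of one of the simplest fairness measures, the demographic parity difference: the gap in positive-prediction rates between two groups. The data, group labels, and hiring scenario below are invented for illustration; real fairness audits use many metrics, not just this one.

```python
# Sketch: one simple, mathematical measure of model bias.
# All data here is hypothetical, for illustration only.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between groups.
    A value near 0 means the model predicts the positive class
    at similar rates for each group -- one narrow notion of fairness."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = sorted(rates)  # compare the two groups in a fixed order
    return rates[a] - rates[b]

# Hypothetical hiring-model outputs: 1 = "interview", 0 = "reject"
preds = [1, 0, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap of 0.5 would be a red flag, but as the paragraph above notes, a clean score on a metric like this says nothing about bias baked into how the data was collected in the first place.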

Relevant social psychological concepts include understanding bias in research methods, theories of prejudice and discrimination, and diversity.

Topic 2: Intergroup Conflict

Building biased models that, once deployed, quickly scale to affect millions of people is often unintentional. Building models to make decisions that harm millions of people is a deliberate application of human bias. And building AI to intentionally harm is not science fiction either; just look at the use of drones in some of our most recent conflicts.

Social psychologists have long studied intergroup conflict and theories such as realistic conflict theory, social dominance theory, and social identity theory can help us to understand the potential impacts of designing AI to help manage social group interactions.

Topic 3: Artificial Social Learning

Moving beyond the current state of AI trends, social psychologists may also be able to make contributions to more future-state AI developments. We already know that researchers have built models based on the fundamental concepts governing classical and operant conditioning with the use of reinforcement learning. Going forward, mathematical advances may be able to learn from social learning theories such as those developed by Albert Bandura, Lev Vygotsky, and Carol Dweck. In other words, what should we consider as we teach machines to learn from other people or other machines through observation? Social psychology can help answer this question.
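The link between reinforcement learning and conditioning mentioned above can be shown in a few lines. Below is a minimal, illustrative sketch of a bandit-style value update: an agent is “conditioned” by reward to prefer one of two actions, loosely analogous to operant conditioning. The task, reward values, and hyperparameters are toy choices of mine, not from the article.

```python
import random

def train(episodes=500, alpha=0.5, epsilon=0.1):
    """Epsilon-greedy value learning on a one-state, two-action task.
    Action 1 is 'reinforced' with reward; action 0 is not."""
    q = [0.0, 0.0]  # learned value estimates for the two actions
    for _ in range(episodes):
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = q.index(max(q))
        reward = 1.0 if a == 1 else 0.0
        q[a] += alpha * (reward - q[a])  # move estimate toward the reward
    return q

random.seed(0)
q = train()
print(q)  # the rewarded action ends up with the higher value estimate
```

Social learning in Bandura’s sense would go beyond this: the agent would update its values by observing another agent being rewarded, without ever taking the action itself — a capability this sketch does not implement.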

Topic 4: Artificial Emotion

Similar to social learning, there is also considerable interest among researchers in understanding how machines can learn to interpret human emotion. In a rudimentary sense, sentiment analysis in the area of natural language processing has worked to develop this capability. More complex systems are also being developed to interpret human emotion from video and voice signals.
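In its most rudimentary form, the sentiment analysis mentioned above can be a simple lexicon lookup. The tiny word lists below are my own illustrative stand-ins; real systems use large curated lexicons or learned models, and they struggle with exactly the contextual cues (norms, group membership, culture) discussed next.

```python
# Toy lexicon-based sentiment scorer -- the simplest form of sentiment analysis.
POSITIVE = {"good", "great", "love", "happy", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "sad", "awful"}

def sentiment(text):
    # Strip basic punctuation so "awful," still matches the lexicon.
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great movie"))  # positive
print(sentiment("what an awful, sad day"))   # negative
```

Notice what this approach cannot do: “great, just great” scores as doubly positive, even though a human reads it as sarcasm — which is precisely where the social context below comes in.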

The social psychology of emotion helps us to appreciate and understand the powerful role that norms, group memberships, and culture all play in our emotional expressions. Therefore, social psychology promises a more complete view of what it means to develop machines that understand human emotion.

Theme 2: How AI Affects Our Social Psychology

But it is not just we who affect how AI develops. AI can also affect how we develop. The increasing presence of AI in our lives makes this less-studied area equally important. And social psychology has a lot to offer in terms of helping us to better understand what those impacts might be.

Topic 1: Existential Threat

One of the most immediate ways I am experiencing AI’s effect on people is as an existential threat. And I am not referring to the type of existential threat portrayed by The Terminator’s Skynet or dire warnings from influencers like Elon Musk pontificating on what AI “could” become. No, I am referring to the more immediate existential threat that everyday people experience when they see AI automate them out of a job.

The threat is real, though maybe over-hyped a bit, and social psychologists may be able to help identify how to manage this threat. For example, terror management theory explains how people respond when faced with threats to their livelihoods. Accordingly, TMT proposes we use both self-esteem and cultural worldviews to protect ourselves from such existential threats. As AI works to automate more people out of the very activities that help to give their lives meaning, we should expect new worldviews to take shape, perhaps even new definitions of what it means to be human.

Topic 2: Artificial Relationships

Existential threats aside, on a more positive note, AI also introduces new ways for humans to form relationships, be it through augmented reality or directly developing relationships with artificial beings. For example, in the movie Her, Joaquin Phoenix plays a man who develops a relationship with an AI voice assistant. Together the two explore boundaries of human emotion that may never have been accessible to the man because of his social anxieties and insecurities.

Social psychologists understand relationships, and theories such as attachment theory, social exchange theory, and interdependence theory each may help us to understand how, why, and whether we should use AI to form relationships.

Topic 3: Emotion

Closely tied to relationships is the concept of emotion. Just as understanding emotion is important for developing AI models that can detect and understand human emotion, so too is it important to understand how AI can also shape our emotions. Recall those AI responses generated by Tay and GPT-3 about disadvantaged social groups. Now consider that there are literally billions of people who represent those social groups and many of them will no doubt read about these experiments and their results. How these individuals perceive these experiments, how they assign attribution for their results, will no doubt affect how they feel about themselves.

Topic 4: Prejudice & Discrimination

And not only do bigoted bots have the potential to hurt people’s feelings, but they also have the potential to further justify the prejudices held by others about members of those groups. Social psychologists can help us to answer questions of the perceived legitimacy of AI outputs.

Moreover, as AI advances, do we as people begin to freely express our own prejudices against them because, after all, they’re just machines? The consequences of freely expressing prejudices, even against inanimate objects, may have far-reaching consequences for how we relate to one another.


To conclude, this article is the beginning. The beginning of a call to action for data scientists and social psychologists to join forces and work together to share knowledge across domains. To tackle some of the most pressing ethical, societal, economic, and environmental challenges we face today in a way that is responsible and ultimately more powerful for the future of AI.

Like engaging content about data science, career growth, or poor business decisions? Join me.


