Accounting For Diversity In Automated Gender Recognition Systems

Developments in Artificial Intelligence (AI) promise remarkable progress for many fields in the near future, from AI-powered board directors to AI-enabled life-saving medical treatments, and from virtual companions to self-driving cars. Nevertheless, the introduction and implementation of AI in society raises a variety of ethical, legal, and societal concerns, and within this context there are still many areas with substantial room for improvement. One concrete example: AI systems do not always account for diversity, and this can have a detrimental impact on the lives of many individuals.

In recent years, particular concerns have been raised with regard to so-called Automated Gender Recognition Systems (AGRS), AI systems that predict someone’s gender (and sexual orientation). This technology is believed to be fundamentally flawed when it comes to recognising and categorising human beings in all their diversity, as many of these AI systems work by sorting people into two groups: male or female. Not surprisingly, the consequences for people who do not fit within these binary categories can be severe. Moreover, these algorithms often reinforce outdated stereotypes about race and gender that are harmful to everyone. From a legal perspective, this has led many to wonder how we can best account for diversity in such AI systems, and whether that is possible at all.

(Mis)gendering machines

Organisations worldwide make use of inferential data analytics methods to guess user characteristics and preferences, including sensitive attributes such as gender and sexual orientation. Areas in which algorithmic gender classification is being applied include human-computer interaction, the security and surveillance industry, law enforcement, psychiatry, demographic research, education, commercial development, telecommunications, mobile applications, and video games.

While Automated Gender Recognition (AGR) is not something most people have heard of, it is remarkably common. For instance, top tech companies have already invested in technology that tags pictures of faces fed to their AI systems with binary labels such as ‘male’ and ‘female,’ along with other characteristics, such as whether the subjects are wearing glasses or makeup. A subsidiary technology to facial recognition, AGR aims to algorithmically identify the gender of individuals from photographs or videos; it originated in academic research in the late 1980s, and from the start it carried a distinctly dystopian vision of the future it was helping to create. Gender Classification Systems (GCS) are trained on a dataset of structured, labeled data. These labels categorise the data, and the features within it, as either masculine or feminine, and depending on the application and dataset, vision-based methods or biological-information-based methods may be used to make such inferences.
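To make the training step concrete, the sketch below reduces a GCS to its simplest possible form: a nearest-centroid classifier over numeric feature vectors. This is not any vendor's actual system; the features and labels are entirely hypothetical stand-ins for measurements a vision-based pipeline might extract from a face image.

```python
# Toy nearest-centroid classifier: the kind of binary mapping a GCS
# performs, stripped to its essentials. Feature vectors stand in for
# (hypothetical) measurements extracted from an image.

def train(samples):
    """samples: list of (features, label) pairs, label in {'male', 'female'}.
    Returns the per-label centroid (mean) of the feature vectors."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in vec] for lab, vec in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest. Note that every input
    is forced into one of the trained categories, however poorly it fits."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))
```

The detail worth noticing is that `predict` can only ever return one of the labels seen at training time: the binary forcing the article criticises is baked into the model's output space, not merely into its mistakes.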

But there is a fatal flaw: the way an AI system sees gender does not always match the way a human sees it. AGRS generally rely on a narrow and outmoded understanding of gender. As a result, those who do not easily fit into the system’s understanding of what gender is, like many trans and non-binary individuals, risk being misgendered. The problem is thus not merely that these systems fail to acknowledge the existence of different gender communities; oftentimes they literally cannot recognise them.

Diversity and inclusion gap in AGRS

The implications and deep-rootedness of gender and diversity considerations in practices and structural systems are to this day largely disregarded in the development of algorithms. For instance, AGRS clash with the idea that gender is subjective and internal, often producing misgendering outcomes that may have further adverse effects for large parts of the population, especially the transgender, intersex, and non-binary community. Questions about the societal consequences of missing the gender and sex dimension in algorithms are particularly poorly understood and often underestimated, especially where decisions affect our lives significantly.

In this respect, Dr. Eduard Fosch-Villaronga, assistant professor at the eLaw Center for Law and Digital Technologies at Leiden University, The Netherlands, stresses that the global landscape of AI ethics guidelines does not seem to provide adequate guidance in addressing the potential implications of missing gender and inclusivity considerations in AI development, although these may significantly impact society. He adds that while different communities focus on diversity and inclusion in AI environments, this investigation is still scattered and small compared to other research strands, such as those focused on safety and data protection. Moreover, it remains unclear how this research informs global governance efforts that aim to frame these rapid developments adequately. An example can be found in the recent draft law on the regulation of AI in Europe: while the proposal includes rights safeguards and curbs on the use of certain AI technologies in certain contexts, such as facial recognition systems, it does not ban systems that detect gender, sexuality, race, or disability altogether.

While scientific research is increasingly taking gender and sex into account because doing so makes for better science, research in the field of queer media studies stresses that ‘sex’, ‘gender’, and ‘sexuality’ are often confused and used in overlapping ways by both laypeople and experts. Many distinct but interrelated concepts are at play here, including gender, attraction, sex, and expression. These definitions are socially constructed through societal demands and norms, and modern society continues to challenge the interplay between them. As such, within the intricate relations between sex, gender, and sexuality, there are many different ways in which nation-states, political bodies, and large companies and corporations understand, accept, legalise, and include diverse societal groups.

From a computer science perspective, AGRS usually take sex as a basic point of reference. To infer gender, GCS use gender-stereotyped features, such as body movements, physiological and behavioural characteristics, facial features, and language use, and fit newly observed features in the input data, such as gait, hand shape, or sentiment, into either a masculine or feminine category using a trained classifier. However, classifiers trained on real-world datasets are often biased because the training data itself is biased, containing racial and gender stereotypes. For instance, female names are more likely to be associated with family than career words, and with arts more than mathematics and science. Some authors report that the verb ‘cooking’ was heavily biased towards females in a classifier trained on the imSitu dataset, amplifying existing gender biases. The same gender biases have been shown in natural language processing, another method used to support gender classifiers. If not addressed carefully, these gender biases in the offline world may propagate into AI.
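The association bias described above (female names sitting closer to ‘family’ than ‘career’ words) can be quantified with a simple cosine-similarity test over word vectors, in the style of the word-embedding association test. The two-dimensional vectors below are made up purely for illustration; real embeddings have hundreds of dimensions and are learned from text corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    A positive score means the word leans towards A."""
    mean_a = sum(cosine(word_vec, v) for v in attr_a) / len(attr_a)
    mean_b = sum(cosine(word_vec, v) for v in attr_b) / len(attr_b)
    return mean_a - mean_b

# Hypothetical 2-d "embeddings": axis 0 ~ career contexts, axis 1 ~ family contexts.
career = [[1.0, 0.1], [0.9, 0.2]]
family = [[0.1, 1.0], [0.2, 0.9]]
name_vec = [0.2, 0.8]  # a name whose training data skewed towards family contexts

bias = association(name_vec, family, career)  # positive: leans towards "family"
```

A classifier built on top of such vectors inherits the skew: the name above would be pulled towards family-related predictions not because of anything about the person, but because of the texts the embedding was trained on.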

The broader impacts & implications

Misgendering users via AGRS has broader, adverse implications: these AI systems reinforce gender binarism, undermine autonomy, serve as a tool for surveillance, and threaten safety. Misgendering is particularly problematic for communities that have been historically discriminated against, and for communities for which gender is a sensitive part of their identity. It reinforces the idea that society does not consider a person’s gender real, causing rejection, harming self-esteem, confidence, and felt authenticity, and increasing one’s perception of being socially stigmatised. For instance, consider Giggle, the “girls only” social networking app. To enforce its girls-only policy, the company demands that users upload a selfie to register, after which Giggle uses third-party facial recognition technology to, as it claims, ‘determine the likelihood of the subject being female at a stated confidence level’. Be that as it may, numerous studies and audits have shown that facial-recognition-based AGR technology is inaccurate for many people. Moreover, when the tools used to extract patterns and profiles from data are not transparent, it may be hard for people to contest the resulting decisions, which may impede their freedom and autonomy. On top of that, if sensitive attributes, such as sexual orientation, ethnicity, religion, or trade union membership, are used for decision-making, this may result in discrimination, also from a legal perspective.

More specifically, from a legal perspective, the implications of gender inferences in AI can be assessed against a number of legal fields, including anti-discrimination law and (EU) data protection law.

Anti-discrimination law

Two international human rights treaties include explicit obligations relating to harmful and wrongful stereotyping (mainly Art. 5 of the Convention on the Elimination of All Forms of Discrimination against Women and Art. 8(1)(b) of the Convention on the Rights of Persons with Disabilities). Although states are usually the primary addressees of human rights treaties, the United Nations Human Rights Council has shown growing attention to the responsibility that corporations, sectors, and industries worldwide have for respecting human rights. Still, these stereotypes persist online and offline, as if the relevant bodies failed to understand, or deliberately chose to ignore, that gender is not merely being a ‘man’ or a ‘woman,’ but a social construct.

EU data protection law

In the EU, the collection and processing of personal data is protected under the General Data Protection Regulation (GDPR), which also addresses discrimination issues in datasets (see Recital 71 of the GDPR). However, scholars note that information about a person’s gender, age, financial situation, geolocation, and online profiles does not qualify as sensitive data under Article 9 of the GDPR, despite often being grounds for discrimination. Such discrimination can be either direct or indirect (i.e., by proxy). Because direct discrimination in data is already hard to detect, and indirect discrimination harder still, it can be difficult to enforce equal treatment acts and data protection legislation.

It is because of all the above that Daniel Leufer, a policy analyst at digital rights group Access Now, has stressed that AGR technologies are incompatible with the EU’s commitment to human rights. Access Now, along with more than 60 other NGOs, has sent a letter to the European Commission asking it to ban this technology. The campaign, which is supported by international LGBT+ advocacy group All Out, comes as the EU is considering new EU-wide regulations for AI. According to Leufer, this means that “there’s a unique moment right now with this legislation in the EU where we can call for major red lines, and we’re taking the opportunity to do that.”

Accounting for diversity in automated gender recognition systems

But is a ban on AGR technologies really the way forward? There are many examples of technology being proposed to solve inadequate engineering practice, government policy failures, or the outcomes of modern consumerism, showing how technological fixes have cultural, ethical, legal, and political implications. AGRS may offer a convenient way to recognise gender automatically for many applications, but these systems misgender users. Some researchers work on tools to counter gender bias, but despite excellent intentions these propositions miss the most fundamental aspect of gender: that gender is subjective, and cannot be objectively recognised. In Johnston’s words, “modern problems cannot be reduced to mere engineering solutions over the long term; human goals are diverse and constantly changing.” From a policy perspective we can think about ways to mitigate the risks that misgendering poses to society, but, as Dr. Eduard Fosch-Villaronga stresses, we also need to ask whether we want to live in a society governed by algorithms that may work for the vast majority of the population but risk excluding very vital parts of that same population.

Accounting for diversity and inclusion earlier on in, for instance, gender-targeted advertising or content-suggestion ecosystems could reduce bias in other systems that use GCS. Having a GCS that accounts for inclusion could help reduce bias in the systems into which its gender inferences flow. However, the problem remains that by accounting for diversity, all we are really doing is refining these algorithms even further. As a society, then, what we really need to think about is when and where we need to stop. Since technology has amazing potential for society, we should not be negative, but we should think about the priorities that serve humanity best, and about how we can focus on developing technology that can really help us and make society a better place.

Want to learn more about the impact of automated gender recognition systems? Take a listen to The Law of Tech podcast episode with Eduard Fosch-Villaronga. In this episode, Eduard discusses the diversity and inclusion gap in AI, the inner workings of gendering algorithms, the broader impacts and implications of automated gender recognition systems, and the need to account for diversity in such systems.
