NIST proposal to identify and manage bias in artificial intelligence




by Raffaella Aghemo, Lawyer

NIST, the National Institute of Standards and Technology, has recently opened a major public consultation on the long-standing problem of bias in algorithmic systems, with a publication that is part of a series of papers aimed at fostering trustworthy Artificial Intelligence.

This recent publication, titled “A Proposal for Identifying and Managing Bias in Artificial Intelligence” and authored by Reva Schwartz, Leann Down, Adam Jonas and Elham Tabassi, is designated NIST Special Publication 1270 and is open for public comments to be submitted by the coming September 1.

NIST, which has the task of stimulating commercial and economic development in the United States, wants to take a closer look at algorithmic systems: they represent a fast-growing technology in the country, but much work is still needed to curb racist and discriminatory drifts that could undermine consumer confidence in new products and technologies. Indeed, the document reads: “This report proposes a strategy for dealing with AI bias and describes the types of biases that can be found in standard artificial intelligence technologies and systems, and a risk-based framework for trustworthy and responsible AI.”

The report adds an important point when it states: “Not all types of bias are bad and there are many ways to classify or manage bias; this report focuses on biases in artificial intelligence systems that can lead to harmful social outcomes.”

It continues: “These harmful biases affect people’s lives in a variety of contexts, causing disparate impact and discriminatory or unfair outcomes. The presumption is that bias is present in all AI systems; the challenge is to identify, measure and manage it. Current approaches tend to classify bias by type (e.g., statistical, cognitive) or by use case and industry sector (e.g., hiring, healthcare, etc.) and may not be able to provide the broad perspective required to effectively manage bias as the context-specific phenomenon that it is.”

This report also points out that the difficulty in characterising and managing AI bias is exemplified by systems built to model concepts that are only partially observable or capturable from data. Without direct measures for these concepts, AI development teams often use proxies and indices in their highly complex calculations. For example, for ‘crime’, a measurable index, or construct, could be created from other information, such as past arrests, age and region. For ‘suitability for employment’, an artificial intelligence algorithm might rely on time spent in previous employment, previous salary levels, level of education, participation in certain sports or distance to the workplace (which might disadvantage applicants from certain neighbourhoods). It follows that proxies used in development can be ill-suited to the concept or characteristic being measured, or can reveal unintended information about individuals and groups.
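To make the proxy problem concrete, a minimal sketch in Python follows (not drawn from the NIST report, and using purely synthetic data): a ‘distance to the workplace’ feature is generated so that it correlates with a hypothetical protected group, and a screening rule that never looks at the protected attribute still shortlists the two groups at very different rates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute: membership in a group that, in this
# synthetic data, tends to live farther from the employer.
group = rng.integers(0, 2, size=n)

# Proxy feature: commuting distance in km, constructed so that it is
# partly driven by group membership (mimicking residential segregation).
distance_km = rng.normal(loc=10 + 8 * group, scale=4, size=n)

# A naive screening rule that never "sees" the protected attribute ...
shortlisted = distance_km < 14

# ... still produces very different shortlisting rates per group.
for g in (0, 1):
    rate = shortlisted[group == g].mean()
    print(f"group {g}: shortlisting rate = {rate:.2f}")
```

The rule itself is neutral on paper; the disparity comes entirely from the fact that the proxy encodes group membership, which is exactly the risk the report describes.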

Another cause of distrust is a whole class of untested and/or unreliable algorithms used in decision-making contexts. Often a technology is not tested, or not thoroughly tested, before implementation; instead, the deployment itself ends up serving as the test of the technology. An example is the rush to implement systems during the COVID-19 pandemic, systems that turned out to be methodologically flawed and biased.

Thus, the major causes of distrust of AI systems can be traced to three macro categories:

– The use of data sets and/or practices that are inherently biased and historically contribute to negative impacts

– Automation based on these biases placed in contexts that can affect people’s lives, with little or no control

– The implementation of technology that is not fully tested, potentially oversold or based on questionable or non-existent scientific data, leading to harmful and biased outcomes.

More attention is needed, not to achieve ‘zero risk’, but to manage and reduce bias in a way that contributes to fairer outcomes and generates public confidence.

“Taking social factors into account is necessary to achieve reliable AI and can enable a broader understanding of the impacts of AI and key decisions that occur during and beyond the AI lifecycle.”

To address this issue, NIST proposes a three-step approach that mirrors the three phases of the AI life cycle:

– pre-design (where the technology is conceived and defined),

– design and development (where the technology is built), and

– implementation (where the technology is used by, or applied to, various individuals or groups).

In the first phase, planning, problem specification, basic research, and data identification and quantification take place; decisions here include how to frame the problem, the purpose of the AI component, and the general notion that there is a problem that requires or benefits from a technological solution. At this stage a small group controls the purpose of the technology, and there is therefore a risk that its limited ideologies and views will be reflected in the purpose of the system. NIST here recommends involving more diverse stakeholders who can broaden the point of view and vision at this early stage.

The second stage of the AI life cycle is where modelling, engineering and validation take place. Stakeholders at this stage tend to include software designers, engineers and data scientists, who apply risk management techniques in the form of algorithmic auditing and advanced metrics for validation and evaluation. Selecting models based solely on accuracy is not necessarily the best approach to bias reduction. The effort to eliminate bias is captured in a passage that reads: “For example, ‘effective cultural challenge’ is a practice that seeks to create an environment in which technology developers can actively challenge and question steps in modelling and engineering to help eradicate statistical biases and prejudices inherent in human decision-making.”
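To illustrate why accuracy alone is a poor selection criterion, here is a minimal, hypothetical Python sketch (again synthetic, and not taken from the report): two candidate sets of predictions are compared on accuracy and on a simple demographic-parity-style gap in selection rates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Synthetic validation set: binary labels and a binary group attribute.
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)

def selection_rate_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups
    (a simple demographic-parity style check)."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Candidate A: very accurate overall, but it wrongly rejects 30% of the
# true positives in group 1, so its errors fall on one group only.
reject = (group == 1) & (y_true == 1) & (rng.random(n) < 0.30)
pred_a = np.where(reject, 0, y_true)

# Candidate B: slightly less accurate, but its 10% error rate is spread
# evenly across both groups.
flip = rng.random(n) < 0.10
pred_b = np.where(flip, 1 - y_true, y_true)

for name, pred in [("A", pred_a), ("B", pred_b)]:
    acc = (pred == y_true).mean()
    gap = selection_rate_gap(pred, group)
    print(f"model {name}: accuracy={acc:.3f}  selection-rate gap={gap:.3f}")
```

In this toy setup, choosing on accuracy alone would favour candidate A, whose errors fall entirely on one group; surfacing such trade-offs is what the algorithmic auditing and evaluation metrics of this phase are meant to do.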

The third phase, implementation, is where the theoretical phase and the practical, real-world phase meet. ‘Distance’ from the technology can also contribute to different kinds of performance gaps. There are gaps in intention: gaps between what was originally envisaged in the pre-design and what was actually developed, and between the AI product and how it is deployed. There are also performance gaps that stem from these gaps in intention. A key problem is finding a configuration that allows a system to be used in a way that optimally leverages, rather than replaces, the user experience. This is often a significant challenge, as domain experts and AI developers often lack a common vernacular, which can contribute to communication problems and misunderstood capabilities.

Another important gap that contributes to bias concerns the differences in interpretability requirements between users and developers. As discussed above, the groups that invent and produce a technology have specific intentions for its use and are unlikely to be aware of all the ways in which a given tool will be repurposed.

The report concludes that:

– bias is not exclusive to AI;

– the objective is not zero risk but bias management;

– bias reduction techniques are needed that are flexible and can be applied in all contexts, regardless of sector;

– NIST plans to develop a framework for trustworthy and responsible AI, with the participation of a wide range of stakeholders, to ensure that standards and practices reflect views not traditionally included in AI development.

The conclusion, then, points to assessing possible errors and distortions already in the design and training phase, rather than waiting for ‘field deployment’ to take action, as the latter could lead to higher costs to restore a ‘fair service’. The important thing is to employ experts from various fields, including the legal field, who always keep the purpose of the system in mind and can follow and track its transparent and reliable use.

All Rights reserved

Raffaella Aghemo, Lawyer
