Data Bias Identification and Mitigation: Methods and Practice

By Andrea Gao, Maximiliano Santinelli, and Steve Mills

This is the second article in a series on data bias identification and mitigation. For additional background, please refer to the first article, Getting to the Root of Data Bias in AI.

In our first article on data bias, we noted that bias can appear at any stage in the dataset lifecycle — during creation, design, sampling, collection, and processing. We also provided a few general ways to reduce such bias. In this article, we explore in greater depth a variety of techniques that can be applied to mitigate historical data bias. In the spirit of going beyond principles and theories to make Responsible AI “real,” we describe a general framework that can be used to identify and mitigate data bias. As a practical use case, we walk through a facial recognition example to demonstrate how the framework can be applied.

A Brief History of Automated Facial Recognition

Facial recognition has followed the same development pattern as most AI techniques: theoretical foundations date back to the 1960s, while at-scale applications became feasible in the 2010s with the growing availability of digital data, advanced algorithms, and computational power. Currently, facial recognition is widely used in public places, on social media, by virtual-conference software, in law enforcement applications, and in smartphones (almost every smartphone produced in recent years includes some facial recognition software). This technology has brought convenience and security to users, companies, and citizens — but it also raises a number of serious ethical concerns. For example, when providing virtual background services, video-conferencing applications have failed to distinguish people of color from their visual backgrounds. In other cases, digital cameras with automatic face-recognition features have indicated that Asian users are blinking when, in fact, their eyes are wide open.

Most of the biases in facial recognition applications are actually the result of biases in the historical data used for training models. The negative social impact of these biases can be reduced if biased data is identified and mitigated during data collection, model development, and production deployment. The four steps below lay out our data bias identification and mitigation framework. Please note that it is important to consider the algorithm, business processes, and broader context in which technology is applied but, for the purposes of this post, we limit the discussion to data bias only.

1. Know the Protected Groups in Your AI System

AI system bias often manifests in outcomes that are unfair to specific groups. That is why, when considering potential bias while designing an AI system, a data science team’s first task is to identify the protected group or groups with respect to the proposed AI system. Generally speaking, a protected group is any group of people who qualify for special protection according to a law, policy, or similar authority. More broadly, it can apply to any subset of the population distinguished by one or more characteristics. This term is frequently used in connection with decisions that pertain to financially related issues such as employment, credit, or income and benefits distribution. Any subgroup that might be treated differently or unfairly could be designated as a protected group of an AI system.

In the United States, protected groups are, by law, defined by the following characteristics: race, color, religion, sex (including pregnancy status, sexual orientation, or gender identity), national origin, age (40 or older), disability, and genetic information (including family medical history). The EU has a similar definition of protected groups.

Different countries and regions might have different definitions of protected groups. We, as AI practitioners, should always be aware of the legal and regulatory climate in which we work, as well as how creating AI products within these systems might impact relevant protected groups. Furthermore, we must always be aware of population subgroups that, while not falling into a legally defined protected group, have historically been subject to unfair or biased treatment that may manifest differently within our data sets. To that end we recommend that, during the design phase and throughout the entire development lifecycle of an AI system, designers, AI practitioners, and business owners consider:

  1. Protected groups: What legally protected groups or other subgroups might be vulnerable to unfair outcomes in the AI system? Example: In a facial recognition system, people of color and non-binary or transgender people could both receive unequal treatment (e.g., lower accuracy relative to other groups) and could therefore be designated as protected groups.
  2. Biases: What biases or unfair treatment against protected groups might be brought into the AI system during its lifecycle? Example: A facial recognition system’s accuracy or false-positive rate could be lower among members of protected groups if members of those groups are under-represented in human face databases used for model training and validation. Depending on the application of the system, this could lead to disparate impacts when such facial recognition systems are used.
  3. AI-system lifecycle: Where could bias be introduced across the lifecycle of the AI system? Example: For facial recognition systems, bias may be introduced by factors such as a biased training dataset, models that are not well tuned and produce higher false-positive rates, data-drift issues that arise when fewer users from the protected groups are included in the training data, or even bias among system users that results in facial images from protected groups being submitted and processed more frequently.
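To make these considerations actionable, teams sometimes capture them in a lightweight, machine-readable register that travels with the project and is revisited at each lifecycle stage. The sketch below is a hypothetical illustration only; the attribute names, group labels, and metrics are placeholders, not a prescribed standard.

```python
# Hypothetical register of protected groups and bias risks identified at design
# time for a facial recognition system. All names and values are illustrative.
protected_group_register = {
    "skin_tone": {
        "groups": ["lighter", "darker"],
        "risks": [
            "under-representation in training images",
            "lower-quality captures under default camera settings",
        ],
        "fairness_metrics": ["accuracy", "false_positive_rate"],
    },
    "gender": {
        "groups": ["female", "male", "non-binary"],
        "risks": ["under-representation in training images"],
        "fairness_metrics": ["accuracy", "false_positive_rate"],
    },
}

# Lifecycle stages at which each risk should be re-checked.
lifecycle_checkpoints = ["data collection", "model training", "production monitoring"]
```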

2. Data Collection

Facial recognition technology emerged in the late 20th century. Today, as information-processing power continues to grow, well-architected Deep Learning models enable modern facial recognition technology to learn from image datasets that contain millions of faces. The image dataset is essentially the ground truth and the main ingredient of these Deep Learning models.

The problem, however, is that most facial recognition datasets are crowd-sourced and, thus, have inherent data bias. For example, IMDb-Face¹ and Labeled Faces in the Wild (LFW)² are two face-image datasets widely used to train commercial and academic models. But there are clear representation issues with these datasets: The IMDb-Face dataset has been estimated to be 55% male, while the LFW database is approximately 77.5% male and 83.5% White. Clearly, protected groups such as women and people of color are under-represented in these datasets. As we mentioned in our previous article, biased training datasets like these can lead to an AI system producing unfair outcomes.

Ensuring that the training dataset accurately represents an AI system’s target population is critical for mitigating biases against protected groups. We recommend a two-step process to achieve such accuracy:

  1. Check whether the protected groups that could be impacted by the AI system are well represented in the dataset. A protected group can be considered “well represented” if a model trained on the dataset learns adequate patterns related to that group. Generally, AI practitioners can use statistical testing for this purpose. Depending on the target variable and protected groups, common testing methods include the chi-square test, z-test, one-way ANOVA, and two-way ANOVA (a minimal sketch of such a check appears after this list). Example: If a facial recognition system will be used in a railway station for security checks, the training dataset should represent the protected groups well enough for the model to learn sufficient information about them. The number of pictures of the protected groups in the dataset should be sufficient for the model to achieve a false-positive rate for protected groups that is similar to that for the rest of the population. Otherwise, the facial recognition system will perform less accurately on those groups.
  2. Compare the data quality for the protected groups with that of the rest of the population. The quality of data related to a protected group might differ from that of the rest of the population, which can also lead to disparate impacts. Example: Default camera settings have historically been designed to capture people with lighter skin. In human face databases, these settings often lead to low-quality images of people with darker skin tones. Once again, low-quality data will bring bias into a model and distort its results.

If either check reveals a problem, there are two corresponding remedies:

  1. If the dataset does not accurately represent the target population, data bias can be mitigated by adding representative, high-quality data points for protected groups. Adding more informative and representative data for protected groups can help ensure equal treatment by the algorithms (e.g., equal false-positive rates). Various methods are available to generate incremental data, including data resampling, data augmentation, and collection of additional data.
  2. If the data quality associated with a protected group differs from that of the rest of the population, then select data — or collect more data — that meets minimum quality standards. Example: Data quality can be ensured by selecting high-quality images of people with darker skin or by applying image-augmentation methods.
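To make the representation check in step 1 concrete, the sketch below compares a training dataset’s group composition against an assumed target-population composition using a chi-square goodness-of-fit test from SciPy. The counts and population shares are hypothetical; in practice they would come from the dataset itself and from census or deployment-population statistics.

```python
from scipy.stats import chisquare

# Hypothetical counts of face images per group in the training dataset.
observed_counts = {"lighter_male": 52_000, "lighter_female": 23_000,
                   "darker_male": 17_000, "darker_female": 8_000}

# Assumed share of each group in the population the system will serve.
target_shares = {"lighter_male": 0.30, "lighter_female": 0.30,
                 "darker_male": 0.20, "darker_female": 0.20}

groups = list(observed_counts)
observed = [observed_counts[g] for g in groups]
total = sum(observed)
expected = [target_shares[g] * total for g in groups]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("Dataset composition differs significantly from the target population; "
          "consider resampling, augmentation, or additional data collection.")
```

A significant result only flags a mismatch; choosing among the remedies above (resampling, augmentation, or additional collection) is still a judgment call based on how far each group’s share falls below its target.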

If there is no easy way to generate more representative data or augment data quality, AI practitioners need to mitigate the issues as part of the model training process, as described in the next section.

3. Model Training

Before AI practitioners start building models, they should have carefully explored and examined the dataset they will use for model training. Similarly, the business stakeholders and data scientists should have identified protected groups and addressed any data bias or quality issues that could impact these groups. If the data scientists are not able to address data bias in the dataset itself, they need to mitigate the bias during modeling or in downstream steps.

When designing the model, the development team should establish fairness metrics for the AI system, based on the location and business context in which the model will be used. In the railway-station security-check example noted above, the development team should use accuracy and false-positive rate as fairness metrics: the facial recognition model should achieve similar accuracy and false-positive rates when identifying people’s faces across different groups.
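As a minimal sketch of what tracking these metrics can look like, the snippet below computes accuracy and false-positive rate separately for each group from a model’s binary match/no-match decisions. The arrays are placeholders; in a real evaluation they would come from a held-out test set that includes group labels.

```python
import numpy as np

def per_group_metrics(y_true, y_pred, groups):
    """Accuracy and false-positive rate for each group."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        accuracy = np.mean(t == p)
        negatives = (t == 0)                      # true non-matches in this group
        fpr = np.mean(p[negatives] == 1) if negatives.any() else float("nan")
        results[g] = {"accuracy": round(float(accuracy), 3),
                      "false_positive_rate": round(float(fpr), 3)}
    return results

# Hypothetical evaluation data: 1 = faces match, 0 = no match.
y_true = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 0])
groups = np.array(["group_a"] * 5 + ["group_b"] * 5)
print(per_group_metrics(y_true, y_pred, groups))
```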

During training and testing, the development team should examine the fairness metrics as carefully as they analyze overall model accuracy. They should compute the fairness metrics for each subgroup and assess whether the model is biased against any group. From a statistical standpoint, the facial recognition accuracy should not be significantly different across groups; this can be evaluated with a z-test when the sample size is greater than 50 (a minimal sketch follows the list below). For example, after evaluating a number of well-known commercial facial recognition products, researchers found that:

  1. All classifiers perform better on male faces than female faces (8.1% — 20.6% difference in error rate).
  2. All classifiers perform better on lighter faces than darker faces (11.8% — 19.2% difference in error rate).
  3. All classifiers perform worse on darker female faces (20.8% — 34.7% error rate).
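To judge whether an observed gap like those above is statistically significant, a two-proportion z-test on the error counts can be used, as noted above. The sketch below relies on statsmodels; the error counts are illustrative and are not taken from the cited study.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical misidentification counts and sample sizes for two groups.
errors = [310, 95]       # errors for group A and group B
trials = [1000, 1000]    # faces evaluated per group

z_stat, p_value = proportions_ztest(count=errors, nobs=trials)
print(f"error rates: {errors[0] / trials[0]:.1%} vs {errors[1] / trials[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("The difference in error rates between the two groups is statistically significant.")
```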

Using dedicated external benchmark data to evaluate model fairness can also help identify potential bias. Most training and test sets are not designed with a focus on bias detection, although bias-focused benchmarking datasets have recently been introduced. Facebook, for example, has built an open-source human-face dataset called Casual Conversations. This dataset, collected from paid actors, is designed to balance the distribution of the actors’ gender, age, and skin tone, as well as lighting conditions. Casual Conversations is a good benchmark dataset for evaluating the fairness of facial recognition and other computer-vision-based AI systems in the U.S.

If unfair outcomes for any protected group are found, it means that there is either unidentified bias in the dataset or that additional steps are needed to mitigate bias introduced by the model. Assuming that the data science team already exhausted options to identify and mitigate bias in the dataset, they should investigate whether they could mitigate the bias in the model output by:

  1. Using bias-aware algorithms: Various fairness-aware algorithms have been published in recent years. The majority of them use regularization or adversarial learning to mitigate bias. For tree-based models, tree-split criteria can be adjusted to reduce bias³ during the modeling process. For gradient-based models, adversarial learning can be used to minimize bias against protected groups⁴. In practice, the development team has to find a suitable adjusted algorithm for its AI system, since no universal method is available to adjust every AI model.
  2. Fine-tuning the decision boundary: Without changing the algorithm itself, data scientists can also mitigate bias by fine-tuning decision boundaries for different groups. This method can be applied to mitigate bias in any score-based model, without affecting model specifications. For example, if an AI system outputs a score for each individual and then uses that score to make a decision, the development team might customize decision boundaries for different groups, as sketched below, to make sure that the final decision is not biased against the protected groups impacted by the AI system.
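The sketch below illustrates this threshold-tuning approach under simple assumptions: given match scores, true labels, and group membership from a validation set, it picks a per-group score threshold that holds each group’s false-positive rate at roughly a common target. The variable names and the 1% target are illustrative, not a recommendation.

```python
import numpy as np

def per_group_thresholds(scores, y_true, groups, target_fpr=0.01):
    """Pick, for each group, a score threshold with roughly target_fpr on true non-matches."""
    thresholds = {}
    for g in np.unique(groups):
        non_match = (groups == g) & (y_true == 0)    # true non-matches in group g
        negative_scores = scores[non_match]
        # The (1 - target_fpr) quantile of non-match scores: about target_fpr
        # of this group's non-matches score above the chosen threshold.
        thresholds[g] = np.quantile(negative_scores, 1 - target_fpr)
    return thresholds

# Hypothetical validation data.
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, size=2000)
y_true = rng.integers(0, 2, size=2000)
groups = rng.choice(["group_a", "group_b"], size=2000)

print(per_group_thresholds(scores, y_true, groups))
# At inference time, a pair is declared a match only if its score exceeds the
# threshold of the relevant group, which equalizes false-positive rates
# without retraining the model.
```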

4. AI System in Production

Once an AI system is developed and launched into production, the operational and development teams need to collaborate to ensure that bias won’t be introduced and, if it inadvertently is, that it will be quickly identified and mitigated.

In this context, a robust ML Operations platform and governance model ensures not only optimal performance but also control over bias that might emerge during use. Even if a model was tested and deemed fair when it was launched, its accuracy may degrade while in production. Such degradation in performance can have many causes, including data drift.

In modern software development, continuous testing helps locate bugs and prevent software errors in production. Similarly, constantly monitoring data quality and fairness metrics while an AI system is in production can help prevent new biases from being introduced into the system.
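As one minimal sketch of such monitoring, the snippet below compares the distribution of the model’s output scores between a reference window captured at launch and a recent production window, using a two-sample Kolmogorov–Smirnov test from SciPy. In practice this check would run on a schedule, per protected group, alongside the fairness metrics described earlier; the data and alert threshold here are placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder data: model scores captured at launch vs. recent production traffic.
reference_scores = np.random.default_rng(1).normal(0.70, 0.10, size=5000)
recent_scores = np.random.default_rng(2).normal(0.64, 0.12, size=5000)

stat, p_value = ks_2samp(reference_scores, recent_scores)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")
if p_value < 0.01:
    print("Score distribution has drifted; re-check per-group fairness metrics "
          "and consider recalibration or retraining.")
```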

We also need to highlight the importance of keeping humans in the loop. Artificial intelligence has advanced a great deal in the last decade but, ultimately, AI systems are still human-programmed software. Like any other software, an AI system has limitations and may be affected by software errors. The stakeholders (including the AI developers, owners, users, and regulators) should be alert to potential bias and, if any is found, they should act quickly to identify and mitigate it. Stakeholders should also go through periodic training, such as unconscious-bias training and Responsible AI training, to make them fully aware of potential biases and to help them avoid bringing bias into AI systems.

5. Conclusions

The presence of bias in AI system outcomes is an industry-wide concern that reflects the historical biases inherent in all human decisions. As we have discussed, it is possible to mitigate bias and ensure that historical bias is progressively reduced. However, there is no single, perfect solution to eliminate it.

AI practitioners and business decision-makers have the privilege, responsibility, and tools to build AI systems that improve fairness and inclusiveness while still providing a transformational business advantage. Building ethical and socially responsible AI is not easy, but it starts with awareness of its potentially harmful effects, and thoughtful application of appropriate methods and controls to protect those who might otherwise be harmed.

[1] IMDb-Face dataset: https://github.com/fwang91/IMDb-Face

[2] Labeled Faces in the Wild (LFW): http://vis-www.cs.umass.edu/lfw/

[3] Raff, E., Sylvester, J. and Mills, S., 2018, December. Fair forests: Regularized tree induction to minimize model bias. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 243–250).

[4] Zhang, B.H., Lemoine, B. and Mitchell, M., 2018, December. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 335–340).
