Auditing Bias of Machine Learning Algorithms: Tools and Overview

Tutorial at IJCAI 2023, Macao

"Regulating Technology" by Christian Gralingen (2020)

Any technology in public use is, at some point, subject to an audit: an attempt to understand the impacts and limitations of using that technology, and what causes them. As Machine Learning (ML) becomes the pervasive technology of our time and the discourse on bias induced by ML systems attracts growing attention, understanding how to audit an ML system is a genuine concern.

In this tutorial, we discuss algorithmic developments and existing software addressing three questions:

  1. How do we choose a fairness metric compatible with the bias encountered in a given application?
  2. How do we quantify the bias in the predictions of an ML system accurately and sample-efficiently?
  3. How do we explain the different sources of this bias, at different levels of granularity?
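To make the first two questions concrete, here is a minimal sketch (in pure Python, on hypothetical toy data) of one widely used group-fairness metric, the statistical parity difference: the gap in positive-prediction rates between two sensitive groups. The function names and data below are illustrative, not the tutorial's own code.

```python
def positive_rate(preds):
    """Fraction of predictions that are positive (label 1)."""
    return sum(preds) / len(preds)

def statistical_parity_difference(preds, groups, a, b):
    """Gap in positive-prediction rates between sensitive groups a and b.

    A value of 0 means the classifier predicts the positive class at the
    same rate for both groups; a larger magnitude indicates more bias.
    """
    preds_a = [p for p, g in zip(preds, groups) if g == a]
    preds_b = [p for p, g in zip(preds, groups) if g == b]
    return positive_rate(preds_a) - positive_rate(preds_b)

# Hypothetical predictions and a hypothetical binary sensitive attribute:
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
print(statistical_parity_difference(preds, groups, "M", "F"))  # 0.5
```

Estimating such a metric on a finite sample, and knowing how many samples are needed for the estimate to be reliable, is exactly the concern of the second question.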

We conclude by working through these questions, and opening them to the audience, via a case study of auditing an ML classifier.


Part 1: Choose your fairness metric (slides, code)
Part 2: Estimate your fairness metric
Part 3: Attribute the fairness metric to features/input
Part 4: A case study and open problems

A summary of the demonstration is available here.


Bishwamittra Ghosh is a PhD candidate in the School of Computing, National University of Singapore. His research interests are fairness and interpretability in machine learning, which he approaches using automated reasoning, formal methods, and statistics. His research has thrived through multiple collaborations and internships in industry and academia.
Debabrota Basu is a researcher (equivalent of Assoc. Prof.) in the Scool team (previously SequeL) of Inria at Université de Lille and CNRS, France. His research interest is developing algorithms and analyses to build theoretically grounded responsible AI systems. Specifically, he studies how to develop robust, private, fair, and explainable algorithms for online learning, bandit, and reinforcement learning problems. He is also interested in developing the statistical theory behind sample-efficient and consistent auditing of ML systems.