Priberam

Seminars

FairGBM: Gradient Boosting with Fairness Constraints

Tabular data is prevalent in many high-stakes domains, from financial services to public policy. In these settings, Gradient Boosted Machines (GBMs) remain the state of the art.

However, existing in-training fairness interventions are either incompatible with GBMs, or incur significant performance losses while taking considerably longer to train.

We present FairGBM, a framework for training GBMs under fairness constraints, with little to no impact on predictive performance.

We validate our method on five large-scale public datasets, as well as a real-world case study of account opening fraud.

Our open-source implementation shows an order of magnitude speedup in training time when compared with related work.

https://github.com/feedzai/fairgbm
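To make the constrained-training idea concrete, here is a minimal, self-contained sketch of the general recipe: gradient descent on the model parameters combined with ascent on a Lagrange multiplier for a differentiable fairness proxy. It deliberately uses plain logistic regression and NumPy rather than a GBM, with a one-sided proxy false-positive-rate constraint; all names and hyperparameters are illustrative assumptions, not FairGBM's actual implementation (for that, see the repository above).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two features, a binary sensitive group s, and a label y.
n = 2000
s = rng.integers(0, 2, n)                       # group membership (0 or 1)
x = rng.normal(0, 1, (n, 2)) + s[:, None] * 0.8  # group-1 features shifted up
y = (x[:, 0] + 0.5 * rng.normal(0, 1, n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)        # model parameters (logistic regression stand-in for a GBM)
lam = 0.0              # Lagrange multiplier for the fairness constraint
lr_w, lr_lam = 0.1, 0.05

for _ in range(500):
    p = sigmoid(x @ w)
    neg = y == 0

    # Differentiable proxy for FPR per group: mean score on true negatives.
    fpr0 = p[neg & (s == 0)].mean()
    fpr1 = p[neg & (s == 1)].mean()
    violation = fpr1 - fpr0        # one-sided constraint: fpr1 <= fpr0

    # Gradient of the binary cross-entropy loss w.r.t. w.
    grad_bce = x.T @ (p - y) / n

    # Gradient of the proxy constraint w.r.t. w.
    dsig = p * (1 - p)
    m1, m0 = neg & (s == 1), neg & (s == 0)
    grad_con = (x[m1].T @ dsig[m1]) / m1.sum() - (x[m0].T @ dsig[m0]) / m0.sum()

    w -= lr_w * (grad_bce + lam * grad_con)   # descent on the model
    lam = max(0.0, lam + lr_lam * violation)  # ascent on the multiplier
```

The key design choice mirrored here is replacing the non-differentiable rate constraint with a smooth surrogate, so the constrained problem can be solved with ordinary first-order updates inside the training loop.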

André Cruz

André Cruz holds a Computer Science MSc from FEUP and is currently a PhD student at the Max Planck Institute for Intelligent Systems, in Germany. André's current research focus is on human-ML collaboration and the feedback loops between deployed ML systems and society at large. In the two years prior, André worked at Feedzai as part of the FATE AI research group (Fairness, Accountability, Transparency, and Ethics in AI).