Tabular data is prevalent in many high-stakes domains, from financial services to public policy. In these settings, Gradient Boosted Machines (GBM) remain the state of the art.
However, existing in-training fairness interventions either are incompatible with GBMs or incur significant performance losses while taking considerably longer to train.
We present FairGBM, a framework for training GBMs under fairness constraints, with little to no impact on predictive performance.
We validate our method on five large-scale public datasets, as well as on a real-world case study of account-opening fraud.
Our open-source implementation achieves an order-of-magnitude speedup in training time compared with related work.