Dear ML reading group,
This week we will have Deborah presenting the paper "Not So Fair:
The Impact of Presumably Fair Machine Learning Models" (paper here).
The reading group will take place on Wednesday at 2:30pm as
usual, and this time we will meet in person in room 0.01 of
building E1.7 @ UdS.
For those attending remotely, you can find the Zoom link in the reading
group spreadsheet.
Abstract:
When bias mitigation methods are applied to make machine learning models fairer in fairness-related classification settings, there is an assumption that the disadvantaged group should be better off than if no mitigation method had been applied. However, this is a potentially dangerous assumption, because a “fair” model outcome does not automatically imply a positive impact for a disadvantaged individual: they could still be negatively impacted. Modeling and accounting for those impacts is key to ensuring that mitigated models do not unintentionally harm individuals. We investigate whether mitigated models can still negatively impact disadvantaged individuals, and what conditions affect those impacts, in a loan repayment example. Our results show that most mitigated models negatively impact disadvantaged group members in comparison to the unmitigated models. The domain-dependent impacts of model outcomes should help drive future bias mitigation method development.
Cheers,
Adrián