> And if those results change, so will the algorithm's outputs! But asking the algorithm to make the change seems to be a bit much.
The problem is that the output of those algorithms is used to drive decision-making that has the effect of maintaining the status quo by removing opportunities to change it.
The real problem here is a deep political schizophrenia in modern society, or at least in parts of it, that demands decisions be deliberately biased towards politically desired outcomes. The same people then turn around and describe results that are not biased as "biased", which is utterly Orwellian.
I think your comment shows that you understand this. You accept that a decision may be correct when measured in totally cold and statistical terms. But such decisions would not "change the status quo", and that would be a problem.
But that position is a deeply political one. Why should decisions at banks, tech firms, or anywhere else be deliberately biased to change the status quo? That is social engineering, a field with a long and terrible track record of catastrophic failure: failure to actually change reality, and failure in terms of the resulting human cost.
Injecting bias into otherwise unbiased decisions by manipulating ML models, or by manipulating people (threatening them if they don't toe the line), is never a good thing.
Maintaining the status quo is also a political position, though. In general, there's simply no way to interact with other people at scale without politics coming into play. It can be inadvertent, in the sense that there was no specific intent at "social engineering", but if one's ethics prioritizes outcomes over intent, it doesn't really matter.
Which loops back around to my original point: the notion that we should alter or influence the algorithm because its output does not match our worldview or politics is not the removal of bias; it is the deliberate injection of our personal biases.
The whole point of using the algorithm was to make sure personal biases aren't impacting the decision. If we're going to alter the algorithm because we don't like the result, why bother using an algorithm in the first place? Just use a human to make the decision. At least then potential biases have an identifiable source, as opposed to an opaque program that may have been built by engineers who deliberately tuned it to avoid any disparities because they think any disparity in outcome is fundamentally problematic.