New mitigation framework reduces bias in classification outcomes
We use computers to help us make (hopefully) unbiased decisions. The problem is that machine-learning algorithms do not always classify fairly when human bias is embedded in the data used to train them, which is often the case in practice. To ease this "garbage in, garbage out" situation, a research team has presented a flexible framework for mitigating bias in machine classification. Their research was published April 8 in Intelligent Computing, a Science Partner Journal.
Existing attempts to mitigate classification bias, according to the team, are often held back by their reliance on specific metrics of fairness and predetermined ...
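The article does not include code, but as a rough, hypothetical illustration of what one such "specific metric of fairness" looks like in practice, the Python sketch below computes the demographic parity difference of a classifier's predictions, i.e., the gap in positive-outcome rates between two groups. The function and the example data are invented for illustration and are not taken from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    A value of 0 means the classifier assigns positive outcomes
    at equal rates to both groups, one common notion of fairness.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_0 - rate_1)

# Hypothetical binary predictions for ten applicants in two groups.
predictions = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
groups      = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(predictions, groups))  # 0.4
```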