
“The algorithm will always give exactly the same answer when presented with the same set of circumstances. (…). There is another key advantage: the algorithm also makes much better predictions.”
[Hello World: Being Human in the Age of Algorithms, Hannah Fry]
When we consider the dark side of algorithms and AI models, we usually think about justice: AI models are typically optimized for efficiency and profitability, not for justice (also see my previous post: Justice: What’s the Right Thing to Do in Data Science?). What is justice, by the way? Justice is the quality of being just or fair. Then what is “just” or “fair”? Defining “justice” or “just” remains a contested question in our society. In a slightly different context, we may speak of “fairness” instead. Fairness, in a narrow sense, requires consistency: the same input should produce the same output. For example, if you and I write down the same answer on a test, we should get the same score. That is the starting point for discussing fairness.
AI algorithms, which encapsulate detailed mathematical formulas, have this kind of fairness built in: the same input always yields the same output. This is a big advantage when an algorithm is used to judge someone’s guilt consistently, and its predictions are often much more accurate than human predictions. However, that consistency cuts both ways: the algorithm will keep making the same error until it is adjusted. Humans, on the other hand, have their own mental models for judging someone’s guilt. These models are not based on mathematics (or rigorous reasoning), so they produce inconsistent outcomes for the same circumstances. It is also very hard to correct human bias and prejudice, whereas an algorithm’s parameters are easy to adjust once an error is found. So which is fairer, the human or the algorithm?
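To make the contrast concrete, here is a minimal sketch in Python. The risk_score function, its features, and its weights are all hypothetical, not taken from any real system; the point is only that a fixed scoring rule returns the same output for the same input, and that a systematic error in it can be corrected by changing a parameter.

```python
# A minimal, hypothetical sketch (made-up features and weights, not a real
# risk-assessment tool) illustrating consistency and "consistent error".

def risk_score(prior_offenses: int, age: int,
               weights=(0.6, -0.02), bias=0.5) -> float:
    """Toy linear risk score: the same input always yields the same output."""
    w_prior, w_age = weights
    return w_prior * prior_offenses + w_age * age + bias

defendant = {"prior_offenses": 2, "age": 30}

# Consistency: calling the rule twice on the same input gives the same answer.
assert risk_score(**defendant) == risk_score(**defendant)

# Consistent error: if the weight on prior offenses is set too high, every
# defendant with priors is over-scored in exactly the same way...
biased = risk_score(**defendant, weights=(1.5, -0.02))

# ...and the correction is a single parameter change, applied uniformly.
corrected = risk_score(**defendant, weights=(0.6, -0.02))
print(biased, corrected)
```

A human judge, by contrast, offers no such parameter to turn: the bias lives in an opaque mental model, and the same facts may be scored differently from one day to the next.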


