Can Artificial Intelligence Be the New Judge in the Future?


“The algorithm will always give exactly the same answer when presented with the same set of circumstances. (…). There is another key advantage: the algorithm also makes much better predictions.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

When we consider the dark side of algorithms and AI models, we always think about justice; the AI model is usually optimized for efficiency and profitability, not for justice (also see my previous post: Justice: What’s the Right Thing to Do in Data Science?). What is justice, by the way? Justice is the quality of being just or fair. Then what is “just” or “fair”? Defining “justice” or “just” is still a debatable issue in our society. In a slightly different context, we might speak of “fairness” instead. Fairness, in a narrow sense, requires consistency: the same input should produce the same output. For example, if you and I write down the same answer on a test, we should get the same score. That is the starting point for discussing fairness.

(AI) algorithms, which encapsulate detailed mathematical formulas, have this kind of fairness inherently: the same input yields the same output. This consistency is a big advantage of the algorithm when judging someone’s guilt. Furthermore, its predictions are much more accurate than a human’s. However, that consistency can also mean consistent errors until the algorithm is adjusted. Humans, on the other hand, have their own models in their minds for judging someone’s guilt. These are not based on mathematics (or rigorous reasoning), which leads to inconsistent outcomes for the same circumstances. It is also really hard to correct human bias and prejudice, whereas an algorithm’s parameters are easy to adjust for correction. So who is more righteous in terms of fairness?

Justice: What’s the Right Thing to Do in Data Science?

“The model is optimized for efficiency and profitability, not for justice or the good of the “team”. This is, of course, the nature of capitalism.”

[Weapons of Math Destruction, Cathy O’Neil]

Michael J. Sandel’s magnum opus, Justice: What’s the Right Thing to Do?, called our attention to justice (and fairness) in a period when capitalism prospered. Data science behaves in a fashion similar to capitalism: more data (like more money) means more power, and efficiency (profitability) is the most important factor for success. Hence, in data science, we should consider how to make fairness and efficiency (and profitability) compatible.

To take fairness into consideration in data-driven models, we need to think about what we can do. First, we should double-check that our data are unbiased. In particular, historical data are often biased because they were collected under different historical circumstances, so when combining data that span a long period, we need careful effort to eliminate hidden bias. Second, we can add “fairness” directly to the main objectives of our data-driven models, as sketched below. Here we face the problem of how to quantify fairness (as well as justice and morality). Hence it is still challenging to build a fair model, but it is not impossible.
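As a rough illustration of what “adding fairness to the objective” might look like, here is a minimal sketch in Python. It assumes a binary classifier and a binary sensitive attribute, and it penalizes the demographic parity gap (the difference in positive-prediction rates between the two groups). The function names, the 0.5 decision threshold, and the weight lambda_fair are illustrative assumptions, not a standard recipe.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred    : array of 0/1 predictions
    sensitive : array of 0/1 group membership (e.g., a protected attribute)
    """
    rate_group0 = y_pred[sensitive == 0].mean()
    rate_group1 = y_pred[sensitive == 1].mean()
    return abs(rate_group0 - rate_group1)

def fairness_aware_loss(y_true, y_prob, sensitive, lambda_fair=1.0):
    """Cross-entropy loss plus a penalty on the demographic parity gap.

    lambda_fair (an assumed hyperparameter) controls the trade-off
    between predictive accuracy and group fairness.
    """
    eps = 1e-12
    cross_entropy = -np.mean(
        y_true * np.log(y_prob + eps)
        + (1 - y_true) * np.log(1 - y_prob + eps)
    )
    gap = demographic_parity_gap((y_prob >= 0.5).astype(int), sensitive)
    return cross_entropy + lambda_fair * gap

# Toy usage with made-up numbers:
y_true = np.array([1, 0, 1, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.8, 0.4, 0.7, 0.3])
group  = np.array([0, 0, 0, 1, 1, 1])
print(fairness_aware_loss(y_true, y_prob, group, lambda_fair=0.5))
```

Increasing lambda_fair trades some predictive accuracy for a smaller gap between groups, which is exactly the kind of fairness-versus-efficiency trade-off discussed above.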