[Wrap up] Book Review: Hello World: Being Human in the Age of Algorithms

There’s an old African proverb that says: “If you want to go quickly, go alone. If you want to go far, go together.” Big Data and algorithm-based artificial intelligence herald a new era for humanity. What, then, is the main role of humans in the age of algorithms? How can we go far together with algorithms? The author, Hannah Fry, weighs the pros and cons of the age of algorithms through various examples. She also provides a balanced view of both AI utopia and AI dystopia, enabling readers to think about the real future.

This book is a modern version of the fable of the lame man and the blind man. The author emphasizes that both algorithms and humans are flawed, like the lame man and the blind man. Hence, we should go together with algorithms for a better world and try to use their power properly. Then what can we do? The author answers in the last paragraph: “By questioning their decisions; scrutinizing their motives; acknowledging our emotions; demanding to know who stands to benefit; holding them accountable for their mistakes; and refusing to become complacent.”

The following links lead to some quotations from the book, along with my thoughts.

(1) Who Makes the Rules? Human or Machine?

(2) Who Is Our Future AI and What Is Our Role?

(3) Digging Data in the New Wild West

(4) Can Artificial Intelligence Be the New Judge in the Future?

(5) As Algorithms Become Intelligent, We May Become Unintelligent

(6) Finding the Cause from the Effect in the Age of Big Data

(7) Can Algorithms Make a Way for Creativity?

Can Algorithms Make a Way for Creativity?


“Similarity works perfectly well for recommendation engines. But when you ask algorithms to create art without a pure measure for quality, that’s where things start to get interesting. Can an algorithm be creative if its only sense of art is what happened in the past?”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

Experiments in Musical Intelligence (EMI) by David Cope algorithmically produced music similar (but not identical) to the pieces in its database. This was a new algorithmic way to compose songs while keeping a particular composer’s style. In October 1997, such an algorithm composed a new piece in the style of Johann Sebastian Bach. Audiences could not distinguish this music from genuine Bach; the algorithm composed a new (qualitatively) Bach-like masterpiece without any composing skill or inborn musical talent. Nowadays, there are more sophisticated AI models that generate (I would not like to say “compose” here) songs we will like from past famous and popular songs. Can we say these algorithms are creative?

Due to vague definitions and varied preferences, it is really hard to measure popularity (or beauty) correctly. So AI models mimic the success of previous masterpieces and generate similar (a vague word, but I would say “without infringing copyrights”) music, paintings, drawings, and even novels. Many people think this is just mimicry of previous artworks, but Pablo Picasso said: “Good artists borrow; great artists steal.” Nobody makes creative things out of nothing. All artists are inspired by previous masterpieces and build their artworks on that inspiration, just as AI models do. However, in 1917 Marcel Duchamp introduced his magnum opus “Fountain”, the first readymade sculpture, made from a porcelain urinal. Can algorithms also make this kind of creative art? If that artwork reflected the philosophy of art of its time, can an algorithm extract the current philosophy of art from Big Data and make comparably creative artwork?

Finding the Cause from the Effect in the Age of Big Data


“Just as it would be difficult to predict where the very next drop of water is going to fall, (…). But once the water has been spraying for a while and many drops have fallen, it’s relatively easy to observe from the pattern of the drops where the lawn sprinkler is likely to be situated.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

In science, an inverse problem is the task of extracting a hidden law (or mathematical formula) from observations (data). That is, the inverse problem is to find the “cause” from the “effect”. It is similar to profiling a serial killer in criminology: from all the data about the victims, we infer the characteristics of the killer. We agree that more victims would make the prediction more accurate, BUT we don’t want more victims. So an important part of the inverse problem is finding an appropriate formulation from a small data set. However, as the quote suggests, it is really hard to estimate anything accurately from small data. This issue has long been a bottleneck in the development of inverse problems.
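Fry’s sprinkler analogy can be sketched in a few lines of Python. This is my own toy illustration, not from the book: the hidden sprinkler position plays the role of the “cause”, the drops are the observed “effect”, and a simple average of the drops recovers the cause more and more accurately as observations accumulate.

```python
import random

def estimate_sprinkler(center, n_drops, spread=3.0, seed=0):
    """Estimate the hidden sprinkler position (the 'cause') from
    the observed drop pattern (the 'effect') by averaging the drops."""
    rng = random.Random(seed)
    drops = [(center[0] + rng.gauss(0, spread),
              center[1] + rng.gauss(0, spread)) for _ in range(n_drops)]
    x = sum(d[0] for d in drops) / n_drops
    y = sum(d[1] for d in drops) / n_drops
    return (x, y)

true_center = (5.0, 2.0)  # hidden from the observer
for n in (5, 50, 5000):
    est = estimate_sprinkler(true_center, n)
    err = ((est[0] - true_center[0]) ** 2
           + (est[1] - true_center[1]) ** 2) ** 0.5
    # the estimate generally improves as more drops are observed
    print(n, round(err, 3))
```

With only a handful of drops the estimate is poor, which is exactly the small-data bottleneck described above.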

In the age of Big Data, on the other hand, we collect massive data sets from individuals, autonomous systems, efficient measurements, and online websites, which can lead to accurate predictions of the cause. So many people thought it would be easy to solve inverse problems using massive data. That is somewhat true, and there have been many research achievements in data-driven modeling that finds the underlying laws or governing equations (or a black-box model) describing cause and effect directly from data. However, the inverse problem now struggles with another issue: finding the “right” causality. In big data, improbable things happen all the time, which may lead to wrong causal conclusions from input/output data. For example, a correlation between two variables may stem from pure coincidence, but the algorithm cannot distinguish such coincidence from real causality. Hence, humans should check data-driven causality in a rigorous way. That is why fundamental mathematics and statistics are becoming ever more important in the age of Big Data.
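The claim that “improbable things happen all the time” in big data is easy to demonstrate. The sketch below is my own illustration, not from the book; every series is generated independently at random, yet when we compare hundreds of them pairwise, some pair always ends up strongly correlated by pure chance.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

rng = random.Random(42)
# 200 completely independent random series, 10 points each
series = [[rng.random() for _ in range(10)] for _ in range(200)]

# strongest correlation among all ~20,000 pairs
best = max(abs(pearson(series[i], series[j]))
           for i in range(200) for j in range(i + 1, 200))
print(best)  # strong correlation despite zero real causality
```

No pair of these series has any causal link, yet the best-correlated pair looks convincingly related; this is why data-driven causality needs a rigorous human check.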

As Algorithms Become Intelligent, We May Become Unintelligent

“There’s a hidden danger in building an automated system that can safely handle virtually every issue its designers can anticipate. (…) So they’ll have very little experience to draw on to meet the challenge of an unanticipated emergency.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

Using Google Maps, I was driving from my home (in the U.S.) to Quebec, Canada, for a late-summer vacation with my family. Just after passing the border, I realized that my phone did not work, and of course Google Maps lost its power, too. I made a desperate attempt to drive by road signs alone, as my dad used to. Our world is fast becoming intelligent via recent developments in smart devices, algorithms, automated systems, and AI. We don’t need to remember our friends’ phone numbers and addresses anymore. Moreover, we don’t need to memorize the exact spelling of longer words; a Google search can show the correct results from a misspelling. But can we say that we (not the world) are becoming intelligent?

Large autonomous systems will inevitably become widespread. For example, autonomous cars will be popular in the near future, so the next generation may never learn (or experience) how to correct a slide on an icy road. This lack of experience may lead to a nasty accident when the autonomous system is not working. The more technology does, the less we do (e.g. thinking or practicing). However, there are two sides to every story. Since the invention of the calculator (and the computer), we have developed new research fields such as numerical analysis, scientific computing, and computational biology, resulting in an enormous expansion of knowledge. I hope the advent of large autonomous systems provides not only answers to the problems we face now but also a vision for a better future.

Can Artificial Intelligence Be the New Judge in the Future?


“The algorithm will always give exactly the same answer when presented with the same set of circumstances. (…). There is another key advantage: the algorithm also makes much better predictions.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

When we consider the dark side of algorithms and AI models, we always think about justice: the AI model is usually optimized for efficiency and profitability, not for justice (see also my previous post: Justice: What’s the Right Thing to Do in Data Science?). What is justice, by the way? Justice is the quality of being just or fair. Then what is “just” or “fair”? Defining the word “justice” is still a contested issue in our society. In a slightly different context, we may speak of “fairness” instead. Fairness, in a narrow sense, requires consistency: the same input should produce the same output. For example, if you and I write the same answer on a test, we should get the same score. That is the starting point for discussing fairness.

The (AI) algorithms that encapsulate detailed mathematical formulas have this kind of fairness inherently, producing consistent outcomes (same input, same output). This is a big advantage of algorithms for judging guilt consistently. Furthermore, their predictions are much more accurate than humans’. However, consistency can also mean consistent errors until the algorithm is adjusted. Humans, on the other hand, judge guilt with their own mental models, which are not based on mathematics (or rigorous reasoning), leading to inconsistent outcomes under the same circumstances. Also, it is really hard to correct human bias and prejudice, while an algorithm’s parameters are easy to adjust for correction. Then who is more righteous in terms of fairness?

Digging Data in the New Wild West


“We do well to remember that there’s no such thing as a free lunch. (…). Data and algorithms don’t just have the power to predict our shopping habits. They also have the power to rob someone of their freedom”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

There are many FREE apps for tracking your routines, such as walking, jogging, eating, reading, shopping, or studying. Thanks to these productivity apps, we can check our daily routines and change them for better performance. By the way, how do these free apps make money? There is no free lunch in this world. They make their profit from the data you record. In the age of Big Data, data is the new gold, and many companies are now digging for that gold in our daily routines. We might say we live in a new Wild West.

Someone might think that data is just data. That is true, but AI models can spot important (hidden) patterns in massive data effectively; they dig gold from the mine with efficient tools. Moreover, they build precise categories of people’s behaviors, leading to accurate predictions (classifications) for new customers. Hence, AI models become more sophisticated as the amount of data they collect grows. Amazon and other online retailers provide irresistible deals and coupons every day. Netflix and other streaming services recommend movies we are bound to like, so we cannot help clicking the next one. These days, we cannot blame a shopaholic, because (internet) shopping addiction is caused not by a lack of self-control but by a sophisticated AI model. That is why I bought more books on Amazon today (don’t blame me!).
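The “precise categories for new customers” idea can be sketched with a toy nearest-neighbor classifier. This is my own illustration, not from the book, and the customer features and segment labels are entirely invented: a new customer is simply assigned the label of the most similar past customer.

```python
# Toy sketch: classify a new customer from past behavior with 1-nearest-neighbor.
# Features (visits per week, average basket size) and labels are invented.

def distance(a, b):
    """Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(profile, history):
    """history: list of (features, label); return the nearest record's label."""
    nearest = min(history, key=lambda rec: distance(rec[0], profile))
    return nearest[1]

history = [
    ((1.0, 10.0), "occasional"),
    ((2.0, 15.0), "occasional"),
    ((6.0, 80.0), "enthusiast"),
    ((7.0, 95.0), "enthusiast"),
]

print(classify((5.5, 70.0), history))  # prints "enthusiast"
```

More data means finer categories and sharper predictions, which is exactly why companies keep digging.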

Who Is Our Future AI and What Is Our Role?


“This tendency of ours to view things in black and white – seeing algorithms as either omnipotent masters or a useless pile of junk – presents quite a problem in our high-tech age.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

In the famous Marvel movie “Avengers: Age of Ultron”, two different AIs appear. The first is JARVIS (Just A Rather Very Intelligent System), which helps Iron Man: a good AI. The other is Ultron, an AI supervillain out to destroy the world like “Skynet” in Terminator: a bad AI. Which will appear in our future world? It is hard to answer this question. The many books written by AI experts are divided into two forecasts, AI utopia and AI dystopia. However, all of them speak with one voice: the future of AI depends on our actions. Hence, we don’t need to forecast our future in black and white. The (real) future of AI, I believe, will lie in between, and it will be adjustable by us.

Artificial intelligence is a system of mathematical algorithms that takes the action maximizing the probability of success at a given task. It is just a (complicated) set of algorithms, not a supernatural power. That is, there is still room for understanding it and making it good. First, we should verify the fundamental mathematics inside AI algorithms as much as we can and eliminate hidden mathematical errors (or software bugs). Second, we should feed them unbiased and correct data so that AI builds impartial models for its decisions. Third, we need to set clear and socially approved objectives for AI models. The first two actions are relatively practicable, but the last requires a social consensus on what a good AI model is. For example, the United Nations platform AI for Good tries to chart a route toward the Sustainable Development Goals. So please think about the future of AI and about your role in making good AIs.

Who Makes the Rules? Human or Machine?


“Rule-based algorithms have instructions written by humans, they’re easy to comprehend. (…) Machine-learning algorithms, by contrast, have recently proved to be remarkably good at tackling problems where writing a list of instructions won’t work.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

Nowadays we often hear the word “algorithm” in the news and on social networks. By the way, what is an algorithm? An algorithm is a (mathematical) recipe for accomplishing a certain task. So your grandma’s recipe for chicken soup is, in some ways, an algorithm. But when we talk about algorithms these days, we usually mean computer algorithms: series of computer instructions for solving a certain problem. There are two different types of algorithms: (1) rule-based algorithms, which follow details prescribed by humans, and (2) machine-learning algorithms, which make their own rules by themselves.
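The two types can be contrasted on the same trivial task. This sketch is my own illustration, not from the book; the task, data, and the midpoint learning rule are all invented for the purpose of the contrast.

```python
# Toy contrast between the two types of algorithm on the same task:
# deciding whether a number counts as "large".

def rule_based(x):
    """Rule-based flavor: a human wrote the threshold 10 explicitly,
    so anyone can read and check it."""
    return x > 10

def learn_threshold(examples):
    """Machine-learning flavor: infer the threshold from labeled data,
    here as the midpoint between the largest 'small' example and the
    smallest 'large' example."""
    small = max(x for x, is_large in examples if not is_large)
    large = min(x for x, is_large in examples if is_large)
    return (small + large) / 2

examples = [(2, False), (7, False), (13, True), (20, True)]
threshold = learn_threshold(examples)  # (7 + 13) / 2 = 10.0

# Both decide the same way here, but only one rule was written by a human.
print(rule_based(15), 15 > threshold)  # prints "True True"
```

The human-written rule is transparent; the learned rule happens to agree here, but it came from data, and with different data it would silently move.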

Who will make the rules for new tasks in the future? Humans can make crystal-clear algorithms so that anybody can check the new rule for inherent bias or errors. Machines, on the other hand, can make high-performance algorithms without any prior knowledge or deep understanding of the system. In the age of AI, the power of machine-learning algorithms is in no way negligible, and the use of this power in various fields is inevitable. However, “with great power comes great responsibility”. So we, as humans, should repeatedly scrutinize such black-box algorithms and prevent their misuse. We should always remember that the final decision must come from humans, because machines bear no responsibility for their decisions. Also, humans should provide machine-learning algorithms with important rules such as consideration for others, tolerance, and sacrifice, which may lead not only to better-performing algorithms but also to impartial ones.