Happy Thanksgiving, Happy Thanksreading

Happy Thanksgiving, Happy Thanksreading.

[Everybody makes DATA]

This year, I started to read many books about mathematics, statistics, and data science intensively, and I also started to write reviews on my blog. Reading and writing became a big turning point in my life. I hope to keep writing good reviews on this blog.
I would like to thank you for visiting my blog this year!

Trade-off: The Quality or The Quantity of Data for Better Statistics


“When we want to use the data to draw broader conclusions about what is going on around us, then the quality of the data becomes paramount, and we need to be alert to the kind of systematic biases that can jeopardize the reliability of any claims.”

[The Art of Statistics, David Spiegelhalter]

In the age of Big Data, we can collect tremendous amounts of data from many sources of different quality (e.g. accuracy, resolution, or fidelity). Using all of these data, we can easily compute statistics about what we measured, and these results help us understand what is going on when we compare them with previous results. However, if we want a deep understanding of hidden patterns for accurate future prediction (statistical inference), the quality of the data becomes the main factor: the higher the quality, the higher the accuracy. Collecting data, however, involves a general trade-off between quality and quantity. Highly accurate data require expensive acquisition (e.g. costly measurements, or fine-scale simulations using more computing resources), while less accurate data are relatively cheap to obtain.

As the book notes, a data-driven predictive model depends entirely on the quality (and the quantity) of the data. First, we check the accuracy (or fidelity) of the data and use only the high-fidelity data to build a data-driven model for decisions or predictions. Due to the aforementioned trade-off, however, we generally have only a few high-fidelity data points and/or many low-fidelity ones. Then how do we build a data-driven model? Since a few high-fidelity data points provide only partial information, it is hard to build a model that is accurate globally. Using many low-fidelity data points allows us to build a global model, but such data carry an inherent systematic bias, leading to wrong predictions. Hence, many researchers in data science have focused on multi-fidelity data fusion, which lets us build an accurate global model using both high- and low-fidelity data: chasing both quality and quantity.
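To give a rough feel for the idea (this is a minimal sketch, not a method from the book), the following Python snippet fits a cheap surrogate to many low-fidelity samples and then corrects it with a few expensive high-fidelity samples via a simple linear correction. All functions, sample sizes, and the polynomial degree here are hypothetical choices for illustration.

```python
import numpy as np

# Minimal multi-fidelity fusion sketch (hypothetical functions and data):
# learn a cheap global trend from many low-fidelity samples, then
# correct it with a few expensive high-fidelity samples.
rng = np.random.default_rng(0)

def high_fidelity(x):          # expensive "truth" (hypothetical)
    return np.sin(8 * x) * x

def low_fidelity(x):           # cheap, systematically biased approximation
    return 0.5 * high_fidelity(x) + 0.3 * (x - 0.5)

x_lo = rng.uniform(0, 1, 60)               # many cheap samples
x_hi = np.array([0.1, 0.4, 0.7, 0.95])     # a few expensive samples

# Step 1: fit a polynomial surrogate to the low-fidelity data.
lo_model = np.polynomial.Polynomial.fit(x_lo, low_fidelity(x_lo), deg=6)

# Step 2: fit a linear correction rho * lo(x) + a + b * x
# using only the high-fidelity samples.
A = np.column_stack([lo_model(x_hi), np.ones_like(x_hi), x_hi])
coef, *_ = np.linalg.lstsq(A, high_fidelity(x_hi), rcond=None)

def fused(x):
    return coef[0] * lo_model(x) + coef[1] + coef[2] * x

x_test = np.linspace(0, 1, 5)
print("fused :", np.round(fused(x_test), 3))
print("truth :", np.round(high_fidelity(x_test), 3))
```

The point of the sketch is only that the many low-fidelity samples supply the global shape while the few high-fidelity samples fix the bias; real multi-fidelity methods are, of course, far more sophisticated.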

Are There a Few Magic Numbers for Describing Complex Systems?


“Large collection of numerical data are routinely summarized and communicated using a few statistics of location and spread, (…), these can take us a long way in grasping an overall pattern.”

[The Art of Statistics, David Spiegelhalter]

Can we understand all (fine-scale) patterns in a massive data set? If you were a genius, you might be able to keep track of all the patterns, but for the rest of us it is (almost) impossible to analyze everything. That is why we employ statistics to understand and analyze a large data set and to predict or estimate the future from statistical results (e.g. population, economic growth, the unemployment rate, or stock prices). For example, to build a business model for kids, it is much easier to look at the average birthrate in a region than to count the number of children in my neighborhood. Statistical approaches provide just a few numbers to describe complex systems. This simplification enables us to build a simple (predictive) model, leading to efficient and optimized analytics.

I agree that a few numbers make a complex system simple, and I have experienced how this simple representation gives us the proper direction for making better decisions. Then, what is a good "number" (statistic) for the massive data in our hands? The average? Well, the book also says: "there is no substitute for simply looking at data properly." Hence, we should be careful about understanding a complex system from only a few statistics. Some statistics, such as the average, are vulnerable to outliers. Also, one can draw dinosaur-shaped patterns that match a given mean and variance (please see my previous post). Nowadays, data-driven approaches via statistical learning (machine learning) may provide optimal numbers to describe a complex system effectively. Yet we need to scrutinize every statistic that data-driven models provide. Still, I expect that in the near future a data-driven AI model may find a good reparameterization of a massive data set for better understanding.
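To make this concrete, here is a small, hypothetical illustration (not the actual Datasaurus data) of two samples that report essentially the same mean and standard deviation while having very different shapes:

```python
import numpy as np

# Two synthetic samples with nearly identical mean and spread but
# completely different shapes (hypothetical data for illustration).
rng = np.random.default_rng(1)

bell = rng.normal(0.0, 1.0, 100_000)                      # one bump at 0
bimodal = np.concatenate([rng.normal(-1.0, 0.2, 50_000),   # two clusters
                          rng.normal(+1.0, 0.2, 50_000)])

for name, x in [("bell", bell), ("bimodal", bimodal)]:
    print(f"{name:8s} mean={x.mean():+.3f}  std={x.std():.3f}")

# Both report mean ~ 0 and std ~ 1, yet their histograms look nothing
# alike -- "there is no substitute for simply looking at data properly".
```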

Framing: Statistics Can Manipulate Our Thought


“The examples in this chapter have demonstrated how the apparently simple task of calculating and communicating proportions can become a complex matter.”

[The Art of Statistics, David Spiegelhalter]

Thanks to you, the number of followers increased by 22% in November! When you read this sentence, you may think that this emerging blog is growing rapidly and that there must be reasons for its success. If I wrote instead that "4 people started to follow my blog in November", you might feel differently. But both are true: my blog had 18 followers in October and now has 22. Different representations of the same statistic can change the impact of an observation; this is called positive (or negative) framing. There are many examples. Pharmaceutical companies prefer to say that a new medicine has a 95% survival rate rather than a 5% mortality rate (positive framing). Investigative journalists prefer to say that 3,000,000 people are suspected of tax evasion every year rather than 1% of people (negative framing). Framing also appears in graphs. Suppose we need to draw a bar chart with two bars whose values are 95 and 98. If we draw the chart with an axis running from 0 to 100, the two bars look similar; if we draw it with an axis from 90 to 100, the same bars look totally different.
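Here is a short sketch of both framings, using the follower numbers and the 95-vs-98 bar chart from above (the plotting details are just one possible way to show it):

```python
import matplotlib.pyplot as plt

# Follower framing: the same change stated two ways.
before, after = 18, 22
print(f"+{after - before} new followers")                   # "4 new followers"
print(f"+{100 * (after - before) / before:.0f}% growth")     # "22% growth"

# Bar-chart framing: identical values, different y-axis ranges.
values = [95, 98]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(["A", "B"], values)
ax1.set_ylim(0, 100)        # the two bars look almost identical
ax1.set_title("Axis from 0")
ax2.bar(["A", "B"], values)
ax2.set_ylim(90, 100)       # the same bars look dramatically different
ax2.set_title("Axis from 90")
plt.tight_layout()
plt.show()
```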

How can we escape from this framing? Information providers should offer alternative representations (different graphs, raw data, tables) so that we can get a balanced view by examining the raw data ourselves. Also, we should always be skeptical when we see data. First, we should check who published the statistics and why; data do not lie, only presenters may lie. This does not mean, however, that statistics are merely crafty tricks. Statistics is still a powerful way to understand, analyze, and visualize data effectively. Moreover, in the age of Big Data, statistical knowledge is fast becoming the main tool for dealing with big data correctly. That is, statistics is a double-edged sword; its power depends on us.

[Wrap up] Book Review: Hello World: Being Human in the Age of Algorithms

There is an old African proverb that says, "If you want to go quickly, go alone. If you want to go far, go together." Big Data and artificial intelligence based on algorithms herald a new era for humanity. Then what is the main role of humans in the age of algorithms? How can we go far together with algorithms? The author, Hannah Fry, considers the pros and cons of the age of algorithms through various examples and provides a balanced view of both AI utopia and dystopia, enabling readers to think about what the future will really look like.

This book is a modern version of the fable of the lame man and the blind man. The author emphasizes that both algorithms and humans are flawed, like the lame man and the blind man. Hence, we should go forward together with algorithms for a better world and try to use their power properly. Then, what can we do? The author says in the last paragraph: "By questioning their decisions; scrutinizing their motives; acknowledging our emotions; demanding to know who stands to benefit; holding them accountable for their mistakes; and refusing to become complacent."

The following links are some quotations from the book with my thoughts.

(1) Who Does Make It a Rule? Human? or Machine?

(2) Who Is our Future AI and What Is our Role?

(3) Digging Data in the New Wild West

(4) Can Artificial Intelligence Be the New Judge in the Future?

(5) As Algorithms Becoming Intelligent, We May Become Unintelligent

(6) Finding the Cause from the Effect in the Age of Big Data

(7) Can Algorithms Make a Way for Creativity?

Can Algorithms Make a Way for Creativity?


“Similarity works perfectly well for recommendation engines. But when you ask algorithms to create art without a pure measure for quality, that’s where things start to get interesting. Can an algorithm be creative if its only sense of art is what happened in the past?”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

Experiments in Musical Intelligence (EMI), developed by David Cope, algorithmically produced music that was similar to (but not the same as) the pieces in its database. This was a new algorithmic way to compose songs while keeping a particular composer's style. In October 1997, the algorithm composed a new piece in the style of Johann Sebastian Bach, and audiences could not distinguish it from genuine Bach; the algorithm produced a qualitatively convincing Bach-style piece without any composing skill or inborn musical talent. Nowadays, there are far more sophisticated AI models that generate (I would rather not say "compose" here) songs we will like, based on famous and popular songs of the past. Can we say these algorithms are creative?

Due to vague definitions and varied preferences, it is really hard to measure popularity (and beauty) correctly. So the AI models mimic the success of previous masterpieces and generate similar (a vague word, but I would say "without infringing copyrights") music, paintings, drawings, and even novels. Many people think this is just mimicry of previous artworks, but Pablo Picasso said: "Good artists borrow; great artists steal." Nobody creates out of nothing. All artists are inspired by previous masterpieces and make their own works based on that inspiration, just as the AI models do. However, in 1917 Marcel Duchamp introduced his magnum opus "Fountain", regarded as the first readymade sculpture: a porcelain urinal presented as art. Can algorithms also make this kind of creative art? If that artwork reflected the philosophy of art of its time, can an algorithm find the current philosophy of art in Big Data and make truly creative artwork?

Finding the Cause from the Effect in the Age of Big Data


“Just as it would be difficult to predict where the very next drop of water is going to fall, (…). But once the water has been spraying for a while and many drops have fallen, it’s relatively easy to observe from the pattern of the drops where the lawn sprinkler is likely to be situated.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

In science, an inverse problem is a research field that extracts a hidden law (or mathematical formula) from observations (data). That is, the inverse problem is to find the "cause" from the "effect". It is similar in concept to profiling a serial killer in criminology: from all the data about the victims, we infer the character of the killer. We agree that more victims would make the prediction more accurate, BUT we do not want more victims. So the important part of an inverse problem is to find an appropriate formulation from a small data set. However, as the quote shows, it is really hard to estimate something accurately from small data. This issue has been a bottleneck in the development of inverse problems.

In the age of Big Data, on the other hand, we collect massive data sets from individuals, autonomous systems, efficient measurements, or online websites, which can lead to accurate prediction of the cause. Many people therefore thought it would be easy to solve inverse problems using massive data; that is somewhat true, and there have been many research achievements in data-driven modeling that find underlying laws or governing equations (or black-box models) describing cause and effect directly from data. However, the inverse problem now struggles with another issue: finding the "right" causality. In big data, improbable things happen all the time, and this can lead to wrong causal conclusions about input and output data. For example, a correlation between two variables may stem from pure coincidence, but the algorithm cannot distinguish this coincidence from real causality. Hence, humans must check the data-driven causality in a rigorous way. That is why fundamental mathematics and statistics are becoming ever more important in the age of Big Data.
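As a small, hypothetical illustration of how "improbable things happen all the time" in large data sets (this is not an example from the book), the snippet below generates hundreds of completely independent random variables and still finds a pair that appears strongly correlated:

```python
import numpy as np

# In a large collection of unrelated variables, some pairs will look
# strongly correlated purely by chance (hypothetical random data).
rng = np.random.default_rng(2)
n_vars, n_obs = 500, 30
data = rng.normal(size=(n_vars, n_obs))    # 500 independent variables

corr = np.corrcoef(data)                   # pairwise correlation matrix
np.fill_diagonal(corr, 0.0)                # ignore self-correlation
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest 'relationship': var {i} vs var {j}, r = {corr[i, j]:+.2f}")

# With 500 variables and only 30 observations, a fairly large |r| is
# expected by chance alone, even though every variable was generated
# independently of the others.
```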