Who Is Our Future AI and What Is Our Role?


“This tendency of ours to view things in black and white – seeing algorithms as either omnipotent masters or a useless pile of junk – presents quite a problem in our high-tech age.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

In the famous Marvel movie “Avengers: Age of Ultron”, two different AIs appear. The first is JARVIS (Just A Rather Very Intelligent System), which assists Iron Man – a good AI. The other is Ultron, an AI supervillain out to destroy the world, like “Skynet” in Terminator – a bad AI. Which one will we see in our future world? It is hard to answer this question. The many books written by AI experts are divided into two forecasts: AI utopia and AI dystopia. However, all of them speak with one voice on one point – the future of AI depends on our actions. Hence, we don’t need to forecast our future in black and white. The (real) future of AI, I believe, will lie in between, and it will be adjustable by us.

Artificial Intelligence is a system of mathematical algorithms that takes actions to maximize the probability of success at a given task. It is just a (complicated) set of algorithms, not a supernatural power. That is, there is still room for understanding it and making it good. First, we should verify the fundamental mathematics inside AI algorithms as thoroughly as we can and eliminate hidden mathematical errors (or computer bugs). Second, we should feed them unbiased and correct data, so that the AI builds an impartial model to decide its actions. Third, we need to set clear and socially approved objectives for AI models. The first two actions are relatively practicable, but the last one requires a social consensus. For example, the United Nations platform AI for Good has tried to offer a route toward the sustainable development goals. So, please think about the future of AI and about your role in making good AIs.
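To make the second action above concrete, here is a minimal sketch (a hypothetical example of mine, not from the book) of one simple sanity check we can run before feeding data to a model: testing whether the training labels are badly imbalanced, which is one obvious way a model can inherit bias from its data.

```python
# Hypothetical sketch: check a label set for class imbalance before training.
from collections import Counter

def check_balance(labels, tolerance=0.2):
    """Return True if every class's share of the data is within
    `tolerance` of a perfectly even split across all classes."""
    counts = Counter(labels)
    even_share = 1 / len(counts)
    return all(abs(n / len(labels) - even_share) <= tolerance
               for n in counts.values())

# A loan dataset that approves 90% of applicants is suspiciously skewed...
biased = ["approve"] * 90 + ["deny"] * 10
# ...while a roughly even split passes the check.
fair = ["approve"] * 55 + ["deny"] * 45

print(check_balance(biased))  # False
print(check_balance(fair))    # True
```

Of course, real-world bias auditing is far subtler than counting labels – it must also ask whether the labels themselves were assigned fairly – but even a trivial check like this illustrates that "feeding AI unbiased data" is an actionable engineering step, not just a slogan.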

Who Makes the Rules? Human or Machine?


“Rule-based algorithms have instructions written by humans, they’re easy to comprehend. (…) Machine-learning algorithms, by contrast, have recently proved to be remarkably good at tackling problems where writing a list of instructions won’t work.”

[Hello World: Being Human in the Age of Algorithms, Hannah Fry]

Nowadays we often hear the word “algorithm” in the news and on social networks. But what is an algorithm? An “algorithm” is a (mathematical) recipe for accomplishing a certain task. So, your grandma’s recipe for chicken soup is, in some ways, an established algorithm. When we talk about algorithms these days, though, we usually mean computer algorithms: series of instructions, written in a computer language, for solving a certain problem. There are two different types: (1) a rule-based algorithm, which follows details prescribed by humans, and (2) a machine-learning algorithm, whose rules are derived by the machine (computer) itself.
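The contrast between the two types can be shown in a few lines. Below is a toy spam-filter sketch (a hypothetical example of mine, not from the book): the first function is a rule-based algorithm whose logic a human wrote out explicitly, while the second derives its own rule – a threshold on exclamation marks – from labeled examples.

```python
# (1) Rule-based: a human prescribes the decision logic in full.
def rule_based_is_spam(message):
    """Flag a message as spam if it contains any hand-picked keyword."""
    keywords = {"free", "winner", "prize"}
    return any(word in keywords for word in message.lower().split())

# (2) Machine-learning (in miniature): the machine derives its own rule
#     from examples by searching for the best-separating threshold.
def learn_threshold(examples):
    """Find the '!' count that best separates spam from non-spam."""
    best_threshold, best_correct = 0, -1
    for threshold in range(6):
        correct = sum((msg.count("!") >= threshold) == is_spam
                      for msg, is_spam in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

training = [("hello there", False), ("meeting at noon", False),
            ("win big now!!!", True), ("act fast!!", True)]

print(rule_based_is_spam("claim your free prize"))        # True
print("spam if '!' count >=", learn_threshold(training))  # learned rule
```

The rule-based version is crystal clear – anyone can read the keyword list and audit it – while the learned threshold emerged from the data, with no human writing it down. Scale this second idea up to millions of parameters and you get the opaque, high-performance machine-learning algorithms the chapter describes.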

Who will make the rules for new tasks in the future? Humans can write a crystal-clear algorithm, so that anybody can check the new rule for inherent bias or errors. Machines, on the other hand, can produce a high-performing algorithm without any prior knowledge or deep understanding of the new system. In the age of AI, the power of machine-learning algorithms is in no way negligible, and the use of this power in various fields is inevitable. However, “with great power comes great responsibility”. So we, as humans, must repeatedly scrutinize such black-box algorithms and prevent their misuse. We should always remember that the final decision must come from humans, because machines bear no responsibility for their decisions. Also, humans should supply machine-learning algorithms with important principles such as consideration for others, tolerance, and sacrifice, which may lead not only to better-performing algorithms but also to impartial ones.