If you are even a bit technology-savvy, you have probably run into at least one or two posts about AI in your feeds. There is a plethora of articles out there exploring the multitude of advantages that AI could bring, branching into two rather extreme theories: one says that in the near future humanity might need to adopt a universal basic income, as we will no longer be needed for work and AI will do everything for us; the other says that humanity is doomed the moment AI suddenly becomes conscious, realizes it does not need humans, and takes over the planet. [1] [2] [3] I must admit there is a bit of plausibility in both theories, but any kind of analysis should be based on facts and figures, not on fairy tales, sci-fi movies, or undocumented opinions. So, before we jump to conclusions, let's have a look at the current state of AI; maybe we can get a glimpse of what the future might look like.
What we nowadays call artificial intelligence is a broad discipline that pursues the objective of creating an autonomous form of intelligence within machines (computers). Several important terms must be defined so that we get a clear understanding of what exactly we are referring to when discussing AI.
Machine learning (ML) is the ability of machines to learn to perform different tasks or solve problems. ML algorithms are simply more advanced algorithms that apply different learning methods (e.g. statistical analysis) to sets of data; they are algorithms that know how to learn by themselves. In most cases, when you hear people talking about AI, they are actually referring to machine learning.
Artificial general intelligence (AGI) is the intelligence of a machine that can actually act as a human, i.e. solve complex and varied problems and experience consciousness (also known as strong AI). Although this represents a goal for everybody, at this point in time it is only wishful thinking, and you will only find such scenarios in sci-fi movies (by the way, Tau provides a good representation of this concept in a rather middling movie). You will probably find different variations of these definitions.
Machine learning has become a science in itself, borrowing concepts from other disciplines such as statistics, neurology, computer science, biology, and genetics. Over time, different approaches have been developed to help machines do the learning by themselves. This Wikipedia page provides a good introductory overview of the available methods, and for more advanced explanations check out The Master Algorithm by Pedro Domingos. In his view, there are five big categories of approaches:
- “symbolists” develop learning algorithms based on the manipulation of symbols and on inverse deduction, which figures out what knowledge is missing in order to complete a deduction,
- “connectionists” follow the model of the brain, making decisions based on the strength of the connections (as in the strength of a synapse) between artificial neurons, connections that are adjusted through an algorithm called backpropagation,
- “evolutionaries” use genetic programming, which evolves computer programs by copying nature's mechanisms of mating and evolution,
- “bayesians” use probabilistic inference based on the famous Bayes’ theorem and its derivatives (a small worked example follows this list),
- “analogizers” work by analogy, recognizing similarities between decisions (e.g. patients having the same symptoms).
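To make the Bayesian approach a bit more concrete, here is a minimal worked example of Bayes' theorem in Python, applied to a toy medical test; the prevalence and accuracy numbers are invented purely for illustration:

```python
# Bayes' theorem: P(disease | positive) =
#     P(positive | disease) * P(disease) / P(positive)
# All numbers below are made up for illustration.

p_disease = 0.01            # prior: 1% of patients have the disease
p_pos_given_disease = 0.95  # test sensitivity
p_pos_given_healthy = 0.05  # false positive rate

# Total probability of a positive test (law of total probability).
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive

print(f"P(disease | positive test) = {p_disease_given_pos:.1%}")
# Prints about 16.1% - far below what the 95% sensitivity suggests,
# which is exactly the kind of reasoning Bayesian learners automate.
```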
Each of these approaches has been proven to work well on certain types of problems, but none of them qualifies as what Pedro Domingos calls “the master algorithm”, an algorithm that can solve any type of problem. Nevertheless, he thinks we are close to inventing a master algorithm, perhaps one that combines several of the existing ones.
Therefore, one thing to keep in mind is that none of the existing machine learning algorithms can be used to solve all kinds of problems. You need a specific algorithm for a specific type of problem. Fig. 3 provides a summary of the commonly used machine learning algorithms and their practical applications (Fig. 3 credits).
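As a rough illustration of this “one algorithm per problem type” idea, here is a minimal scikit-learn sketch; the choice of estimators is my own and purely illustrative, not taken from the figure:

```python
# Different problem types call for different algorithms.
# The estimator choices below are illustrative, not prescriptive.
from sklearn.datasets import make_classification, make_regression, make_blobs
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans

# Classification: predict a discrete label (e.g. spam / not spam).
Xc, yc = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xc, yc)

# Regression: predict a continuous value (e.g. a price).
Xr, yr = make_regression(n_samples=200, noise=10.0, random_state=0)
reg = LinearRegression().fit(Xr, yr)

# Clustering: group unlabeled data (e.g. customer segments).
Xb, _ = make_blobs(n_samples=200, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xb)

print(clf.score(Xc, yc), reg.score(Xr, yr), km.inertia_)
```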
Another important aspect of how these ML algorithms work is that they need a huge amount of data to learn from; this is, in fact, one of the biggest challenges of ML today. It may also explain the massive data collection spree happening online nowadays, as marketing/PR is one of the big industries taking advantage of ML progress. ML requires that an algorithm be exposed to huge amounts of data in order for biases to be rendered insignificant. You can notice this need on a daily basis, as many online stores track you in order to give you personalized offers. Amazon or Netflix will never be able to determine your real tastes and make an accurate personalized offer from just two or three movies or books. Maybe you had a bad week or month and were reading or watching things to cheer yourself up, or maybe you had some homework to do and read something outside your usual taste. ML algorithms need huge amounts of data to get as close to reality as possible, especially in today's world, where we are influenced by so many factors. Moreover, given the growing privacy awareness worldwide (see GDPR), data collection and profiling are becoming more and more difficult.
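To see why two or three ratings are not enough, here is a minimal user-based similarity sketch in Python; the ratings matrix and all the numbers in it are invented for illustration:

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are movies, 0 = not rated.
# All numbers are invented, purely for illustration.
ratings = np.array([
    [5, 4, 0, 1, 0],  # user 0
    [4, 5, 0, 0, 1],  # user 1 - similar taste to user 0
    [1, 0, 5, 4, 5],  # user 2 - very different taste
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# With only a couple of overlapping ratings, user 0 looks similar to
# user 1 and dissimilar to user 2 - but one bad week of comfort-watching
# could flip these numbers entirely. More data makes them stable.
print(cosine_sim(ratings[0], ratings[1]))  # ~0.95, high
print(cosine_sim(ratings[0], ratings[2]))  # ~0.17, low
```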
Nevertheless, ML is all around us. If you want a glimpse of it, just have a short talk with your mobile digital assistant (Siri, Alexa, Cortana, etc.). You will see how helpful they can be in some simple situations, but also how misleading they can become in others. If you need a short reminder of how badly things can go with AI, just have a look here and here. There is nothing creepier than a biased ML algorithm running on too little data. Getting a less useful list of movies or books would not produce much negative impact at the social level, but think of the scenario where governments rely on AI/ML to assist them in decision making (e.g. building national public health policies that may affect millions of people).
A Harvard Kennedy School report written by Hila Mehr concludes that “AI is not a solution to government problems, it is one powerful tool to increase government efficiency” and that “despite the clear opportunities, AI will not solve systemic problems in government […]”. The author's research indicates that there are a lot of AI initiatives in the governmental sector, but they mostly fall into five categories: answering questions, filling out and searching documents, routing requests, translation, and drafting documents. Therefore, it could be said that AI might improve efficiency, but it is still not ready to inform, let alone make, impactful decisions. Indeed, it may take quite some time until it reaches the desired level.
Another AI-related concern is the huge loss of jobs, as many of them will be replaced by algorithms and robots. The whole job-loss saga started in 2013 with the paper “The Future of Employment: How Susceptible Are Jobs to Computerisation?” by Carl Benedikt Frey and Michael A. Osborne from Oxford University. According to them, “around 47 percent of total US employment is in the high risk category” and “as technology races ahead, low-skill workers will reallocate to tasks that are non-susceptible to computerisation – i.e., tasks requiring creative and social intelligence”. Also according to them, this change should happen in an “unspecified number of years”, so they did not know when.
Recently, the Organisation for Economic Co-operation and Development (OECD) published another report, concluding that “14% of jobs in OECD countries […] are at high risk (probability of over 70%) of being automated based on current technological possibilities”. An additional 32% of jobs could face significant changes due to automation. If you ask me, the results of the two studies are quite different. Another important OECD finding is that “the risk of automation declines with the level of education” and “the risk peaks among teen jobs”. Although I couldn't find any official statistics, the same thing probably happened in the previous three industrial revolutions (AI belonging to the fourth), from the 18th century onwards. When the steam engine and the telephone were introduced, a lot of people probably lost their jobs, but new ones were created. Another point of view, from a reputable research and advisory company, states in a report that “artificial intelligence will create more jobs than it eliminates”.
I tend to agree with most of the findings above, and I would conclude that there is no black-or-white answer. Caution should be exercised before jumping to conclusions: AI will kill some jobs but will also create others. The real issue is how you handle these outcomes in terms of public policy. Re-qualification will play a major role in the near future, and governments had better be prepared for that (see the concept of “flexicurity”).
Last but not least, there is also the scenario where AI suddenly becomes conscious, decides that humans are a lower form of intelligence, takes over the planet, and eventually destroys us. This doomsday scenario can be found in multiple movies and, in my opinion, has nothing to do with reality. It's pure fiction. Among the things that differentiate us from machines is our consciousness, meaning our state of awareness through which we understand what is happening to us and why. Reasoning combined with consciousness gives you the great opportunity to define yourself as an entity and to establish and pursue your own goals; more or less, it helps you define the meaning of your life (which is not 42, by the way). Up to now, many scientists have struggled to identify how exactly consciousness and reasoning are formed within our brains, but as far as I know we are still at an early stage. In this respect, I find it difficult to believe that a bunch of metal and silicon exposed to electricity will suddenly become conscious, develop feelings and reason, and decide upon the fate of humanity. The only plausible scenario I can think of is a very efficient general AI specifically programmed to destroy humanity. But even for such a scenario to succeed, a lot of prerequisites must be fulfilled. Technically, we are not at that level yet! Equally true is that, throughout its history, humanity has always tried to weaponize every technological breakthrough. Caution should be used in regulating AI to prevent it from being developed in the wrong direction.
AI is a great achievement for humanity and should be treated as such. Many technological innovations have produced disruptions up to now, but we managed to “survive” them, take advantage of them all, and continue our progress. Since the first industrial revolution, technology has contributed exponentially to the growth of humanity, and we are nowadays living in times that are more prosperous, free, and enjoyable than any other in history. AI is just an advanced technology and we should treat it accordingly. We have seen such disruptions before; we have the knowledge to handle them, we just need the willpower.
Hope you enjoyed the read!
More interesting resources about AI:
- The incredible inventions of intuitive AI – https://www.youtube.com/watch?v=aR5N2Jl8k14
- EU Law Rules on Robotics & A.I. in the Making – https://www.linkedin.com/pulse/eu-law-rules-robotics-ai-making-update-22018-paul-voorn/
- Don’t fear AI – http://www.eib.org/en/essays/artificial-intelligence