Recently, we’ve seen an abundance of news articles covering the term ‘Artificial Intelligence’ – or AI for short. Some of those stories make you frown, as they hint that AI has made a giant leap forward in just a few years. And although that’s true in a sense, those articles also claim that AI will overtake us entirely. But is that the case? What is the impact of AI today? And how can you successfully employ AI techniques without being swept up by the hype train? In this blog, we’ll take a look at today’s AI, specifically machine learning and – one level deeper – deep learning. We’ll perhaps start off somewhat philosophically, but don’t worry – we’ll quickly move on to the more pragmatic parts.
In our field of work – creating new business opportunities with new technologies, specifically AI – we encounter two entirely opposing viewpoints on the development of intelligent machines. One viewpoint, held by the singularity movement, claims that the singularity – the emergence of a superintelligent system, one that is smarter than humans – will occur one day. The movement, which emerged in the 1990s and was given a boost by Ray Kurzweil in 2005, believes that such a system would resolve all the world’s problems rather swiftly. Hunger disappears, effective treatments are found for various diseases, and so on. That would be a perfect world.
But at the other end of the spectrum we find Elon Musk and a viewpoint diametrically opposed to that of the singularity movement. Musk and others reason as follows: an AI system has an internally defined goal and will always attempt to maximize its ‘win’ (i.e., how well it attains that goal) and minimize its ‘loss’. In the case of superintelligent systems, such cold rationality could result in the extermination of human life – and this group is vocal about it. In their view, superintelligent systems have an advantage over human beings: they can make themselves smarter again and again, and will actively resist any human who tries to switch them off. Oops.
But well… although those are very interesting philosophical thought experiments, and although we must take them seriously, we live in the real world. A world with serious challenges, such as migration driven by fear of persecution, and climate change bringing ever-worsening drought to various areas of the world. Pragmatically, we should therefore perhaps rephrase the question: can we use AI to solve those challenges?
AI offers opportunities if you know its limitations
The answer to this question is yes. The largest impact of today’s AI systems is that they can recognize patterns hidden in large sets of data, invisible to the human eye. Take the migration challenge mentioned above as an example. In recent years, many refugees have crossed the Mediterranean in little boats, hoping for a better life. This often results in highly dangerous situations, because such boats are frequently overcrowded and the weather can be very bad. But patrolling such vast areas with human observers alone is a no-go… so what to do?
This is where AI can help. The open sea is a relatively variance-free scene: most of the time, it’s simply a blue surface. That makes it very feasible to train an AI model that recognizes anomalies in recent satellite imagery – that is, objects crossing the ocean at that point in time. Those detections can be real ships, of course, but also those little boats. By showing a model many examples of such boats, it can be put to work for the better.
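Real detection systems use trained neural networks, but the underlying intuition fits in a few lines of numpy. Everything below is invented for illustration: because open water has little variance, any image patch whose brightness deviates from the expected background stands out.

```python
import numpy as np

def detect_anomalies(image, background_mean, threshold=30.0):
    """Flag 8x8 patches whose mean brightness deviates strongly
    from the expected open-sea background."""
    anomalies = []
    h, w = image.shape
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            patch = image[y:y + 8, x:x + 8]
            if abs(patch.mean() - background_mean) > threshold:
                anomalies.append((y, x))
    return anomalies

# A synthetic 64x64 "sea" with one bright 8x8 object in it:
sea = np.full((64, 64), 60.0)   # calm, uniform water
sea[16:24, 32:40] = 200.0       # something that is not water
print(detect_anomalies(sea, background_mean=60.0))  # [(16, 32)]
```

A real system would of course learn what ‘background’ looks like from data rather than take it as a parameter, but the principle is the same: low variance in the scene is what makes the anomalies visible.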
But AI cannot be successfully applied everywhere, and that’s what we tend to forget these days. Today’s flavor of AI – machine learning and, one level deeper, deep learning with deep neural networks – comes with fundamental shortcomings. Last year, New York University professor Gary Marcus laid out those shortcomings in his paper ‘Deep Learning: A Critical Appraisal’. We’ll cover the limitations he mentions next.
How deep is deep? Even the most complex neural networks, which attempt to mimic the human brain, contain orders of magnitude fewer neurons than the average human brain. Additionally, the human brain is wired so that all its individual parts work together; in a deep learning model, no such structure exists. The first question we must therefore ask ourselves is: how deep is deep, really?
What is learning? Human beings learn by building a collection of logical rules about the objects around them. We can recognize an average human using those rules: an average human has a head, arms, legs, and so on. This is called deductive reasoning, and it works in a top-down fashion: from a general picture of a human, I can derive that I am a human being (at least, that’s what I hope). A deep learning model, on the other hand, works in a bottom-up fashion: it reasons inductively. Based on a large number of examples, it attempts to learn what a human being looks like. A deep learning model thus learns fundamentally differently from a real human being. The result is a massive hunger for data. Whereas a human can learn from a single example that we’re talking about humans, machine learning and deep learning often require thousands of examples to learn this properly. Oops.
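The data hunger can be made concrete with a deliberately oversimplified ‘model’: a nearest-neighbour classifier in raw pixel space, given exactly one example per class. All shapes and names below are invented for illustration.

```python
import numpy as np

# One training example per class -- a crude stand-in for inductive learning.
human = np.zeros((8, 8)); human[:, 2] = 1.0   # a "person": a vertical bar
empty = np.zeros((8, 8))                      # an empty scene

def classify(img):
    # Nearest neighbour in raw pixel space.
    d_human = np.abs(img - human).sum()
    d_empty = np.abs(img - empty).sum()
    return "human" if d_human < d_empty else "empty"

# The very same shape, shifted three columns to the right:
shifted = np.zeros((8, 8)); shifted[:, 5] = 1.0
print(classify(shifted))  # "empty" -- one example was not enough
```

A person who has seen one example recognizes the shifted figure instantly; a model that only matches pixels needs many more examples before it generalizes over something as trivial as position.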
Asking deep learning models open questions is impossible. Real people are capable of at least trying to answer an open question when they get one. Machine learning algorithms cannot do that: they are trained very narrowly, for a small set of outcomes. A well-known example is a model whose only possible answers are ‘hotdog’ and ‘not hotdog’. That’s funny, of course – but it really works that way.
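The mechanics behind this are easy to sketch: a binary classifier ends in a softmax over exactly two outputs, so whatever you feed it, the probabilities sum to one and one label ‘wins’. The scoring function below is made up – the point is the shape of the output, not the scores.

```python
import numpy as np

def hotdog_model(image):
    """A stand-in for a trained binary classifier: its only possible
    answers are 'hotdog' and 'not hotdog' -- never 'I don't know'."""
    logits = np.array([image.mean(), 1.0 - image.mean()])  # fake scores
    probs = np.exp(logits) / np.exp(logits).sum()          # softmax
    return ["hotdog", "not hotdog"][int(probs.argmax())]

# Feed it pure random noise -- it still commits to one of its two labels:
noise = np.random.default_rng(42).random((8, 8))
print(hotdog_model(noise))
```

There is no third output for ‘this is not a food picture at all’; the model is structurally incapable of saying so.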
Transferring knowledge is difficult. Suppose you have uploaded thousands of pictures of a white ball. A machine learning model hooked up to a webcam can now probably recognize perfectly when a white ball enters the picture. But what if you add a red one? The odds are that it is not recognized as a ball at all. Whereas deductive reasoning allows human beings to recognize a ball regardless of its color, the inductive machine learning approach doesn’t work that way. Sure, there are tricks that mitigate the problem – converting all images to greyscale first, for instance – but you get the point 😊
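The greyscale trick itself is simple. Below is a hypothetical preprocessing step: it collapses the colour channels using the standard luminance weights and thresholds the result, so only the shape survives and a red ball becomes indistinguishable from a white one.

```python
import numpy as np

def shape_only(rgb, threshold=0.2):
    # Standard luminance weights collapse R, G, B into one channel...
    grey = rgb @ np.array([0.299, 0.587, 0.114])
    # ...and thresholding keeps only the silhouette of the object.
    return (grey > threshold).astype(int)

white_ball = np.ones((4, 4, 3))                          # all-white pixels
red_ball = np.zeros((4, 4, 3)); red_ball[..., 0] = 1.0   # all-red pixels

print(np.array_equal(shape_only(white_ball), shape_only(red_ball)))  # True
```

Note that this is engineering around the limitation rather than removing it: we decided, deductively, that colour doesn’t matter – the model couldn’t have figured that out from white balls alone.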
Other problems. The transparency of a machine learning model remains a challenge: although many small breakthroughs are being made, the how behind the learning process far too often remains a black box. Integrating machine learning models with existing knowledge is also difficult; often, you would need to restart training from scratch. And what about the rule ‘correlation does not imply causation’? That is: you can find a high correlation between A and B, but that does not mean that ‘A causes B’ is valid. These issues remain problematic, but research into them is ongoing, among other places at the University of Twente.
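The correlation pitfall is easy to reproduce. The two series below are entirely made up – both simply trend upward over time – yet their correlation coefficient comes out near 1 even though neither causes the other.

```python
import numpy as np

# Made-up yearly figures: both series just grow over time.
years = np.arange(10)
ice_cream_sales = 100 + 10 * years + np.array([1, -2, 0, 3, -1, 2, 0, -3, 1, 0])
drowning_cases = 20 + 2 * years + np.array([0, 1, -1, 0, 2, -2, 1, 0, -1, 1])

r = np.corrcoef(ice_cream_sales, drowning_cases)[0, 1]
print(round(r, 3))  # close to 1.0 -- yet ice cream does not cause drowning
```

A hidden confounder (summer, in the classic version of this example) drives both series; a model that only sees the correlation cannot tell the difference.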
And so on. This is precisely why AI systems cannot yet handle creative, strategic and empathic work. Those jobs require humans to integrate knowledge from different sources, to reuse – and thus transfer – knowledge from one scenario to another, to explain why they do certain things, and to answer open questions. Machine learning systems simply cannot do that yet. But dull, repetitive work? There they are highly effective.
Common sense is the key to success
‘You study for years, and then common sense becomes popular again’ – a rough translation of one of the sayings of Loesje, a rather famous Dutch poster collective. In our experience, common sense is indeed the key to success in AI projects, specifically deep learning ones. We therefore work with these focus points:
- We try not to follow the hype. Deep learning sits at the top of the 2018 Gartner Hype Cycle. This means many glowing stories circulate about how organizations disruptively transform themselves with AI, specifically deep learning. But how spectacular are those applications really? And do you actually need AI for them? Often, the answer is no. Let’s see through the fanciness of those applications and get to the core of the original problem.
- Tell customers if AI does not benefit them. It once happened that a customer visited de Gasfabriek, the innovation center where we’re located, wanting an augmented reality (AR) solution – and left with an AI project. Obviously, it works the other way around too: when our estimate is that AI does not help a customer at all, we won’t refrain from telling them.
- Successfully applying AI means getting your data in shape. We often see companies struggle with their data landscape. Those difficulties stem from the fact that the landscape was not designed with AI in mind in the first place. A first step, then, is to ensure the presence of a large training set. We prefer to build a solution that creates such training sets automatically and lets human beings label them, providing the necessary context. Only then are you getting somewhere.
In short: we use common sense to determine where AI does, and where it does not, add value in the first place. Only then do we start building software. Interested in a fresh look at your business and whether it can benefit from common sense-based AI? Be sure to get in touch. De Gasfabriek, close to the A1 highway in Deventer, serves delicious coffee.
See you soon!
28 May 2019