New technologies are playing a fundamental role in directing where businesses are going and, to some extent, in determining how the future is shaping up. As part of our recent work, my fellow researchers and I realised that many people do not have a good idea of how technology will change our lives, yet are eager to know more. In this series of articles, we will explain in plain language four technological advancements: machine learning, neural networks, natural-language processing and affective computing, and artificial intelligence.
Learning to learn
In 1959, Arthur Samuel, a pioneer in the field of machine learning (ML), defined it as the “field of study that gives computers the ability to learn without being explicitly programmed”.
ML can be understood as computational methods that use experience to improve performance or to make accurate predictions. In this case, experience refers to past information or data that is available to us, which has been labelled and categorized. As with any computational exercise, the quality and amount of the data will be crucial to the accuracy of the predictions that will be made.
Looking through this lens, ML seems to be a lot like statistical modelling. In statistical modelling, we collect data, verify that it is clean — in other words, that we have completed, corrected, or deleted any incomplete, incorrect, or irrelevant parts of the data — and then use this clean dataset to test hypotheses and make predictions and forecasts. The idea behind statistical modelling is to represent complex issues in relatively generalizable terms, which is to say, terms that explain most of the events studied. Effectively, we program the algorithm to perform certain functions based on the data we submit. Put differently, the algorithm is static: it needs a programmer to tell it what to do when it is fed with data. So far, so good: this approach makes sense as long as a programmer is there to direct it.
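To see what a static, explicitly programmed algorithm looks like, consider the hypothetical keyword-based spam filter sketched below in Python. Everything in it is invented for illustration: every rule is hand-written by the programmer, and no amount of incoming data will ever change its behaviour.

```python
# A hypothetical, explicitly programmed classifier. Every rule is
# hard-coded by the programmer; feeding it more data never changes it.

SPAM_WORDS = {"winner", "free", "prize"}  # rules chosen by a human, not learned

def is_spam(message: str) -> bool:
    """Flag a message as spam if it contains any hand-picked keyword."""
    words = set(message.lower().split())
    return bool(words & SPAM_WORDS)

print(is_spam("Claim your free prize now"))  # True
print(is_spam("Meeting moved to 3pm"))       # False
```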
But with ML, the procedure is flipped. Rather than preselecting a model and feeding data into it, in ML it is the data that determines which analytic technique should be selected to best perform the task at hand. In other words, the computer uses the data that it has to select and train the algorithm. Hence the algorithm is no longer static. It analyses the data to which it is exposed, makes a determination on the best course of action, and then acts. In essence, it “learns” from the data and in doing so, knowledge can be extracted from the data.
This method of learning is based on repetition. Remember that an algorithm is nothing more than a set of instructions that a computer uses to transform an input into a particular output. In ML, then, the learning aspect is just an algorithm repeating its execution over and over again, making slight adjustments each time, until a certain set of conditions is met. The litmus test of a learning algorithm is whether it can make accurate predictions on new data on which it has not previously been trained.
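As a minimal sketch of this learning-by-repetition idea, the following Python example (invented for illustration, not any particular production algorithm) repeatedly nudges a single parameter until a stopping condition is met, and then faces the litmus test: predicting for an input it has never seen.

```python
# A minimal sketch of "learning by repetition": the algorithm repeats
# the same update, nudging its parameter slightly each time, until a
# stopping condition is met. Here it learns the weight w in y = w * x.

# Training data (inputs and labelled outputs); the true relation is y = 3x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0                # initial guess
learning_rate = 0.01   # size of each slight adjustment

for step in range(10_000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad          # the "slight adjustment"
    if abs(grad) < 1e-6:               # stopping condition reached
        break

print(f"learned w = {w:.4f} after {step} steps")  # approx. 3.0

# The litmus test: predict for an input the model has never seen.
print(f"prediction for x = 10: {w * 10:.2f}")     # approx. 30.0
```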
Evolution of ML
Data obviously plays a primary role in this methodological process. More importantly, it is the structure of the data that determines how the learning process will occur. It is here that we see the three levels of ML: supervised learning, unsupervised learning, and reinforcement learning.
Supervised learning

In supervised learning, the computer is trained on well-labelled data, that is, data already tagged with the correct label or outcome. For example, if we were to teach a computer to distinguish between the picture of a cat and that of a dog, we would tag every image of a cat as “cat” and every image of a dog as “dog”.
This labelling is done by the programmer. Having learned the difference, the ML algorithm can then classify new information given to it and determine whether a new image is that of a dog or a cat.
Based on this simple method, supervised ML can be used to perform much more complicated operations, such as learning to read handwritten digits and letters. The way one person writes the number “1” or the letter “A” will not be the same as the way another person does.
By feeding the computer vast numbers of labelled examples of the number “1” or the letter “A”, we can train the algorithm on the various forms these characters take. The computer thus begins to learn the variations and becomes increasingly competent at recognizing these patterns.
Today, computers are better than humans at recognizing such patterns of handwriting. The larger the dataset, the better trained the algorithm. Once trained, the algorithm is given new data and uses its past experience to predict an outcome.
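For a concrete illustration of supervised learning on handwriting, the sketch below uses scikit-learn (assumed to be installed) and its small bundled dataset of labelled digit images; the choice of model and settings is illustrative rather than a recommendation.

```python
# A brief sketch of supervised learning on handwritten digits using
# scikit-learn. Each image comes labelled with the digit it shows; the
# model learns from those labels and is then judged on unseen images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()                       # 8x8 images, labelled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)    # a simple supervised learner
model.fit(X_train, y_train)                  # train on labelled examples

# The litmus test: accuracy is measured only on images the model
# never saw during training.
predictions = model.predict(X_test)
print(f"accuracy on unseen digits: {accuracy_score(y_test, predictions):.2%}")
```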
Unsupervised learning

Here the algorithm is trained on a dataset that does not have any labels; the algorithm is not told what the data represents. In this case, the learning process depends on identifying patterns that recur in the data. Using the cat and dog example, the algorithm begins to separate the images it receives based on the inherent characteristics of cats and dogs.
To do so, the algorithm must use methods of estimation based on inferential statistics to discover patterns, relationships and correlations within the raw, unlabelled dataset. As patterns are identified, the algorithm uses statistics to set boundaries within the dataset. Data with similar patterns are grouped together, creating subsets of data. As this classification process continues, the algorithm begins to understand the dataset it is analysing, allowing it to predict the categorization of future data.
This clustering of data can automate decision making, adding a layer of sophistication to unsupervised learning. More importantly, it allows us to leverage data in a new way. What we lack in knowledge we make up for in data.
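As one common illustration of this idea, the sketch below applies k-means clustering, a standard unsupervised technique, to invented two-group data using scikit-learn; no labels are supplied at any point.

```python
# A minimal sketch of unsupervised learning: k-means clustering with
# scikit-learn. No labels are given; the algorithm groups the points
# purely by the patterns it finds in the data.
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled 2-D data drawn from two hypothetical groups.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5], kmeans.labels_[-5:])   # cluster assignments

# New data can now be categorised against the learned boundaries.
print(kmeans.predict([[0.2, -0.1], [4.8, 5.3]]))
```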
Reinforcement learning

Reinforcement learning is like unsupervised ML in that the training data is also unlabelled. However, when the algorithm answers a question about the data, the outcome is graded, so there is still a level of supervision. The algorithm is presented with data that lacks labels, but each result it produces is graded as positive or negative. This grade provides a feedback loop that allows the algorithm to determine whether the solution it is providing solves the problem or not. Effectively, it is the computerised version of human trial-and-error learning.
Reinforcement ML is often used to develop strategies: because decisions lead to consequences, the output action is prescriptive, not just descriptive as in unsupervised learning. This kind of learning has been used to train computers to play games, and it is the idea behind DeepMind, the company acquired by Google in 2014, which trained its algorithms to learn to play Atari games.
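The toy sketch below illustrates this trial-and-error loop with tabular Q-learning, one classic reinforcement-learning technique. The five-cell corridor environment and every setting in it are invented for illustration, and it is of course far simpler than DeepMind's Atari systems, which combined reinforcement learning with deep neural networks.

```python
# A toy sketch of reinforcement learning: tabular Q-learning on a
# hypothetical 5-cell corridor. The agent is never told the right
# answer; it only receives a positive reward for reaching the goal,
# and learns a strategy through trial and error.
import random

N_STATES = 5          # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # step left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.2

# Q-table: estimated value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current estimates,
        # sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # the "grade"
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Feedback loop: nudge the estimate towards reward + future value.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned strategy: in every cell, step right towards the goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```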
Implications for businesses
Today machine learning is being used in a number of areas. Google’s self-driving car was developed using machine learning, and machines can now lip-read faster than humans. ML has also been infiltrating almost every sector of finance in recent years; it is being used for algorithmic trading, time-series analysis, portfolio management, fraud detection, customer service, news analysis and the construction of investment strategies, among other applications.
But the real power of machine learning is unleashed with neural networks, which we will discuss in more detail in the next post.
The research for this article is sponsored by the KPMG/ESCP Europe Chair in Governance, Strategy, Risks, and Performance. Terence Tse and Mark Esposito are the authors of “Understanding How the Future Unfolds: Using DRIVE to Harness the Power of Today’s Megatrends”. Kary Bheemaiah is the author of “The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory”.