Fourth Industrial Revolution

You've heard about it, but do you understand it? Everything you need to know about machine learning

Mark Esposito
Chief Learning Officer, Nexus FrontierTech, Professor at Hult International Business School
Terence Tse
Executive Director, Nexus FrontierTech, Professor of Finance, Hult International Business School
Kariappa Bheemaiah
Associate Research Scientist, Cambridge Judge Business School

Learning to learn

In 1959, Arthur Samuel, a pioneer in the field of machine learning (ML), defined it as the “field of study that gives computers the ability to learn without being explicitly programmed”.

ML can be understood as computational methods that use experience to improve performance or to make accurate predictions. In this case, experience refers to past information or data that is available to us, which has been labelled and categorized. As with any computational exercise, the quality and amount of the data will be crucial to the accuracy of the predictions that will be made.

Looking through this lens, ML seems a lot like statistical modelling. In statistical modelling, we collect data, verify that it is clean — in other words, that we have completed, corrected, or deleted any incomplete, incorrect, or irrelevant parts of the data — and then use this clean dataset to test hypotheses and make predictions and forecasts. The idea behind statistical modelling is to represent complex issues in relatively generalizable terms, which is to say, terms that explain most of the events studied. Effectively, we programme the algorithm to perform certain functions on the data we submit. Put differently, the algorithm is static: it needs a programmer to tell it what to do when it is fed data. This approach makes sense as long as the task can be specified in advance by the programmer.
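
To make the “static” approach concrete, here is a minimal sketch (the numbers are made up for illustration) in which the programmer preselects the model, a straight line, and the data merely tunes its two parameters:

```python
# Static statistical modelling: the model (a straight line) is chosen
# by the programmer in advance; the data only tunes its parameters.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # noisy observations of roughly y = 2x
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))
```

No matter what data arrives, the model stays a straight line; the algorithm never decides on its own that a different model would fit better.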

But with ML, the procedure is flipped. Rather than preselecting a model and feeding data into it, in ML it is the data that determines which analytic technique should be selected to best perform the task at hand. In other words, the computer uses the data it has to select and train the algorithm. Hence the algorithm is no longer static. It analyses the data to which it is exposed, makes a determination about the best course of action, and then acts. In essence, it “learns” from the data and, in doing so, extracts knowledge from it.

This method of learning is based on repetition. Remember that an algorithm is nothing more than a set of instructions that a computer uses to transform an input into a particular output. Thus in ML, the learning aspect is just an algorithm repeating its execution over and over again, making slight adjustments each time, until a certain set of conditions is met. The litmus test of a learning algorithm is whether it can make accurate predictions on new data on which it has not previously been trained.
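
This repeat-and-adjust loop can be sketched in a few lines of Python. The one-parameter model, the data and the stopping threshold below are invented purely for illustration:

```python
# "Learning" as repetition: run, measure the error, adjust slightly,
# and stop once a condition (error small enough) is met.

data = [(1, 3), (2, 6), (3, 9), (4, 12)]   # inputs x with targets y = 3x

w = 0.0                                     # initial guess for the weight
learning_rate = 0.01

for step in range(10_000):
    # Mean squared error of the current weight over the data.
    error = sum((w * x - y) ** 2 for x, y in data) / len(data)
    if error < 1e-6:                        # stopping condition is met
        break
    # Slope of the error with respect to w; nudge w slightly downhill.
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= learning_rate * grad

print(round(w, 3))   # the weight approaches 3.0
```

Each pass through the loop is one “repetition”; the slight adjustment to `w` is the learning.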

Evolution of ML

Data obviously plays a primary role in this methodological process. More importantly, it is the structure of the data that determines how the learning process will occur. It is here that we see the three levels of ML:

Supervised ML

Supervised machine learning. Image: The Conversation

Here the computer is trained on data that is well labelled. This means that the data is already tagged with the correct label or outcome. For example, if we were to teach a computer to distinguish between pictures of cats and dogs, we would tag each image of a cat with the label “cat” and each image of a dog with the label “dog”.

This labelling is done by the programmer. Having learned the difference, the ML algorithm can then classify new information given to it and determine whether a new image shows a dog or a cat.
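
A minimal sketch of learning from labelled data is a nearest-neighbour classifier. The numeric features here (body weight in kg, snout length in cm) and their values are made up for the cat/dog illustration:

```python
# Supervised learning: every training example carries its correct label.
# A new example is classified by finding the most similar labelled one.
# Features are (body weight in kg, snout length in cm) - invented values.

training_data = [
    ((4.0, 2.0), "cat"),
    ((3.5, 2.5), "cat"),
    ((5.0, 3.0), "cat"),
    ((20.0, 8.0), "dog"),
    ((30.0, 10.0), "dog"),
    ((12.0, 7.0), "dog"),
]

def classify(features):
    """Return the label of the nearest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, label = min(training_data, key=lambda item: distance(item[0], features))
    return label

print(classify((4.2, 2.2)))    # small animal: labelled "cat"
print(classify((25.0, 9.0)))   # large animal: labelled "dog"
```

The quality of the labels is everything here: the classifier can only be as good as the tagged examples it was given.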

Based on this simplistic method, supervised ML can be used to perform much more complicated operations. One use is learning to read handwritten digits and letters. The way one person writes the number “1” or the letter “A” will not be the same as the way another person does.

Handwritten patterns of the digit “1” and of the letter “A”. Illustrations provided by the author.

By feeding the computer vast amounts of labelled examples of the number “1” or the letter “A”, we can train the algorithm to recognize the various forms these characters take. The computer thus begins to learn the variations and becomes increasingly competent at understanding these patterns.

Today, computers are better than humans at recognizing such patterns of handwriting. The larger the dataset, the better trained the algorithm. Once trained, the algorithm is given new data and uses its past experience to predict an outcome.

Unsupervised ML

This is where the algorithm is trained on a dataset that does not have any labels. The algorithm is not told what the data represents. In this case, the learning process depends on identifying patterns that recur in the data. Using the cat and dog example, the algorithm begins to separate the images it receives based on the inherent characteristics of the cats and dogs themselves.

In unsupervised learning, the algorithm must use methods of estimation based on inferential statistics to discover patterns, relationships and correlations within the raw, unlabelled dataset. As patterns are identified, the algorithm uses statistics to identify boundaries within the dataset. Data with similar patterns are grouped together, creating subsets of data. As the classification process continues, the algorithm begins to understand the dataset it is analysing, allowing it to predict the categorization of future data.

This clustering of data can automate decision making, adding a layer of sophistication to unsupervised learning. More importantly, it allows us to leverage data in a new way. What we lack in knowledge we make up for in data.
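
A minimal sketch of this clustering idea is k-means, a standard unsupervised algorithm. The data values, the choice of two clusters, and the starting centres below are all made up for illustration:

```python
# Unsupervised learning: k-means clustering on unlabelled 1-D data.
# The algorithm is never told what the numbers mean; it only discovers
# that they fall into two groups of similar values.

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7, 1.1, 9.2]

# Start with two guessed cluster centres.
centres = [0.0, 10.0]

for _ in range(20):
    # Assignment step: attach each point to its nearest centre.
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centres[i]))
        clusters[nearest].append(x)
    # Update step: move each centre to the mean of its cluster.
    centres = [sum(c) / len(c) for c in clusters]

print([round(c, 2) for c in centres])   # one centre per discovered group
```

The boundaries between the groups emerge from the data itself; no programmer ever labelled a point as belonging to either cluster.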

Reinforcement ML

Reinforcement learning is like unsupervised ML in that the training data is also unlabelled. However, when the algorithm is asked a question about the data, the outcome is graded, so there is still a level of supervision. The algorithm is presented with data that lacks labels, but is given an example with a positive or negative result. This positive or negative grade provides a feedback loop that allows the algorithm to determine whether the solution it is providing solves the problem. Effectively, it is the computerised version of human trial-and-error learning.

Reinforcement ML is often used to devise strategies. Because decisions lead to consequences, the output is prescriptive, not just descriptive as in unsupervised learning. This kind of learning has been used to train computers to play games. It is the approach used by DeepMind, the company acquired by Google in 2014, which trained its algorithms to learn to play Atari games.

It went on to create AlphaGo, which defeated one of the best professional human players 4-1 at Go, one of the most complex games in the world.
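
The feedback loop can be sketched with tabular Q-learning, a classic reinforcement-learning algorithm. The five-cell “corridor” world, the reward and the hyperparameters below are invented for illustration; this is a minimal sketch of the grading idea, not DeepMind's actual method (their Atari work used a far more elaborate deep Q-network):

```python
import random

# Reinforcement learning as computerised trial and error: the agent
# starts in cell 0 of a 5-cell corridor and receives positive feedback
# (+1) only on reaching cell 4.  It is never told the rules in advance.

random.seed(0)                          # reproducible runs
N = 5                                   # cells 0..4; cell 4 is the goal
actions = (-1, +1)                      # step left or step right
q = {(s, a): 0.0 for s in range(N) for a in actions}

alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)          # move, staying in the corridor
        reward = 1.0 if s2 == N - 1 else 0.0    # the "grade" on the outcome
        best_next = max(q[(s2, act)] for act in actions)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy steps right (+1) from every cell.
policy = [max(actions, key=lambda act: q[(s, act)]) for s in range(N - 1)]
print(policy)
```

Early episodes are mostly random wandering; as the positive grades propagate back through the `q` table, the behaviour becomes a strategy, which is why the output is prescriptive rather than merely descriptive.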

Implications for businesses

Today machine learning is being used in a number of areas. Google’s self-driving car was developed using machine learning, and machines can now lip-read faster than humans. ML has also been infiltrating almost every sector of finance in recent years: it is being used for algorithmic trading, time series analysis, portfolio management, fraud detection, customer service, news analysis and investment strategy construction.

But the real power of machine learning is unleashed with neural networks. In the next post, we will discuss them in more detail.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
