Machine learning is a subset of artificial intelligence (the field of making machines behave intelligently). Essentially, machine learning algorithms find and apply patterns in data, and according to MIT…run the world! With vast amounts of data increasingly available, machine learning uses statistics to plough through these data sets and identify patterns that scientists and analysts can then turn into actionable insights.
Netflix is one of the easiest examples of this. A popular application of machine learning is the recommender system. By analysing your interactions with Netflix (keeping track of the programmes you watch, the ones you skip or don't quite finish, the genres you most frequent, and so on), it builds up a pattern of behaviour that a machine learning algorithm can then work on to "recommend" and "predict" what you will most likely find enjoyable next.
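To make the recommender idea concrete, here is a minimal sketch of one classic approach: find the viewer whose watch history is most similar to yours, then suggest what they finished and you haven't. The titles, viewers, and 0/1 histories below are all invented for illustration; a real system works at vastly larger scale.

```python
import math

# Hypothetical watch histories: 1 = watched to the end, 0 = skipped/unfinished.
titles = ["Drama A", "Drama B", "Comedy A", "Comedy B", "Thriller A"]
history = {
    "you":   [1, 1, 0, 0, 1],
    "alice": [1, 1, 0, 1, 1],
    "bob":   [0, 0, 1, 1, 0],
}

def cosine(u, v):
    """Cosine similarity between two watch-history vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Find the viewer whose taste most resembles yours...
others = [name for name in history if name != "you"]
nearest = max(others, key=lambda n: cosine(history["you"], history[n]))

# ...and recommend what they finished but you haven't watched yet.
recs = [t for t, mine, theirs in zip(titles, history["you"], history[nearest])
        if theirs and not mine]
print(nearest, recs)  # → alice ['Comedy B']
```

The pattern of behaviour mentioned above is just the history vector; the "prediction" falls out of measuring how close two such vectors are.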
Machine learning got a jolt of power when British-Canadian scientist Geoffrey Hinton pioneered deep learning. Through what is known as a neural network, deep learning amplifies machine learning's ability to find even the smallest of patterns. Neural networks model themselves on the brain: the nodes play the role of neurons, and the network the brain itself. And like the brain (or the body), to get the best results they need to be "trained" on your particular dataset.
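What "training" means can be shown with the smallest possible network: a single artificial neuron. Its weights start random, and each pass over the data nudges them so the output gets closer to the right answer. The task here (learning the logical OR function) is a toy example chosen purely for brevity.

```python
import math
import random

# Labelled examples for logical OR: inputs and the target output.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # random starting weights
b = 0.0
lr = 0.5  # learning rate: how big each training nudge is

def sigmoid(z):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(2000):            # repeated passes over the training set
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target           # how wrong the neuron was
        grad = err * out * (1 - out) # gradient for a sigmoid neuron
        w[0] -= lr * grad * x1       # nudge each weight downhill
        w[1] -= lr * grad * x2
        b -= lr * grad

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # → [0, 1, 1, 1], matching the OR labels
```

A deep network stacks many layers of such neurons, which is what lets it pick up far subtler patterns than this one-neuron toy.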
How you train your model leads us into the various types of machine learning algorithms we can use. The most popular type, and the focus of the vast majority of data science jobs, is supervised learning, where we label the data in our set and tell the machine exactly what pattern to look for. If you search Amazon for "tennis balls", you have told the search algorithm precisely what to look for in the database of products. It may find variations (big, small, yellow, green), but it will focus on products labelled "tennis balls". In unsupervised learning, by contrast, the machine is left to find whatever patterns it can by itself, with no labelling done on the data beforehand. A great application of this is cybersecurity: the system does not know what fraudulent activity looks like before it is committed, but when it spots an unusual pattern it raises a flag.
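The two styles can be contrasted in a few lines. The first half below is supervised: every example carries a label, and a simple nearest-neighbour rule matches new items against them. The second half is unsupervised: no labels at all, just a crude anomaly rule that flags whatever sits far from normal, in the spirit of the fraud-flagging example. All the numbers are made up for illustration.

```python
# Supervised: labelled examples of (diameter in cm, product label).
labelled = [(5.0, "tennis ball"), (5.5, "tennis ball"),
            (21.0, "football"), (22.0, "football")]

def classify(diameter):
    """1-nearest-neighbour: return the label of the closest known example."""
    return min(labelled, key=lambda item: abs(item[0] - diameter))[1]

print(classify(6.0))  # → tennis ball (closest labelled example wins)

# Unsupervised: no labels. Learn what "normal" looks like from the data
# itself and flag anything unusual, the way fraud detection raises a flag.
logins_per_day = [3, 4, 2, 5, 3, 4, 97]   # hypothetical user activity
mean = sum(logins_per_day) / len(logins_per_day)
flags = [x for x in logins_per_day if x > 3 * mean]  # crude anomaly rule
print(flags)  # → [97]
```

Real systems use far more sophisticated models on both sides, but the distinction is the same: the first snippet is told what to look for, the second decides for itself what stands out.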
Finally, the last major type of algorithm is known as reinforcement learning. Here the algorithm is given a clear goal ("beat your competitor at chess") and, through trial and error (accompanied by a reward or punishment depending on how each trial goes), it learns how to achieve that goal. In a nutshell, those are the fundamentals of machine learning!
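Chess is too big for a short sketch, so here is the same trial-and-error loop on an invented toy task: an agent on a five-square corridor must learn to walk right to reach the goal. Reaching the goal earns a reward, every other step costs a small penalty, and the Q-learning update gradually shifts the agent's estimates toward actions that pay off.

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                          # step left, step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value of each action per state
alpha, gamma, eps = 0.5, 0.9, 0.2           # learning rate, discount, exploration

random.seed(1)
for episode in range(200):                  # many trials
    s = 0
    while s != GOAL:
        # Mostly take the best-known action, sometimes explore at random.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = Q[s].index(max(Q[s]))
        s2 = max(0, min(N_STATES - 1, s + ACTIONS[a]))
        r = 1.0 if s2 == GOAL else -0.01    # reward or punishment
        # Q-learning update: shift the estimate toward reward + future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [Q[s].index(max(Q[s])) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]: "step right" learned in every state
```

Nobody told the agent that "right" was correct; the reward signal alone shaped the behaviour, which is exactly the trial-and-error idea described above.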