From What I Read: Machine Learning

Let’s be up front here: the introduction to my Artificial Intelligence post stole quite a lot of limelight from the rest of the AI series (since this post covers a subset of AI, and the next post will likely cover a subset of this one), so I will not agonise over opening with a “bang”.

The other disclaimer is that this post turned out nothing like I envisioned a month ago. Mainly, the further I dug into the topic, the more ways – deeper and more varied ways – I found to explain it. And this extends beyond reading materials: there are plenty of videos out there that aim to explain the subject (at varying levels of edutainment).

But in the interest of time and effort, I will keep my approach to Machine Learning rather layman-ish and bare-bones. I will, however, include links to resources I found interesting (including some that did not end up being used for this post) at the end.

My readings for this post are from Bernard Marr on Forbes, MATLAB & Simulink, Expert System, Yufeng Guo on Towards Data Science, Danny Sullivan on MarTech Today, SAS and several Quora replies.

What is the subject about?

So what is machine learning (ML)?

It is rather widely acknowledged that ML is a subset of Artificial Intelligence (AI), and so, at a concept level, it bears similarities to the goals of AI: to mimic human intelligence in machines. At the subset level, as Marr mentioned in his article, ML seeks to “teach computers to learn in the same way we do”: by interpreting information, classifying it, and learning from successes and failures.

Such a description is echoed by an article from MATLAB & Simulink (M&S), which stated that ML is a “data analytics technique that teaches computers to…learn from experience”, even adding that this way of learning “comes naturally to humans and animals”.

So what do “learning from experience” and “learning from successes and failures” underline? They imply the absence of explicit programming by a programmer, as Expert System’s (ES) article explained, which further added the idea of automation in the learning process.

Guo took a different approach by defining ML as “using data to answer questions”, outlining the idea of training on an input (“data”) and the outcome of making predictions or inferences (“answer questions”). Guo further mentioned that the two parts of the definition are connected by analytical models, a point which SAS’ article also highlighted.

To conclude this section, we can connect the two approaches to defining ML, sloppily amalgamated as “a data analytics technique that teaches computers to learn automatically through experiences by using data, ultimately to answer questions through inferences and predictions”.

How does it work?

In explaining how ML works, many of the articles in review would mention the two types of techniques under ML, namely supervised learning and unsupervised learning.

As M&S’ article puts it, supervised learning develops predictive models based on both input and output data to predict future outputs. Example applications include handwriting recognition (which leverages classification techniques like discriminant analysis and logistic regression) and electricity load forecasting (which uses regression techniques like linear and nonlinear models and stepwise regression).
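To make that concrete, here is a minimal sketch of supervised learning (my own toy example, not from the M&S article): fitting a line to made-up “electricity load” numbers using ordinary least squares, then predicting an output the model has never seen.

```python
# Minimal supervised learning sketch: fit y = a*x + b by ordinary
# least squares on labelled (input, output) pairs, then predict.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates for slope and intercept.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Toy data: hour of day vs. electricity load (made-up numbers).
hours = [1, 2, 3, 4, 5]
load = [2.1, 4.0, 6.2, 7.9, 10.1]

a, b = fit_line(hours, load)
print(round(a * 6 + b, 1))  # predict the load at an unseen hour 6
```

The “supervision” is simply that the training pairs contain both the input and the known output.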

Unsupervised learning seeks to find hidden patterns or intrinsic structures in input data through grouping and interpreting the data – there is no output data involved. This type of technique is usually used for exploratory data analysis, and sees applications in object recognition, gene sequence analysis and market research. The M&S article cited clustering as the most common unsupervised learning technique, which uses algorithms the likes of hierarchical clustering and hidden Markov models. In short, unsupervised learning is good for splitting data into clusters.
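To illustrate, here is my own toy sketch of clustering (a simplified 1-D k-means, not from the M&S article): the algorithm is given only inputs, no output labels, and still discovers the two groups on its own.

```python
# Minimal unsupervised learning sketch: 1-D k-means clustering.
# There are no output labels; the algorithm only groups the inputs.

def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centre.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each centre to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two obvious groups of numbers; the starting centres are deliberately bad.
points = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
centers, clusters = kmeans_1d(points, [0.0, 5.0])
print(sorted(round(c, 1) for c in centers))  # → [1.0, 9.0]
```

Real clustering libraries work in many dimensions, but the assign-then-update loop is the same idea.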

ES added several dimensions to the types of techniques and ML algorithms to shed more light on how ML works: semi-supervised ML algorithms, which fall between supervised and unsupervised learning and use both labeled data (input data with accompanying output data) and unlabeled data for training, to improve learning accuracy; and reinforcement ML algorithms, which interact with the environment by producing actions and discovering errors or rewards, so as to determine the ideal, optimised behaviour within a specific context.
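Reinforcement learning in particular lends itself to a tiny sketch. The following is my own toy “two-armed bandit” (not from the ES article), with made-up reward probabilities: the agent learns which of two actions is better purely from the rewards it discovers, with no labelled data at all.

```python
# Toy reinforcement learning sketch: an epsilon-greedy agent learning,
# purely from rewards, which of two actions is better.
import random

random.seed(0)
rewards = {"A": 0.2, "B": 0.8}    # hidden reward probabilities (unknown to agent)
estimates = {"A": 0.0, "B": 0.0}  # the agent's learned value of each action
counts = {"A": 0, "B": 0}

for step in range(2000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < rewards[action] else 0
    # Incremental average: nudge the estimate toward the observed reward.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent should come to prefer "B"
```

The “context” here is trivial, but the discover-rewards-and-adjust loop is the essence of the technique.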


Sullivan’s article mentioned the three major parts that make up ML systems: the model (the system that makes predictions or identifications), the parameters (the factors used by the model to produce decisions) and the learner (the system that adjusts the parameters, and subsequently the model, by looking at differences between predictions and actual outcomes).
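Those three parts can be sketched in a few lines of toy code (my own illustration, not Sullivan’s): a one-parameter model, and a learner that nudges the parameter based on the gap between predictions and actual outcomes.

```python
# Sullivan's three parts in miniature: a model (prediction rule),
# a parameter (the weight), and a learner that compares predictions
# with actual outcomes and adjusts the parameter to shrink the gap.

def model(x, weight):
    return weight * x                  # the model: makes predictions

def learner(data, weight=0.0, rate=0.01, epochs=200):
    for _ in range(epochs):            # repeat many times ("rinse and repeat")
        for x, actual in data:
            predicted = model(x, weight)
            error = predicted - actual  # prediction vs. actual outcome
            weight -= rate * error * x  # adjust the parameter
    return weight

data = [(1, 3.0), (2, 6.0), (3, 9.0)]  # the hidden truth here is y = 3x
w = learner(data)
print(round(w, 2))  # → 3.0
```

Real systems have millions of parameters, but the adjust-and-compare loop is the same.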

This way of explaining the workings of ML systems bears similarity to how CGP Grey explains it in his video, which I find rather interesting.

Guo outlined 7 steps of ML in his separate article:

  1. Data gathering
  2. Data preparation
  3. Model selection
  4. Model training
  5. Model evaluation
  6. (Hyper)Parameter Tuning
  7. Model prediction
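For the curious, here is how those 7 steps might look on a deliberately silly toy task (my own sketch, not Guo’s code), where the “model” is just a single threshold:

```python
# A compact walk-through of the 7 steps on a made-up toy task:
# classify a number as "large" (1) if it exceeds some threshold.
import random

random.seed(42)

# 1. Data gathering: collect labelled examples (here, synthesized).
data = [(x, 1 if x > 5 else 0) for x in [random.uniform(0, 10) for _ in range(200)]]

# 2. Data preparation: shuffle, then split into training and evaluation sets.
random.shuffle(data)
train, test = data[:150], data[150:]

# 3. Model selection: a one-parameter threshold classifier.
def predict(x, threshold):
    return 1 if x > threshold else 0

def accuracy(dataset, threshold):
    return sum(predict(x, threshold) == y for x, y in dataset) / len(dataset)

# 4. Model training: pick the threshold that best fits the training set.
candidates = [i / 10 for i in range(0, 101)]
best = max(candidates, key=lambda t: accuracy(train, t))

# 5. Model evaluation: measure on data the model has never seen.
print(round(accuracy(test, best), 2))

# 6. (Hyper)parameter tuning would repeat steps 4-5 while varying settings
#    such as the resolution of the candidate grid.

# 7. Model prediction: use the trained model on a brand-new input.
print(predict(7.3, best))
```

A real project swaps the threshold for a proper model, but the order of the steps is the same.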

Most of the steps are pretty similar to what Sullivan’s article described and implied, including the step of training, which Sullivan described as the “learning part of machine learning” and “rinse and repeat” – this process reshapes the model to refine its predictions.

Again, this is not a technical post, so I will spare you too much detail. I will, however, include links to a few videos for you to watch should you be interested.

And finally, on Quora, there is a response that breaks down how ML works in a very relatable manner – machines are trying to do tasks the way we do them, but with infinite memory and the speed to handle millions of transactions every second.

How does it impact (in a good way)?

Many of us have been experiencing applications of ML, unknowingly, every day. Take YouTube’s video recommendation system, which relies on algorithms and input data – your search and watch history. The model is further refined by other inputs, such as the “Not interested” button you clicked on some of its recommendations, and (perhaps) the percentage of each video you watched.
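For illustration only, here is a drastically simplified, hypothetical sketch of the idea – YouTube’s actual system is of course far more complex and not public – scoring candidate videos by how much their tags overlap with the watch history, and penalising “Not interested” feedback:

```python
# Hypothetical, toy recommendation scoring: reward overlap with the
# watch history, penalise overlap with "Not interested" feedback.

watch_history = [{"cats", "funny"}, {"cats", "music"}, {"coding", "python"}]
not_interested = [{"music", "live"}]

def score(candidate_tags):
    s = sum(len(candidate_tags & watched) for watched in watch_history)
    s -= 2 * sum(len(candidate_tags & rejected) for rejected in not_interested)
    return s

candidates = {
    "cat compilation": {"cats", "funny"},
    "concert stream": {"music", "live"},
    "python tutorial": {"coding", "python"},
}
ranked = sorted(candidates, key=lambda v: score(candidates[v]), reverse=True)
print(ranked[0])  # the video most aligned with the watch history
```

Every name and number here is invented; the point is only that feedback signals reshape the ranking.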

And speaking of recommendations, how can we not include the all-too-famous Google search engine and its results recommendations? And speaking of Google, how can we not bring to mind their Google Translate feature which allows users to translate languages through visual input?

So certainly, use cases for ML are quite prevalent in areas the public at large is familiar with.

M&S outlined several other areas where ML has become a key technique to solve problems, such as credit scoring in assessing credit-worthiness of borrowers, motion and object detection for automated vehicles, tumour detection and drug discovery in the field of biology, and predictive maintenance for manufacturing.

SAS’ article highlighted that ML enables faster and more complex data analysis with better accuracy, while also being able to process the large amounts of data coming from data mining and affordable data storage.

And when ML is able to do tasks that would have required humans in the past, that means cost savings for the enterprises involved. That thought provides a nice segue to the next section.

What are the issues?

Now, call me lazy if you want, but as I mentioned earlier: since ML is a subset of AI, several issues faced by AI are faced by ML too, such as the problem of input data quality (both accuracy and freedom from bias), and the difficulty of explaining how a model reached its conclusion (especially if it deploys neural network techniques).

To reiterate from the previous post, we may foresee jobs being displaced as tasks are increasingly automated and taken over, more efficiently, by ML systems. That being said, the glass-half-full view is that job functions are being augmented and changed – if we could get the workforce to adapt to these new functions, the impact could be minimised.

As ML becomes widely adopted, there will be greater demand for skilled resources. This sounds like a solution to the glass-half-full view just mentioned, but seeing that the field of ML is still relatively new, it will probably mean higher costs and difficulty in acquiring ML expertise, let alone in training the existing workforce.

And as ML becomes more widely used, its hunger for data will grow ever more insatiable. As a society, we may increasingly find ourselves having to address how much personal data we should be sharing, as Doromal writes in his Quora reply.

But to get to wide adoption, ML needs to be democratised: presently, investments in ML can be hefty, and hence ML remains exclusive, with the more advanced systems available only to users who can afford them.

How do we respond?

My answer to this question does not run far from what was posed in the post on AI. But to add to that, as I mentioned in the earlier section, we as a society need to take a hard look at how we perceive data privacy, since ML depends on the availability of data to form better predictions and inferences.

There is growing interest in ML among companies that see the benefits it can reap. Perhaps this high demand will create a greater push for the development, and subsequent democratisation, of the technology. That said, companies need to strike a balance between deploying ML and managing a workforce that may increasingly be made redundant.

The teaching and learning of ML should become more widespread to meet the increased need for such a skilled workforce. Meanwhile, individuals in society will also need a better level of awareness about ML in the years to come, in order to understand how the automated decisions they face are derived.

ML is unlike the other topics mentioned in this blog, in that the technology is already here today, up and running (while things like ICOs, and even the commercial use of blockchain, are yet to be seen). And as mentioned, its applications have already become rather prevalent. Individuals in society are probably still some way off from a good understanding of ML, but that will likely change as widespread automation increasingly creeps and looms on the horizon.

Interesting video resources

How Machines Learn – CGP Grey

Machine Learning & Artificial Intelligence: Crash Course Computer Science #34 – CrashCourse

What is Machine Learning? – Google Cloud Platform

But what *is* a Neural Network? | Chapter 1, deep learning – 3Blue1Brown

Interesting reading resources

What Is Machine Learning – A Complete Beginner’s Guide In 2017 – Forbes

What Is Machine Learning? | How It Works, Techniques & Applications – MATLAB & Simulink

What is Machine Learning? A definition – Expert System

How Machine Learning Works, As Explained By Google – MarTech Today

How do you explain Machine Learning and Data Mining to non Computer Science people? – Quora

Machine Learning: What it is and why it matters | SAS

What is Machine Learning? – Towards Data Science

The 7 Steps of Machine Learning – Towards Data Science

5 Common Machine Learning Problems & How to Beat Them – Provintl

What are the main problems faced by machine learning engineers at Google? – Quora

An Honest Guide to Machine Learning: Part One – Axiom Zen Team – Medium

These are three of the biggest problems facing today’s AI – The Verge
