From What I Read: Artificial Intelligence

“Doesn’t look like anything to me.”

If you get the reference of that quote, let’s do a virtual high-five, for I have found another fellow fan of the HBO series, Westworld.

When talking about Artificial Intelligence, or AI, it is often too easy for the layman to think of human-like robots, like those from Westworld. While it is true that robots with high cognitive function do operate on AI, robots are far from being representative of what AI is.

So, what is AI? And as with my other posts, we will explore how AI works, its impact and issues, and how we should respond.

My readings for this article are mainly from SAS, McKinsey, Nick Heath on ZDNet, Bernard Marr, Erik Brynjolfsson and Andrew McAfee on Harvard Business Review, and Tom Taulli on Forbes.

What is the subject about?

The definition of AI seems to be rather fluid, as some of the articles pointed out. But one thing is for sure: the phrase was first coined by Minsky and McCarthy in their Dartmouth College summer conference paper in 1956. Heath summarised Minsky and McCarthy’s idea of AI as “any task performed by a program or a machine that, if a human carried out the same activity … the human had to apply intelligence to accomplish the task”. (Click here if you are interested in the proposal paper.)

The broadness of this initial definition unfortunately meant the debate on what constitutes AI would range far and wide.

Subsequent definitions did not do much to refine the original further. McKinsey referred to AI as the ability of machines to exhibit human-like intelligence, while Marr perceived AI as “simulating the capacity for abstract, creative, deductive thought – and particularly the ability to learn – using the digital, binary logic of computers”.

In short, machines emulating human intelligence.

However, a comment under Heath’s article shed some interesting light on the understanding of AI.


How does it work?

To reiterate the comment: “AI is a complex set of case statements run on a massive database. These case statements can update based on user feedback on how valid the results are”.

Such a definition would not be too far off from a technical one. Andrew Roell, a managing partner at Analytics Ventures, was quoted in Taulli’s article describing AI as computers being fed algorithms to process data towards certain desired outcomes.

Obviously, two components are required for AI to work: algorithms and data. However, what makes AI different from an ordinary piece of software is the component of learning. SAS described AI as working by “combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data”.
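To make the “learning from feedback” idea above concrete, here is a minimal sketch in plain Python. It is a hypothetical toy (a perceptron-style rule, not anything from the quoted sources): the program starts with a blank rule and nudges its weights whenever its prediction disagrees with the feedback, which is exactly the loop the commenter described.

```python
# Minimal sketch of learning from feedback (a perceptron-style update).
# Hypothetical toy example, not taken from the articles quoted above.

def predict(weights, features):
    """Return 1 if the weighted sum of features crosses zero."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def learn(examples, passes=10, rate=0.1):
    """Repeatedly adjust weights based on feedback (the true labels)."""
    weights = [0.0] * len(examples[0][0])
    for _ in range(passes):
        for features, label in examples:
            error = label - predict(weights, features)  # feedback signal
            weights = [w + rate * error * x
                       for w, x in zip(weights, features)]
    return weights

# Toy data: flag a message as spam (1) when it contains the word "free"
# (second feature); the first feature is a constant bias term.
examples = [
    ([1, 1, 0], 1),
    ([1, 1, 1], 1),
    ([1, 0, 1], 0),
    ([1, 0, 0], 0),
]
weights = learn(examples)
print([predict(weights, f) for f, _ in examples])  # → [1, 1, 0, 0]
```

The point is not the particular rule but the shape of the process: data in, predictions out, and the program rewrites its own parameters from the error signal rather than a programmer rewriting the code.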

Of course, the intricacies of AI are wide, considering the various subfields within, such as Machine Learning, Deep Learning, Cognitive Computing and Natural Language Processing. Some of these topics will be discussed in future posts. But for now, suffice it to say that these methods analyse a variety of data to achieve a certain goal.

There is also another way to categorise the research and development work in AI. Heath and Marr pointed out that there are two main branches. Narrow AI, or as Marr put it, “applied/specialised AI”, would be more familiar to ordinary people like us through its widespread application (think Apple’s Siri and the Internet of Things), since it simulates human thought to carry out specific tasks it has learned or been taught, without being explicitly programmed.

The other branch is “general AI”, or artificial general intelligence (AGI). General AI seeks to fully simulate the adaptable intellect found in humans – that is, being capable of learning how to carry out vastly different tasks and of reasoning on wide-ranging topics based on accumulated experience, as Heath and Marr pointed out. Such intelligence requires enormous processing power to match human cognitive performance, and AGI becoming a reality remains a story of the distant future. Others, however, would argue that given the evolution of processing technology, supported by further development in integrating multiple different narrow AIs, AGI may not be too far away, as indicated by IBM’s Watson.

How does it impact (in a good way)?

Even though AI sounds like a buzzword of recent times, its applications can be traced back quite a while. As an example, the Roomba (that circular robot vacuum cleaner that whizzes across the room) is an application of AI that leverages sensors and sufficient intelligence to carry out the specific task of cleaning a home – it was first conceived in 2002, 16 years ago (at the point of writing). Five years earlier, IBM’s Deep Blue machine defeated world chess champion Garry Kasparov.

As mentioned earlier, the application of narrow AI is widespread, since the scope is to carry out specific tasks. Heath pointed out several use cases, such as interpreting video feeds from drones carrying out visual inspections of infrastructure, organising calendars, chatbots responding to simple customer queries, and assisting radiologists in spotting potential tumours in X-rays. Brynjolfsson and McAfee, on the other hand, highlighted the advances in voice recognition (Siri, Alexa, Google Assistant) and image recognition (think of Facebook recognising your friends’ faces in your photos and suggesting tags).

If you notice, I have left out the cognition side of these applications, which I shall reserve for the Machine Learning article (and other articles) in the future.

In the world of business, AI may help businesses deliver enhanced customer experiences by customising offerings based on data about customer preferences and behaviour, as indicated by McKinsey. In manufacturing, AI may also enable smarter research and development through better error detection, and forecast supply and demand to optimise production.

What are the issues?

Going back to the core components of AI, you will see that one of the main dependencies of AI is data. It goes without saying, then, that quality data produces quality AI, and inaccuracies in the data will be reflected accordingly in the results.
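“Garbage in, garbage out” is easy to see even in a toy setting. The sketch below is a hypothetical illustration (not from the articles cited): a one-feature classifier that places its cut-off halfway between the two class averages. Train it once on clean measurements and once on the same data with a single mis-recorded value, and the corrupted rule misclassifies half the genuine cases.

```python
# Hypothetical illustration of how bad data degrades a learned rule.
# The classifier thresholds at the midpoint of the two class means.

def train_threshold(samples):
    """Fit a cut-off halfway between the mean of each labelled class."""
    lo = [x for x, y in samples if y == 0]
    hi = [x for x, y in samples if y == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

def accuracy(threshold, samples):
    """Fraction of samples the threshold classifies correctly."""
    return sum((x > threshold) == (y == 1) for x, y in samples) / len(samples)

clean = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
# Same measurements, but one value mis-recorded (7 entered as 70):
noisy = [(1, 0), (2, 0), (3, 0), (70, 1), (8, 1), (9, 1)]

t_clean = train_threshold(clean)  # cut-off at 5.0
t_noisy = train_threshold(noisy)  # cut-off dragged out to 15.5
print(accuracy(t_clean, clean))   # → 1.0
print(accuracy(t_noisy, clean))   # → 0.5
```

One bad entry out of six was enough to drag the learned cut-off far past every genuine positive case, and the model itself gives no hint that anything went wrong – the inaccuracy in the data simply reappears in the results.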

The other issue that current AI systems face is that many of them fall under the narrow AI category, which can only carry out specialised and clearly defined tasks. SAS pointed out, for example, that an AI system that detects healthcare fraud cannot also detect tax fraud or warranty claims fraud. The AI system is bound by the defined task and scope on which it was trained.

Brynjolfsson and McAfee’s article identified three risks arising from the difficulty humans have in understanding how AI systems reach certain decisions, given that advanced AI systems like deep neural networks have complex decision-making processes and cannot articulate the rationale behind their decisions, even when they gather a wealth of knowledge and data. The three risks are: hidden biases derived from the training data provided; reliance on statistical truths over literal truths, which may lack verifiability; and difficulty in diagnosis and correction when errors occur.

In decision making, AI systems may fall short in contextualisation – that is, understanding and taking into account the nuances of human culture. Such data would be rather difficult to derive, let alone provide for training. That being said, Google Duplex is an indicator of headway being made in overcoming such a challenge.

Further into the future, AI systems may lead to high technological unemployment as jobs are made redundant, as Heath implies. This is deemed a more credible possibility than an existential threat posed by AI – a concern shared not merely by science-fiction movies, but by famed and intelligent people like Stephen Hawking and Elon Musk.

In between the two possibilities lie various moral and ethical issues, such as machine rights, machine consciousness, singularity and strong AI, and so on. But even closer to the present, we are already dealing with ethical issues in the design of autonomous vehicles (which employ AI systems), commonly known as the “Trolley Problem”.

How do we respond?

There was a period in time categorised as the “AI winter”. It was the 1970s, and having seen little result from huge investments, public and private institutions pulled the plug on funding AI research, specifically of the AGI kind. It was in the 1980s that AI research was revived, thanks to business leaders like Ken Olsen who realised the commercial benefits of AI and developed expert systems focused on narrow tasks.

Fast forward to today, and AI is pervasive. Without knowing it, we may already have been users of AI technology. Part of the future imagined in the past is here. And for the most part, life has changed for the better.

Still, there is much room for AI applications in businesses to generate value (although much of the talk focuses on the subfield of machine learning). Companies should realise that, as with desktop computer technology, the resolution of current flaws in AI technology and its subsequent evolution can be accelerated through wider adoption.

However, we as a society may need to grapple with the grave and philosophical issues posed by AI, answering tough questions on the future of jobs and even of people’s lives as AI gradually strengthens. In the midst of it all, ethical concerns continue to loom, awaiting our attention. Perhaps the Partnership on AI, a foundation founded by tech giants like Google, IBM, Microsoft and Facebook, is a good place to start.


What is AI? Everything you need to know about Artificial Intelligence – ZDNet:

What is Artificial Intelligence And How Will It Change Our World? – Bernard Marr:

Artificial Intelligence – What it is and why it matters – SAS:

The Business of Artificial Intelligence – Harvard Business Review:

What Entrepreneurs Need To Know About AI (Artificial Intelligence) – Forbes:

Artificial Intelligence: The Next Digital Frontier? – McKinsey Global Institute:

iWonder – AI: 15 key moments in the story of artificial intelligence – BBC:
