5G: the definitive deal-breaker or just hot air of hype?

By now (at the time of writing in December 2019), unless you have been living under a rock for the past couple of years, you would be no stranger to the buzz about 5G telecommunication technology. In fact, as alluded to in my 2018 year-ender post, 5G trials have been conducted in many places; to update on that, South Korea's 5G services went live in April 2019, making it one of the first countries to roll out 5G, while selected cities in the UK and US have seen 5G services provided by telco companies. There is even talk that development of 6G is underway, even though 5G has yet to see a mass-scale rollout.

Even so, given that little is known about 6G at the moment and that 5G is still very much relevant, a brief discussion of 5G technology is, in my opinion, warranted.

The Subject

What is 5G? Common sense would suggest that 5G is an upgrade from 4G, which in turn evolved from its predecessor, 3G – and so on. (After all, it is so named because it is the fifth generation of wireless network technology.)

In fact, most introductory materials on 5G would at least do a short recap of how mobile wireless telecommunication technology advanced from one “G” to the next. The first generation of such technology was developed in the late 1970s and 1980s, with analog radio waves carrying unencrypted voice data – WIRED chronicled that anyone with off-the-shelf components could listen in on conversations.

The second “G” arrived in the 1990s; its digital systems not only made voice calls safer but also allowed more efficient data transfers. The third generation increased the bandwidth for data transfers, allowing for simple video streaming and mobile web access – this ushered in the smartphone era as we know it. 4G came along in the 2010s with the exponential rise of the app economy driven by the likes of Google, Apple and Facebook.

The development of 5G was rather a long time coming, though – work on the core specifications began in 2011 and would not be completed until some seven years later. But what does 5G entail?

The Pros and Problems

From the outset, 5G promised far greater connection speeds – around 10 gigabits per second (about 1.25 gigabytes per second). Such (theoretical) speeds are roughly 600 times faster than average 4G speeds on today’s mobile phones. That being said, current 5G tests and roll-outs show that speeds in practice hover around 200 to 400 megabits per second. One noteworthy test is Verizon’s 5G coverage in Chicago, where download speeds were shown to reach nearly 1.4 gigabits per second – though the same may not hold for upload speeds, and the other major caveat is the limited signal range.
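
To make those headline numbers concrete, here is a quick back-of-the-envelope check in Python; the 17 Mbps “average 4G” figure is my own assumption, chosen only because it roughly reproduces the 600-times claim.

```python
# Back-of-the-envelope check of the headline figures. The 17 Mbps "average 4G"
# speed is an assumed value picked to roughly reproduce the quoted multiple.
peak_5g_gbps = 10
peak_5g_gigabytes_per_s = peak_5g_gbps / 8            # 8 bits per byte -> 1.25 GB/s
assumed_average_4g_mbps = 17
multiple = (peak_5g_gbps * 1000) / assumed_average_4g_mbps

print(peak_5g_gigabytes_per_s)   # 1.25
print(round(multiple))           # ~588, in the ballpark of the "600 times" claim
```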

5G primarily uses millimeter wave technology (in the portion of the wireless spectrum between 30 GHz and 300 GHz), which can transmit data at higher frequencies and faster speeds but suffers a major drawback in reliably covering distance due to its extremely short wavelength. On a lighter note, this forced TechRadar to perform a “5G shuffle” dance around the network node when testing the high speeds in Chicago. More concretely, it exemplifies the difficulty of rolling out 5G: massive numbers of network points need to be deployed, which means new network infrastructure may be required, vastly different from the existing infrastructure supporting 4G and earlier network technology.
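
For a sense of why the band is called “millimeter wave” in the first place, here is a quick calculation using the standard wavelength formula (wavelength = speed of light ÷ frequency) over the 30–300 GHz range quoted above:

```python
# Wavelength = speed of light / frequency. Across 30-300 GHz this works out to
# roughly 10 mm down to 1 mm, hence "millimeter wave" and hence the short range.
SPEED_OF_LIGHT_M_S = 3.0e8  # approximate

for freq_ghz in (30, 300):
    wavelength_mm = SPEED_OF_LIGHT_M_S / (freq_ghz * 1e9) * 1000
    print(f"{freq_ghz} GHz -> wavelength of about {wavelength_mm:.0f} mm")
```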

But just as we have overcome past adversities and challenges in technology, assuming we can figure out and execute a viable deployment of 5G networks (not a far-fetched assumption at the current rate of progress), 5G also promises a greater number of devices being served in near real-time. That means accelerated advancement in the Internet of Things, where more internet-connected devices and sensors can provide real-time data and execution – benefiting consumers and, even more so, various industries. It also means autonomous vehicles, which rely heavily on real-time connectivity, would probably see the light of day in terms of realisation and adoption.

However, much of the conversation about 5G in the past couple of years has been about the “who” and “how” of infrastructure deployment. On this front, Chinese companies have reportedly outpaced their American counterparts in perfecting 5G network hardware. While China’s rate of technological advancement should no longer surprise anyone, the achievements came in light of recent concerns that these companies were involved in state-backed surveillance by introducing backdoors into their network equipment. Governments in the UK and US have been grappling with the 5G infrastructure dilemma: use infrastructure from Chinese companies and risk creating vulnerabilities exploitable by a foreign power, or develop their own, which takes time and may not catch up with the economic powerhouse soon enough.

The other issue with the impending worldwide 5G roll-out is consumer device compatibility. As when 4G was first introduced, there is currently (in 2019) only a very limited number of devices with 5G capabilities. And among those that do, CNET reported that the phones are limited to the millimeter wave spectrum band currently used for 5G, and will not work on other spectrum bands if and when those are incorporated into 5G in the future. The device compatibility problem may evolve and resolve itself as 5G deployment goes on.

The Takeaways

The discussion about 5G is somewhat overdue, in the sense that mass roll-out of the infrastructure is already underway or in the works, albeit on the timeframes each country has set. In fact, Malaysia is set to deploy 5G as early as the second half of 2020, according to one report. It is therefore imperative for industries to explore how 5G technology may be leveraged to achieve greater efficiency and effectiveness.

And whilst 5G is just about to take off, there are already discussions about 6G, with research on the sixth generation of network technology being initiated by the Chinese government, a Finnish university, and companies such as Samsung. Going by how long each generation’s technology takes to develop, we will probably see 6G taking shape in the 2030s.

As for the question set out in the title: is 5G a deal-breaker? Yes – but it was not one in 2019, and maybe not yet in 2020. Mass infrastructure deployment must be complemented with industrial applications and use-cases in order to fully reap the potential 5G possesses. But as we have observed with previous waves of technology, this will inevitably fall into place – it is only a question of “when” and “how soon”.

With that, I shall end this post, which serves as the last post for the year (and the decade, if you are of the opinion that decades should begin with a 0 instead of 1).

And if I do continue writing this blog, see you next year (and next decade).

Robotic Process Automation: What Is It, and What It Brings

Whenever the term Robotic Process Automation (RPA) is mentioned, it is not hard to conjure images of cold, mechanical machines doing physical labour, replacing jobs and rendering human workers redundant. Such a perception could not be further from the truth, not just because the word “robotic” can be misleading, but also because of a general lack of understanding of what RPA is beyond the headlines.

The Subject

So what is RPA? Unlike many other topics discussed on this site, RPA has one specific, official definition published by a governing body (in this case, a diverse panel of industry participants). According to the IEEE Guide for Terms and Concepts in Intelligent Process Automation published by the IEEE Standards Association, RPA is defined as a “preconfigured software instance that uses business rules and predefined activity choreography to complete the autonomous execution of a combination of processes, activities, transactions, and tasks in one or more unrelated software systems to deliver a result or service with human exception management”.

Now, the problem with standard definitions is that the meaning can often get lost in a sea of words. One site that cited this definition had to include a simpler analogy: software robots that mimic human actions.

This, however, should not be confused with Artificial Intelligence (AI), which the same site likened to human intelligence being simulated by machines. In fact, RPA sits below AI on a doing-thinking continuum: RPA is more process-driven, whereas AI is data-driven.

A doing-thinking continuum, with robotic process automation being in the middle-left under process driven, and artificial intelligence on the far right under data-driven.

So how does RPA work? Several sites point out that RPA evolved from a handful of earlier technologies. The most commonly cited is screen scraping – collecting data displayed on screen, usually from a legacy application, for use in a more modern interface. Another is (traditional) workflow automation, where a list of actions is programmed into software to automate tasks while interacting with back-end systems through application programming interfaces (APIs) or scripting languages.

RPA, having evolved from those technologies, develops its list of actions by monitoring users performing the task in the Graphical User Interface (GUI), and then performs the automation by repeating those tasks on the GUI. Furthermore, RPA does not require a physical screen to operate, as the actions can take place in a virtual environment.
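
As a rough illustration of what that GUI-driven style of automation looks like, here is a minimal sketch using the pyautogui library; the screen coordinates, field layout and invoice values are entirely hypothetical, and a real RPA platform would record and manage these for you.

```python
# Minimal sketch of GUI-level automation in the spirit of RPA: the "bot" replays
# the clicks and keystrokes it observed a human performing. All coordinates and
# field values below are hypothetical.
import time

import pyautogui  # pip install pyautogui

invoice = {"customer": "ACME Sdn Bhd", "amount": "1250.00"}

pyautogui.click(200, 180)                            # focus the (assumed) "customer" field
pyautogui.write(invoice["customer"], interval=0.05)  # type the value like a human would
pyautogui.click(200, 230)                            # focus the (assumed) "amount" field
pyautogui.write(invoice["amount"], interval=0.05)
pyautogui.press("enter")                             # submit the form
time.sleep(1)                                        # crude wait for the legacy app to respond
```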

The Pros and Cons

It is not too hard to look at the continuum above (also called the “Intelligent Automation Continuum”, albeit a simpler version of it) and relate the benefits and risks to topics already discussed here, such as Machine Learning and Artificial Intelligence. However, since RPA is process-driven rather than data-driven, the benefits differ as well.

Multiple sources cite the benefit of greater efficiency, as RPA is able to perform repetitive tasks quickly, around the clock and with minimal error. With such efficiency, organisations that use RPA may reap cost savings on staffing, since such tasks no longer require the same headcount.

Some sites were more subtle about the message of reduced staffing, pointing out that RPA may free staff from monotonous, repetitive work to take on more productive, high-value tasks that require creativity and decision-making, or exploring the opportunity for people to be re-skilled into new jobs in the new economy.

But just like many other topics discussed on this site, human worker redundancy is the elephant in the room. According to estimates from Forrester Research, RPA software could displace 230 million or more knowledge workers – about 9 percent of the global workforce. Furthermore, re-skilling displaced workers may not always be on an organisation’s agenda, since there may not be that many new jobs available for them, not to mention that re-skilling may negate the cost savings achieved. That said, many organisations have already resorted to Business Process Outsourcing (BPO) for exactly the kinds of tasks RPA is suited for, so displacement may be felt most acutely in BPO firms.

Another benefit cited by some sites is that RPA can be used without heavy customisation of systems and infrastructure. Since RPA is generally GUI-based, it does not require deep integration with systems or alterations to infrastructure, and is supposedly easy to implement. In fact, automation efforts can be boosted by combining RPA with other cognitive technologies such as Machine Learning and Natural Language Processing.

That being said, RPA’s dependency on systems’ user interfaces carries a risk of obsolescence. RPA interacts with the user interface exactly as it was trained or programmed to, and when the interface changes, the automation breaks down. Remember, too, that RPA relies on the exactness of data structures and sources, which makes it rather inflexible – a stark contrast to how easily humans adjust their behaviour to changes as they arise.

Then there are APIs. Modern applications usually expose APIs, which offer a more “resilient approach” to interacting with back-end systems for process automation, compared with the brittleness RPA faces from the limitation described earlier. Furthermore, APIs may be the more favourable option in an end-to-end straight-through processing ecosystem involving multiple operating systems and environments.
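
For contrast, here is the same hypothetical invoice submitted through a back-end API instead of the GUI; the endpoint and payload shape are made up, but the point stands that the integration keeps working even if the user interface is redesigned.

```python
# The same hypothetical invoice submitted via a back-end API instead of the GUI.
# The endpoint and payload shape are assumptions; the point is that a versioned
# API survives user-interface redesigns that would break a screen-driven bot.
import requests  # pip install requests

invoice = {"customer": "ACME Sdn Bhd", "amount": 1250.00}
response = requests.post(
    "https://erp.example.com/api/v1/invoices",  # hypothetical endpoint
    json=invoice,
    timeout=10,
)
response.raise_for_status()
print(response.json())
```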

The Takeaway

There are many use cases for RPA these days, so it is not exactly a new topic. And with the criticism of its dependency on features that may change or become obsolete, RPA may not seem as alluring as it once did. In fact, a common rule of thumb is to consider whether processes could be handled straight-through with existing capabilities before resorting to RPA.

Organisations should identify tasks to which RPA can be applied and remain relevant for years to come before making the decision. Others advise a more broad-based approach to investing in automation – considering the whole continuum instead of expecting RPA to be the silver bullet for operational efficiency.

As for the redundancy problem, it has been a recurring theme in this age of digitalisation. To reiterate several posts written here, society as a whole needs to confront such issues and answer grave, philosophical questions concerning human jobs and roles in the future. It is an essential discourse, and unfortunately one that is not happening with due seriousness. And if history is any reference, doing little is simply the Luddite approach.

Fourth Industrial Revolution vs. Industry 4.0: Same but different?

It is the new year of 2019, and for the first post of the year I would like to write about a key concept that will set the underlying tone for the year(s) ahead, and the backdrop for some of the topics I may feature on this blog this year (some were already featured in 2018). The terms “Fourth Industrial Revolution” and “Industry 4.0” have become buzzwords of the century, moving from mere jargon used by management consultants to widespread use across industries.

But as I came to realise, even though the two terms “Fourth Industrial Revolution” and “Industry 4.0” are often used interchangeably, the equivalence might not be as strong as we perceive.

With that, let’s get right into what these terms entail.

Fourth Industrial Revolution

The term “Fourth Industrial Revolution” (4IR) was first coined by World Economic Forum founder and executive chairman Klaus Schwab in 2015, when he compared today’s technological progress with the industrial revolutions that came before.

Source: https://online-journals.org/index.php/i-jim/article/viewFile/7072/4532

Of course, there are various definitions and descriptions of the 4IR, but they all point to an industrial transformation “characterized by a range of new technologies that are fusing the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even challenging ideas about what it means to be human”, as the World Economic Forum puts it.

Schwab has highlighted that the 4IR is not a mere extension of the Third Industrial Revolution, which spanned from the 1960s to the end of the 20th century, as it differs in velocity, scope and systems impact. He pointed out that the speed of technological progress under 4IR would be exponential rather than linear, that the scope is wider, with every industry being affected, and that it would impact systems of production, management and governance in a transformative way.

Among the technologies cited under 4IR are “artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing”.

Schwab has also identified the opportunities and challenges that underlie 4IR. 4IR may bring about an improvement in global income levels and quality of life for people across the world through greater accessibility to affordable digital services, and the efficiencies brought by these services. Relating to the point of efficiency, businesses stand to benefit from more effective communication and connectivity through technological innovation under 4IR.

However, 4IR raises concerns about widening existing inequality. Given that automation and digitalisation can substitute for and displace workers – if they have not already – this might “exacerbate the gap between returns to capital and return to labour”. In other words, low-skilled workers, who generally come from the poorer segments of society, would increasingly face scarce job opportunities, while the owners of capital (in this case the automation, robotic and digital systems) – mainly innovators, shareholders and investors – would exemplify the adage that “the rich get richer”. Considering how much of the anxiety and discontent of the current age is fuelled by inequality, perceived or otherwise, growing inequality is certainly a growing problem.

Nonetheless, Schwab reminds us that all of us are responsible for guiding this evolution, and that 4IR can complement the “best of human nature”, bringing the human race to a new level of moral consciousness based on a shared sense of destiny.

Industry 4.0

So how is Industry 4.0 different from 4IR? To begin with, the term comes from a different time and place.

The term Industry 4.0 finds its origins in Germany’s Industrie 4.0, a high-tech strategy by the German government to promote technological innovation in product and process technologies within the manufacturing industry. The Malaysian Ministry of International Trade and Industry defines Industry 4.0 as “production or manufacturing based industries digitalisation transformation, driven by connected technologies”.

At the core of Industry 4.0 lies several foundational design principles:

  1. interconnection between people, systems and devices through the Internet of Things;
  2. information transparency to provide users with plenty of useful information to make proper decisions;
  3. technical assistance to support the aggregation and visualisation of information for real-time decision making and problem solving, as well as to help users carry out tasks that are unfeasible for them;
  4. decentralised decisions made autonomously by the systems themselves.

These design principles imply that Industry 4.0 heavily involves specific types of technology, mainly those for achieving inter-connectivity and automation. Cleverism cited research which identified the main technologies involved in Industry 4.0 and outlined four main components: Cyber-Physical Systems, the Internet of Things, the Internet of Services and the Smart Factory.

Given that the Internet of Things has been widely discussed (even on this site), let’s have a brief look at what the other terms entail:

  • Cyber-Physical Systems aim to integrate computation and physical processes, so that physical processes can be monitored by devices over a network (see the sketch after this list). Developing such systems involves uniquely identifying objects throughout processes, developing sensors and actuators for the exchange of information, and integrating those sensors and actuators.
  • The Internet of Services looks at how connected devices (under Internet of Things) can become an avenue of value creation (and revenue generation) for manufacturers.
  • A smart factory is a manufacturing plant that puts the aforementioned concepts together, by adopting a system that is aware of the surrounding environment and the objects within it. As the research paper mentioned, “the Smart Factory can be defined as a factory where Cyber-Physical Systems communicate over the Internet of Things and assist people and machines in the execution of their tasks”.
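
As a rough sketch of the sensor-to-network link such a Cyber-Physical System depends on, the snippet below publishes temperature readings from a simulated production-line sensor over MQTT; the broker address, topic name and sensor helper are all assumptions, and it presumes the paho-mqtt 1.x client API.

```python
# Rough sketch of a production-line temperature sensor publishing readings over
# a network, so that other systems (dashboards, controllers) can subscribe to them.
# Broker address, topic and the read_temperature() helper are hypothetical.
import json
import random
import time

import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"


def read_temperature() -> float:
    """Stand-in for a real sensor driver."""
    return 20.0 + random.random() * 5.0


client = mqtt.Client()
client.connect("broker.factory.example", 1883)  # hypothetical broker

while True:
    reading = {"line": 1, "temperature_c": read_temperature(), "timestamp": time.time()}
    client.publish("factory/line1/temperature", json.dumps(reading))
    time.sleep(5)
```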

The benefits and challenges of Industry 4.0 are similar to those of 4IR, with a more specific focus on how it impacts the manufacturing industry.

One such benefit is allowing manufacturers to offer customisation to customers. Industry 4.0 empowers manufacturers with the flexibility to respond to customer needs through inter-connectivity, the Internet of Things and the Internet of Services. With connectivity to consumer devices through the Internet of Things, manufacturers gain seamless access to consumer behaviour and needs, and therefore have the potential to cater to unique consumer demands faster than conventional go-to-market methods.

On the flip side, one of the greatest challenges Industry 4.0 faces is security and privacy. With great powers of connectivity comes great responsibility for ensuring data transmitted across these connections is protected. The security challenges discussed in the Internet of Things article also apply to manufacturers – even more so considering that processing methods are trade secrets in most manufacturing industries. At the same time, as manufacturers increasingly assume the role of collectors of consumer data under Industry 4.0, consumers’ concerns about how their data might be handled and used will grow.

Still, despite the challenges, the future is bright for Industry 4.0 because of the process efficiencies it promises – which is why the Malaysian government seems convinced of and committed to this technological trend, acknowledging the need for industries to transform through its 2019 Budget allocation for SMEs adopting automation technology.

Final Thoughts

In conclusion, even though “Fourth Industrial Revolution” and “Industry 4.0” are often used interchangeably, it is rather clear that the two terms have different focuses. Some suggest that Industry 4.0 is the more relevant framing of technological progress than the broader concept of 4IR, while others consider Industry 4.0 a subset of the Fourth Industrial Revolution.

At the end of the day, though, it is up to us to figure out how to envision the future with 4IR and Industry 4.0 by resolving pertinent issues surrounding personal data ethics and the future of the workforce.

Featured Image credit: Christoph Roser at AllAboutLean.com

From What I Read: Deep Learning

(If you came because of the Bee Gees’ excerpt, congratulations – you’ve just been click-baited.)

Recently, I came across a video on my Facebook news feed showing several Hokkien phrases used by Singaporeans – one of which was “cheem”, literally “deep” in English. It is typically used to describe someone, or something, as very profound or complex in content or philosophy.

It seems that despite the geographical distance, the East and the West share a common understanding of the word “deep”. The English term “shallow” means simplistic, apart from describing a lack of physical depth, as does the phrase “skin deep”.

Of course, the term “deep learning” (DL) does not simply derive from “deep” in the sense of complicated, but the method is certainly nothing short of complex.

For this post, I will do the write-up in a slightly different manner – one article will serve as the “anchor” in answering each section, and the other readings will build on the foundation laid by that anchor. If you pay attention, you will notice the pattern.

My primary readings are from the following: Bernard Marr (through Forbes), Jason Brownlee, MATLAB & Simulink, Brittany-Marie Swanson, Robert D. Hof (through MIT Technology Review), Radu Raicea (through freecodecamp.org), and Monica Anderson (through Artificial Understanding and a book by Lauren Huret). As usual, the detailed references are included below.

What is the subject about?

(Before I go into the readings, I want to go back to how the term “deep learning” was first derived. It first appeared in academic literature in 1986, when Rina Dechter wrote about “Learning While Searching in Constraint-Satisfaction-Problems” – the paper introduced the term to Machine Learning, but did not shed light on what DL is more commonly known for today: neural networks. It was not until 2000 that the term was applied to neural networks by Aizenberg & Vandewalle.)

As covered in my previous posts, DL is a subset of Machine Learning, which is itself a subset of Artificial Intelligence. Marr pointed out that while Machine Learning takes several core ideas of AI and “focuses them on solving real-world problems…designed to mimic our own decision-making”, DL narrows the focus further to certain Machine Learning tools and techniques, applying them to solve “just about any problem which requires “thought” – human or artificial”.

Brownlee offered a different dimension to the definition of DL: “a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks”. This definition is supported by several researchers cited in the article, some of whom are:

  • Andrew Ng (“The idea of deep learning as using brain simulations, hope to: make learning algorithms much better and easier to use; make revolutionary advances in machine learning and AI”)
  • Jeff Dean (“When you hear the term deep learning, just think of a large deep neural net. Deep refers to the number of layers typically…I think of them as deep neural networks generally”)
  • Peter Norvig (“A kind of learning where the representation you form have several levels of abstraction, rather than a direct input to output”)

The article as a whole was rather academic in nature, but also offered a simplified summary: “deep learning is just very big neural networks on a lot more data, requiring bigger computers”.

The description of DL as a larger-scale, multi-layer neural network was supported by Swanson’s article. The idea of a neural network mimicking a human brain was reiterated in Hof’s article.

How does it work?

Marr described how DL works: a large amount of data is fed through neural networks – “logical constructions asking a series of binary true/false questions, or extract a numerical value, of every bit of data which pass through them, before classifying them according to the answers received” – in order to make decisions about other data.

Marr’s article gave the example of a system designed to record and report the number of vehicles of a particular make and model passing along a public road. The system would first be fed a large database of car types and their details, which it would process (hence “learning”) and compare against data from its sensors – by doing so, the system could classify the types of vehicles that passed by with some probability of accuracy. Marr further explained that the system would increase that probability by “training” itself with the new data – and thus new differentiators – it receives. This, according to Marr, is what makes the learning “deep”.

Brownlee’s article, through its aggregation of prior academic research and presentations, pointed out that the “deep” refers to the multiple layers within neural network models – which the systems use to learn representations of data “at a higher, slightly more abstract level”. The article also highlighted a key aspect of DL: “these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure”.

Raicea illustrated the idea of neural networks as neurons grouped into three types of layers: an input layer, hidden layer(s) and an output layer – “deep” refers to having more than one hidden layer. The computation is facilitated by connections between neurons, each associated with a (randomly initialised) weight that dictates the importance of the input value. The system iterates through the data set and compares its outputs with the real outputs to see how far off it is, before readjusting the weights between neurons.
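
A minimal sketch of that structure, using nothing more than NumPy: one input layer, one hidden layer and one output layer, with randomly initialised weights that are repeatedly readjusted by comparing the network’s outputs against the real outputs. The XOR toy problem, layer sizes and learning rate are arbitrary choices made purely for illustration.

```python
import numpy as np

# Tiny network trained on the XOR toy problem, purely to show the structure:
# input layer -> one hidden layer -> output layer, with weights nudged each pass.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # real (known) outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden weights (randomly set)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output weights


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)       # forward pass: input layer -> hidden layer
    output = sigmoid(hidden @ W2 + b2)  # forward pass: hidden layer -> output layer
    error = y - output                  # how far off we are from the real outputs
    # Readjust the weights between neurons in proportion to their share of the error.
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_out
    b2 += 0.5 * d_out.sum(axis=0)
    W1 += 0.5 * X.T @ d_hid
    b1 += 0.5 * d_hid.sum(axis=0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```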

How does it impact (in a good way)?

Marr cited several applications of DL that are currently deployed or in the works. DL’s use in object recognition enhances the development of self-driving cars, while DL techniques aid in the development of medicine “genetically tailored to an individual’s genome”. Closer to the average Joe, DL systems can analyse data and produce reports in natural-sounding human language with corresponding infographics – as seen in some news reports generated by what we casually call “robots”.

Brownlee’s article did not expound much on use-cases. Nevertheless, it highlighted that “DL excels on problem domains where the input (and even output) are analog”. In other words, DL does not need data to come in as numbers in tables, and neither must the data it produces – offering a qualitative dimension compared to conventional data analysis.

Much of the explicit benefits were discussed in the prior posts on Machine Learning and Artificial Intelligence.

What are the issues?

Brownlee recapped the issues DL faced in the 1990s through Geoff Hinton’s slide: back then, datasets were too small, computing power was too weak, and, generally, the methods used to operate it were improper. MATLAB & Simulink pointed out that DL has become useful because the first two factors have seen great improvements over time.

Swanson briefly warned about the issue of using multiple layers in a neural network: “more layers means your model will require more parameters and computational resources and is more likely to become overfit”.

Hof cited points raised by DL critics, chiefly that the development of DL and AI in general has deviated from considering how an actual brain functions “in favour of brute-force computing”. One example came from Jeff Hawkins, on how DL fails to account for the concept of time: human learning (which AI is supposed to emulate) depends on the ability to recall sequences of patterns, not merely still images.

Hof also mentioned that current DL applications are mostly within speech and image recognition, and that extending the applications beyond them would “require more conceptual and software breakthroughs” as well as advancements in processing power.

Many of DL’s other issues are rather similar to those faced by Machine Learning and Artificial Intelligence, which I have captured in previous posts. One recurring theme is how inexplicable the way DL systems arrive at their outputs can be – or in the words of Anderson’s article, “the process itself isn’t scientific”.

How do we respond?

Usually, I would end this section with very forward-looking, society-challenging calls to action – and indeed I did so in the posts on AI and Machine Learning.

But I would like to end with a couple of paragraphs from Anderson in a separate publication, which captured the anxiety about AI in general, and some hope for DL:

“A computer programmed in the traditional way has no clue about what matters. So therefore we have had programmers who know what matters creating models and entering these models into the computer. All programming is like that; a programmer is basically somebody who does reduction all day. They look at the rich world and they make models that they enter into the computer as programs. The programmers are intelligent, but the program is not. And this was true for all old style reductionist AI.

… All intelligences are fallible. That is an absolute natural law. There is no such thing as an infallible intelligence ever. If you want to make an artificial intelligence, the stupid way is to keep doing the same thing. That is a losing proposition for multiple reasons. The most obvious one is that the world is very large, with a lot of things in it, which may matter or not, depending on the situations. Comprehensive models of the world are impossible, even more so if you considered the so-called “frame problem”: If you program an AI based on models, the model is obsolete the moment you make it, since the programmer can never keep up with the constant changes of the world evolving.

Using such a model to make decisions is inevitably going to output mistakes. The reduction process is basically a scientific approach, building a model and testing it. This is a scientific form of making what some people call intelligence. The problem is not that we are trying to make something scientific, we are trying to make the scientist. We are trying to create a machine that can do the reduction the programmer is doing because nothing else counts as intelligent.

… Out of hundreds of things that we have tried to make AI work, neural networks are the only one that is actually going to succeed in producing anything interesting. It’s not surprising because these networks are a little bit more like the brain. We are not necessarily modeling them after the brain but trying to solve similar problems ends up in a similar design.”

Interesting Video Resources

But what *is* a Neural Network? | Chapter 1, deep learning – 3Blue1Brown: https://youtu.be/aircAruvnKk

How Machines *Really* Learn. [Footnote] – CGP Grey: https://www.youtube.com/watch?v=wvWpdrfoEv0

References

What Is The Difference Between Deep Learning, Machine Learning and AI? – Forbes: https://www.forbes.com/sites/bernardmarr/2016/12/08/what-is-the-difference-between-deep-learning-machine-learning-and-ai/#394c09c726cf

What is Deep Learning? – Jason Brownlee: https://machinelearningmastery.com/what-is-deep-learning/

What Is Deep Learning? | How It Works, Techniques & Applications – MATLAB & Simulink: https://www.mathworks.com/discovery/deep-learning.html

What is Deep Learning? – Brittany-Marie Swanson: https://www.datascience.com/blog/what-is-deep-learning

Deep Learning – MIT Technology Review: https://www.technologyreview.com/s/513696/deep-learning/

Want to know how Deep Learning works? Here’s a quick guide for everyone. – Radu Raicea: https://medium.freecodecamp.org/want-to-know-how-deep-learning-works-heres-a-quick-guide-for-everyone-1aedeca88076

Why Deep Learning Works – Artificial Understanding – Artificial Understanding: https://artificial-understanding.com/why-deep-learning-works-1b0184686af6

Artificial Fear Intelligence of Death. In conversation with Monica Anderson, Erik Davis, R.U. Sirius and Dag Spicer – Lauren Huret: https://books.google.com.my/books?id=H0kUDAAAQBAJ&dq=all+intelligences+are+fallible&source=gbs_navlinks_s

From What I Read: Machine Learning

Let’s be up front here: my introduction for the Artificial Intelligence post stole quite a lot of the limelight from the remaining posts in the AI series (since this post covers a subset of AI, and the next post possibly a subset of this one), so I will not bother trying too hard to come up with an introduction with a “bang”.

The other disclaimer is that this post is not what I envisioned a month ago. This is mainly because, as I researched the topic further, I found more and deeper (and did I say more varied?) ways to explain it. And this extends beyond reading materials – there are several kinds of videos out there that aim to explain the subject (at varying levels of edutainment).

But in the interest of time and effort, I will keep the approach to Machine Learning rather layman-ish and bare-bones. I will, however, include links to resources I find interesting (even those that did not end up being used for this post) at the end.

My readings for this post are from Bernard Marr on Forbes, MATLAB & Simulink, Expert System, Yufeng Guo on Towards Data Science, Danny Sullivan on MarTech Today, SAS and several Quora replies.

What is the subject about?

So what is machine learning (ML)?

It is widely acknowledged that ML is a subset of Artificial Intelligence (AI), and so, at a conceptual level, it shares AI’s goal: to have machines mimic human intelligence. At the subset level, as Marr mentioned in his article, ML seeks to “teach computers to learn in the same way we do” – by interpreting information, classifying it, and learning from successes and failures.

An article from MATLAB & Simulink (M&S) concurs, stating that ML is a “data analytics technique that teaches computers to…learn from experience”, adding that this learning method “comes naturally to humans and animals”.

So what do “learning from experience” and “learning from successes and failures” imply? The absence of explicit programming by a programmer, as Expert System’s (ES) article explained, which further added the idea of automation to the learning process.

Guo took a different approach, defining ML as “using data to answer questions” – outlining the idea of training from an input (“data”) and the outcome of making predictions or inferences (“answer questions”). Guo further mentioned that the two parts of the definition are connected by analytical models, something SAS’ article also highlighted.

To conclude this section, we can connect the two approaches to defining ML and sloppily amalgamate them as “a data analytics technique that teaches computers to learn automatically through experience by using data, ultimately to answer questions through inferences and predictions”.

How does it work?

In explaining how ML works, many of the articles reviewed mention the two main types of techniques under ML: supervised learning and unsupervised learning.

As M&S’ article puts it, supervised learning develops predictive models from both input and output data to predict future outputs. Example applications include handwriting recognition (which leverages classification techniques like discriminant analysis and logistic regression) and electricity load forecasting (which uses regression techniques like linear and nonlinear models and stepwise regression).

Unsupervised learning seeks to find hidden patterns or intrinsic structures in input data by grouping and interpreting the data – no output data is involved. It is usually used for exploratory data analysis, with applications in object recognition, gene sequence analysis and market research. The M&S article cited clustering as the most common unsupervised technique, using algorithms such as hierarchical clustering and hidden Markov models. In short, unsupervised learning is good for splitting data into clusters.
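
Here is a minimal sketch of the contrast between the two families, assuming scikit-learn is available; the tiny handwritten-digits dataset stands in for the handwriting-recognition example above, and the choice of models is purely illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # small handwritten-digit images, flattened to 64 pixel values

# Supervised: both the input data (pixels) and the output data (digit labels) are used.
classifier = LogisticRegression(max_iter=5000)
classifier.fit(digits.data, digits.target)
print("predicted label for the first image:", classifier.predict(digits.data[:1]))

# Unsupervised: only the input data; the algorithm groups it into clusters on its own,
# without ever seeing the labels.
clustering = KMeans(n_clusters=10, n_init=10, random_state=0)
cluster_ids = clustering.fit_predict(digits.data)
print("cluster assigned to the first image:", cluster_ids[0])
```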

ES added further dimensions to shed more light on how ML works: semi-supervised ML algorithms (falling between supervised and unsupervised learning, using both labeled data – input data with accompanying output data – and unlabeled data for training to improve learning accuracy) and reinforcement ML algorithms (interacting with the environment by producing actions and discovering errors or rewards, to determine the ideal, optimised behaviour within a specific context).

Sullivan’s article mentioned the three major parts that make up an ML system: the model (the part that makes predictions or identifications), the parameters (the factors the model uses to produce its decisions) and the learner (the part that adjusts the parameters – and hence the model – by looking at differences between predictions and actual outcomes).

This way of explaining the workings of ML systems is similar to how CGP Grey explains it in his video, which I find rather interesting.

Guo outlined 7 steps of ML in his separate article:

  1. Data gathering
  2. Data preparation
  3. Model selection
  4. Model training
  5. Model evaluation
  6. (Hyper)Parameter Tuning
  7. Model prediction

Most of the steps are similar to what Sullivan’s article described or implied, including the training step, which Sullivan described as the “learning part of machine learning” and “rinse and repeat” – this process reshapes the model to refine its predictions.
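
To make Guo’s seven steps concrete, here is one way they might look compressed into a few lines of scikit-learn; the dataset, model and parameter grid are arbitrary stand-ins rather than recommendations.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                      # 1. data gathering (a toy dataset)
X_train, X_test, y_train, y_test = train_test_split(   # 2. data preparation
    X, y, test_size=0.25, random_state=0)

model = KNeighborsClassifier()                         # 3. model selection
search = GridSearchCV(model, {"n_neighbors": [1, 3, 5, 7]}, cv=5)
search.fit(X_train, y_train)                           # 4. training + 6. parameter tuning

print("held-out accuracy:", search.score(X_test, y_test))        # 5. evaluation
print("prediction for one sample:", search.predict(X_test[:1]))  # 7. prediction
```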

Again, this is not a technical post, so I will spare you too much detail. I will, however, include links to a few videos should you be interested.

And finally, on Quora, there is a response that breaks down how ML works in a very relatable manner – machines are trying to do tasks the way we do them, but with near-infinite memory and the speed to handle millions of transactions every second.

How does it impact (in a good way)?

Many of us experience applications of ML unknowingly every day. Take YouTube’s video recommendation system, which relies on algorithms and input data – your search and watch history. The model is further refined with other inputs, such as the “Not interested” button you click on some recommendations and (perhaps) the percentage of each video watched.

And speaking of recommendations, how can we not include the all-too-famous Google search engine and its results recommendations? And speaking of Google, how can we not bring to mind their Google Translate feature which allows users to translate languages through visual input?

So the use cases for ML are certainly prevalent in areas the public at large is familiar with.

M&S outlined several other areas where ML has become a key technique to solve problems, such as credit scoring in assessing credit-worthiness of borrowers, motion and object detection for automated vehicles, tumour detection and drug discovery in the field of biology, and predictive maintenance for manufacturing.

SAS’ article highlighted that ML enables faster and more complex data analysis with better accuracy, while also being able to process the large amounts of data made available by data mining and affordable data storage.

And when ML can do tasks that previously required humans, that means cost savings for the enterprises involved. This, though, provides a nice segue to the next section.

What are the issues?

Now, call me lazy if you want, but as I mentioned earlier: since ML is a subset of AI, several issues faced by AI are also faced by ML, such as the problem of input data quality (both accuracy and freedom from bias) and the difficulty of explaining how a model reached its conclusion (especially if it deploys neural network techniques).

To reiterate from the previous post, we can foresee jobs being displaced as tasks are increasingly automated and taken over more efficiently by ML systems. That being said, the glass-half-full view is that job functions are being augmented and changed – if we can get the workforce to adapt to these new functions, the impact could be minimised.

As ML becomes widely adopted, there will be greater demand for skilled resources. This sounds like a solution to the glass-half-full view mentioned above, but seeing that the field is still relatively new, it will probably mean higher costs and difficulty in acquiring ML expertise, let alone training the existing workforce.

And as ML becomes more widely used, the hunger for data will become ever more insatiable. We as a society will increasingly have to address the question of how much personal data we should be sharing, as Doromal writes in his Quora reply.

But to get to wide adoption, ML needs to be democratised: investments in ML can presently be hefty, which makes it exclusive, with the more advanced systems available only to users who can afford them.

How do we respond?

My answer to this question does not stray far from the one posed in the post on AI. But to add to that, as I mentioned in the earlier section, we as a society need to take a hard look at how we perceive data privacy, since ML depends on the availability of data to form better predictions and inferences.

There is growing interest in ML among companies that have seen the benefits it can reap. Perhaps this high demand will create a greater push for the development and subsequent democratisation of the technology. That said, companies need to find the balance between deploying ML and managing a workforce that may increasingly be made redundant.

The teaching and learning of ML should become more widespread to meet the increased need for such a skilled workforce, while a better level of awareness about ML among individuals will also be needed in the future, so they understand how the automated decisions they face are derived.

ML is unlike the other topics mentioned in this blog in that the technology is already here today, up and running (whereas things like ICOs and even the commercial use of blockchain are still yet to be seen). And as mentioned, its applications have already become rather prevalent. Individuals in society, however, are probably still some way off from a good understanding of ML – but that will likely change soon, as widespread automation increasingly looms on the horizon.

Interesting video resources

How Machines Learn – CGP Grey: https://youtu.be/R9OHn5ZF4Uo

Machine Learning & Artificial Intelligence: Crash Course Computer Science #34 – CrashCourse: https://www.youtube.com/watch?v=z-EtmaFJieY

What is Machine Learning? – Google Cloud Platform: https://www.youtube.com/watch?v=HcqpanDadyQ

But what *is* a Neural Network? | Chapter 1, deep learning – 3Blue1Brown: https://youtu.be/aircAruvnKk

References

What Is Machine Learning – A Complete Beginner’s Guide In 2017 – Forbes: https://www.forbes.com/sites/bernardmarr/2017/05/04/what-is-machine-learning-a-complete-beginners-guide-in-2017/#43fbce66578f

What Is Machine Learning? | How It Works, Techniques & Applications – MATLAB & Simulink: https://www.mathworks.com/discovery/machine-learning.html

What is Machine Learning? A definition – Expert System: https://www.expertsystem.com/machine-learning-definition/

How Machine Learning Works, As Explained By Google – MarTech Today: https://martechtoday.com/how-machine-learning-works-150366

How do you explain Machine Learning and Data Mining to non Computer Science people? – Quora: https://www.quora.com/How-do-you-explain-Machine-Learning-and-Data-Mining-to-non-Computer-Science-people

Machine Learning: What it is and why it matters | SAS: https://www.sas.com/en_my/insights/analytics/machine-learning.html

What is Machine Learning? – Towards Data Science: https://towardsdatascience.com/what-is-machine-learning-8c6871016736

The 7 Steps of Machine Learning – Towards Data Science: https://towardsdatascience.com/the-7-steps-of-machine-learning-2877d7e5548e

5 Common Machine Learning Problems & How to Beat Them – Provintl: https://www.provintl.com/blog/5-common-machine-learning-problems-how-to-beat-them

What are the main problems faced by machine learning engineers at Google? – Quora: https://www.quora.com/What-are-the-main-problems-faced-by-machine-learning-engineers-at-Google

An Honest Guide to Machine Learning: Part One – Axiom Zen Team – Medium: https://medium.com/axiomzenteam/an-honest-guide-to-machine-learning-2f6d7a6df60e

These are three of the biggest problems facing today’s AI – The Verge: https://www.theverge.com/2016/10/10/13224930/ai-deep-learning-limitations-drawbacks

From What I Read: Artificial Intelligence

“Doesn’t look like anything to me.”

If you get the reference of that quote, let’s do a virtual high-five, for I have found another fellow fan of the HBO series, Westworld.

When talking about Artificial Intelligence, or AI, it is often too easy for the layman to think of human-like robots, like those from Westworld. While it is true that robots with high cognitive function do operate on AI, robots are far from being representative of what AI is.

So, what is AI? And like how my other posts go, we will explore how AI works, AI’s impact and issues, and how we should respond.

My readings for this article are mainly from SAS, McKinsey, Nick Heath on ZDNet, Bernard Marr, Erik Brynjolfsson and Andrew McAfee on Harvard Business Review, and Tom Taulli on Forbes.

What is the subject about?

The definition of AI seems to be rather fluid, as some of the articles pointed out. But one thing is for sure: the phrase was first coined by Minsky and McCarthy in their Dartmouth College summer conference paper in 1956. Heath summarised Minsky and McCarthy’s idea of AI as “any task performed by a program or a machine that, if a human carried out the same activity … the human had to apply intelligence to accomplish the task”. (Click here if you are interested in the proposal paper.)

The broadness of that initial definition unfortunately means the debate over what constitutes AI ranges far and wide.

Subsequent definitions have not done much to refine the original. McKinsey refers to AI as the ability of machines to exhibit human-like intelligence, while Marr perceives AI as “simulating the capacity for abstract, creative, deductive thought – and particularly the ability to learn – using the digital, binary logic of computers”.

In short: machines emulating human intelligence.

However, a comment under Heath’s article shed some interesting light on the understanding of AI.

How does it work?

To quote the comment: “AI is a complex set of case statements run on a massive database. These case statements can update based on user feedback on how valid the results are”.

Such a definition is not too far off from a technical definition of AI. Andrew Roell, a managing partner at Analytics Ventures, was quoted in Taulli’s article describing AI as computers being fed algorithms to process data toward certain desired outcomes.

Obviously, two components are required for AI to work: algorithms and data. However, what makes AI different from an ordinary piece of software is the learning component. SAS described AI as working by “combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data”.
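
As a toy illustration of the difference, in the spirit of the commenter’s “case statements that update based on user feedback”, here is a hand-coded keyword rule whose weights are nudged by feedback; the keywords, weights and threshold are all made up.

```python
# A hand-coded "case statement" (keyword rule) whose weights are nudged by user
# feedback. Keywords, weights and the threshold are invented for illustration.
SPAM_WEIGHTS = {"winner": 1.0, "free": 1.0, "prize": 1.0}
THRESHOLD = 1.5


def spam_score(message: str) -> float:
    return sum(weight for word, weight in SPAM_WEIGHTS.items() if word in message.lower())


def record_feedback(message: str, was_spam: bool) -> None:
    """Nudge the weight of each matched keyword up or down based on user feedback."""
    for word in SPAM_WEIGHTS:
        if word in message.lower():
            SPAM_WEIGHTS[word] += 0.2 if was_spam else -0.2


msg = "You are a WINNER, claim your free prize now"
print("spam" if spam_score(msg) > THRESHOLD else "not spam")    # rule-based decision
record_feedback("free shipping on your order", was_spam=False)  # user says this wasn't spam
print(SPAM_WEIGHTS["free"])                                     # weight for "free" nudged down
```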

Of course, the intricacies of AI run wide, considering the various subfields within it, such as Machine Learning, Deep Learning, Cognitive Computing and Natural Language Processing. Some of these topics will be discussed in future posts. For now, suffice it to say that these methods analyse a variety of data to achieve a certain goal.

There is another way to categorise research and development work in AI. Heath and Marr pointed out that there are two main branches. Narrow AI – or, as Marr put it, “applied/specialised AI” – is the more familiar to ordinary people like us through its widespread application (think Apple’s Siri and the Internet of Things): it simulates human thought to carry out specific tasks, learned or taught without being explicitly programmed.

The other branch is “general AI”, or artificial general intelligence (AGI). General AI seeks to fully simulate the adaptable intellect found in humans – that is, to be capable of learning how to carry out vastly different tasks and to reason on wide-ranging topics based on accumulated experience, as Heath and Marr pointed out. Such intelligence requires an enormous amount of processing power to match human cognitive performance, and AGI becoming a reality is rather a story of the distant future. Others, however, argue that given the evolution of processing technology, supported by further development in integrating multiple different narrow AIs, AGI may not be too far away, as indicated by IBM’s Watson.

How does it impact (in a good way)?

Even though AI sounds like a recent buzzword, its applications can be traced back quite a while. As an example, the Roomba (that circular robot vacuum cleaner that whizzes across the room) is an application of AI that leverages sensors and sufficient intelligence to carry out the specific task of cleaning a home – it was first conceived in 2002, 16 years ago at the time of writing. Five years earlier, IBM’s Deep Blue machine defeated world chess champion Garry Kasparov.

As mentioned earlier, the application of narrow AI is widespread, since the scope is to carry out specific tasks. Heath pointed out several use-cases, such as interpreting video feeds from drones carrying out visual inspections of infrastructure, organising calendars, chatbots responding to simple customer queries, and assisting radiologists in spotting potential tumours in X-rays. Brynjolfsson and McAfee, on the other hand, highlighted the advances in voice recognition (Siri, Alexa, Google Assistant) and image recognition (think of Facebook recognising your friends’ faces in your photos and suggesting tags).

You may notice I have left out the cognition side of the applications, which I shall reserve for the Machine Learning article (and others) in the future.

In the world of business, AI may help companies deliver an enhanced customer experience by customising offerings based on data about customer preferences and behaviour, as indicated by McKinsey. In manufacturing, AI may also enable smarter research and development through better error detection, and forecast supply and demand to optimise production.

What are the issues?

Going back to AI’s core components, you will see that one of its main dependencies is data. It goes without saying, then, that quality data produces quality AI, and inaccuracies in the data will be reflected accordingly in the results.

The other issue current AI systems face is that many of them fall under the narrow AI category, which can only carry out specialised, clearly defined tasks. SAS pointed out, for example, that an AI system that detects healthcare fraud cannot also detect tax fraud or warranty claims fraud. The AI system is bound to the defined task and scope on which it was trained.

Brynjolfsson and McAfee’s article identified three risks arising from the difficulty humans have in understanding how AI systems reach certain decisions, given that advanced AI systems like deep neural networks have a complex decision-making process and cannot articulate the rationale behind their decisions even when they have gathered a wealth of knowledge and data. The three risks are: hidden biases derived from the training data provided, reliance on statistical truths over literal truths (which may lack verifiability), and difficulty in diagnosing and correcting errors.

In decision-making, AI systems may fall short on contextualisation – that is, understanding and taking into account the nuances of human culture. Such data is rather difficult to derive, let alone provide for training. That being said, Google Duplex is an indicator of headway being made in overcoming this challenge.

Further into the future, AI systems may lead to high technological unemployment as jobs are made redundant, as Heath implies. This is deemed a more credible possibility than an existential threat posed by AI – a concern raised not merely by science-fiction movies, but by famed and intelligent people like Stephen Hawking and Elon Musk.

In between the two possibilities lie various issues of morals and ethics, such as machine rights, machine consciousness, the singularity, strong AI, and so on. Closer to the present, we are already dealing with an ethical issue in the design of autonomous vehicles (which employ AI systems), commonly known as the “trolley problem”.

How do we respond?

There was a period in time categorised as the “AI winter”. It was the 1970s, and having seen little result from huge investments, public and private institutions pulled the plug on funding AI research, specifically of the AGI kind. It was in the 1980s that AI research was revived, thanks to business leaders like Ken Olsen who realised the commercial benefits of AI and developed expert systems focused on narrow tasks.

Fast forward to today, and AI is pervasive. We may well have been users of AI technology without knowing it. Part of the future imagined in the past is here, and for the most part, life has changed for the better.

Still, there is much room for AI applications in businesses to generate value (although much of the talk focuses on the subfield of machine learning). Companies should realise that, as with desktop computer technology, the resolution of AI’s current flaws and issues, and the subsequent evolution of the technology, can be accelerated by the support that comes with adoption.

However, we as a society may need to wrestle with the grave and philosophical issues posed by AI, answering tough questions about the future of jobs and even the lives of people as AI gradually strengthens. In the midst of it all, ethical concerns continue to hang overhead, awaiting our attention. Perhaps the Partnership on AI, a foundation founded by tech giants like Google, IBM, Microsoft and Facebook, is a good place to start.

References

What is AI? Everything you need to know about Artificial Intelligence – ZDNet: https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence/

What is Artificial Intelligence And How Will It Change Our World? – Bernard Marr: https://www.bernardmarr.com/default.asp?contentID=963

Artificial Intelligence – What it is and why it matters – SAS: https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html

The Business of Artificial Intelligence – Harvard Business Review: https://hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence

What Entrepreneurs Need To Know About AI (Artificial Intelligence) – Forbes: https://www.forbes.com/sites/tomtaulli/2018/05/05/what-entrepreneurs-need-to-know-about-ai-artificial-intelligence/#e4d978543b5d

Artificial Intelligence: The Next Digital Frontier? – McKinsey Global Institute: https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.ashx

iWonder – AI: 15 key moments in the story of artificial intelligence – BBC: http://www.bbc.co.uk/timelines/zq376fr

From What I Read: Blockchain

Blockchain – a topic that has been somewhat overshadowed by its most famous application over the past couple of years, yet could fundamentally change the world as we know it. Ask someone on the street to explain the concept, though, and more likely than not you would be met with blank stares and a stuttered response. So, I will try to communicate some understanding of the concept in 5 main points.

My readings will be from WIRED, Harvard Business Review, Bloomberg, and two other individual authors – Hayley Somerville (https://www.linkedin.com/in/hayleysomerville/) and Tony Yin (https://www.linkedin.com/in/tonyin/). The references will be included below.

What is the subject about?

When it comes to explaining blockchain, the technical definition would obviously involve the words “block” and “chain”. But on a more general and conceptual level, it is explained as an “open, distributed/decentralised, digital ledger of transactions”. For those who may not be familiar, a ledger is like a notebook for recording transactions, a term that was previously more common in the world of accounting (and still is).

So how can a notebook of transactions be open and distributed? While the more intricate details will be explained in the next point, for now we will understand it as such: the ledger is replicated, and an identical copy is stored on each computer that makes up the blockchain network – when there are changes to one copy, all other copies are updated simultaneously.

How does it work?

Harvard Business Review nicely outlined 5 basic principles that explain how blockchain works:

  1. Distributed Database: In a blockchain network (made up of multiple parties and their computers), each party has access to the whole database and its complete history. Distributed also means no single party controls the data or the information; instead, every party verifies the records in the database without any middleman.
  2. P2P Transmission: Instead of going through a central node (point), communication happens between peers (the parties/computers mentioned earlier), where each node stores and forwards information to all other nodes.
  3. Transparency with Pseudonymity: Transparency – every transaction is visible to anyone with access to the system; pseudonymity – each node on a blockchain has a unique alphanumeric address that identifies it (instead of a name), and transactions occur between these addresses.
  4. Irreversibility of Records: Records of completed transactions are linked (hence, the “chain”) to every transaction record that came before them. This way, the transactions are locked in: altering an old record would also require redoing every block added after it, which only gets harder as new transactions attach to the chain. On top of that, various computational algorithms ensure the record is permanent (or super-duper difficult to crack), chronologically ordered, and available to everyone in the network – see the sketch right after this list.
  5. Computational Logic: Users can set up algorithms and rules that automatically trigger transactions between nodes. (This feature will be explained further in the following points about application.)
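
To make principle 4 a little more concrete, here is a minimal Python sketch of how blocks can be linked by hashes so that tampering with an old record is immediately detectable. It is only an illustration of the idea – the names, amounts and structure are made up, and this is nowhere near how a production blockchain is engineered.

    import hashlib
    import json
    import time

    def block_hash(block):
        # Hash everything in the block except the stored hash itself.
        payload = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def make_block(index, transactions, previous_hash):
        # A block records its transactions plus the hash of the block before it.
        block = {
            "index": index,
            "timestamp": time.time(),
            "transactions": transactions,
            "previous_hash": previous_hash,
        }
        block["hash"] = block_hash(block)
        return block

    def chain_is_valid(chain):
        # Every node can re-check the whole ledger on its own: each block's
        # contents must still match its stored hash, and each block must still
        # point at the hash of the block that came before it.
        if any(block["hash"] != block_hash(block) for block in chain):
            return False
        return all(curr["previous_hash"] == prev["hash"]
                   for prev, curr in zip(chain, chain[1:]))

    genesis = make_block(0, [], previous_hash="0" * 64)
    block_1 = make_block(1, [{"from": "A", "to": "B", "amount": 5}], genesis["hash"])
    chain = [genesis, block_1]

    print(chain_is_valid(chain))                   # True
    genesis["transactions"].append("a fake IOU")   # tamper with an old record...
    print(chain_is_valid(chain))                   # ...and validation fails: False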

Now, many may still find it difficult to visualise how this works from the 5 principles, which explains why there are articles such as “Explain Like I’m 5: Blockchain”, and a video of an expert explaining blockchain at 5 different difficulty levels.

In one “Explain Like I’m 5” article, Hayley Somerville used the example of schoolchildren trying to track lunch IOUs (informal notes on who owes what) between each other. The problem: one particular child owed lunches all around after asking for bits and pieces of the other children’s lunches, but did not reciprocate by offering parts of his own lunch. Without an IOU recorded somewhere, he could get away with it. And when the class relied on a central IOU notebook (held by a teacher who conveniently sleeps during recess), the child exploited the sleeping teacher’s fallibility to alter the records in that notebook.

The solution, then, is to invent an electronic IOU notebook via a mobile app used by the whole class, where every time someone adds an IOU, it goes to everyone’s phone at the same time (and no one can change the truth, because everyone knows the truth – the same list of all the IOUs, and which phone number each IOU came from). Every time an IOU is added, everyone’s app verifies it, and when enough of the apps agree that the IOU is legit, it is stored as a ‘block’ in everyone’s digital notebook.

The other feature is that these blocks are linked, so that no matter how many times the IOU is exchanged (let’s say A owes B, and B owes C – allowing C to claim from A), the origin of the lunch can be traced back. And to encourage participation, the whole class agrees that each time x number of IOU blocks have been verified, those apps that did the verification will get a chance for a treat.
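
And as a toy illustration of the “enough of the apps agree” step in the analogy (the verdicts and the simple majority rule below are invented for this sketch – real blockchain networks use far more elaborate consensus mechanisms):

    def accept_iou(verdicts, quorum=0.5):
        # A new IOU becomes a 'block' in everyone's notebook only if more than
        # `quorum` of the class's apps independently agree that it is legitimate.
        return sum(verdicts) / len(verdicts) > quorum

    # Four of the five phones check the new IOU against their own copy of the
    # notebook and find it consistent; one does not.
    print(accept_iou([True, True, True, False, True]))  # True: the IOU is added everywhere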

(But seriously, how does it work?)

For a more technical way to explain blockchain, you may want to check out this video (which I previously mentioned).

How does it impact (in a good way)?

Now, the first application of blockchain technology was Bitcoin, created by Satoshi Nakamoto (a pseudonymous figure whose identity is still a mystery). Since then, a flood of cryptocurrencies has been born, aiming to substitute traditional means of transacting money (that is, through the “central nodes” of banks), which would also mean lower transaction costs and faster transaction speeds.

So far in this post, I have been using the word “transaction”, which may confine the idea to mere financial transactions. In the world of blockchain, though, simply replace the word “transaction” with “information”, and the range of blockchain’s applications becomes almost endless.

According to WIRED, the technology’s biggest advocates believe that blockchains can replace central banks (through cryptocurrencies) and “usher in a new era of online services outside the control of internet giants such as Facebook and Google”. The example they cited was Storj, a startup offering a file-storage service by distributing files across a decentralised network.

Also, since no single entity has a monopoly over the validity of transactions (as Tony Yin pointed out), there is no single point of failure and no one can cheat the system. This opens up potential applications in corporate compliance.

Going a step further into the future, WIRED pointed out that our digital identities could be tied to a token on a blockchain, and we would use this token (permanent and verified to be true) to log in to apps, open bank accounts, apply for jobs and even verify messages. And since the blockchain cannot be tampered with, there are ideas of using blockchains to handle even voting.

Other than that, blockchains can also help in automating tasks. The WIRED article used the example of a will, which can be stored on the blockchain (hence replacing notaries) and even be made into a smart contract that automatically executes the will and disburses money to the heirs. A smart contract is a software application that can enforce an agreement without human intervention.
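
To illustrate the idea of a self-executing agreement, here is a small Python sketch. Real smart contracts are typically written for a blockchain platform such as Ethereum rather than in Python, and the estate, heirs and addresses below are entirely made up – this only models the “condition met, payout happens automatically” behaviour.

    class WillContract:
        """A toy stand-in for a smart contract: once the agreed condition is met,
        the payout rule runs by itself, with no notary or executor in the loop."""

        def __init__(self, estate, heirs):
            self.estate = estate        # funds held by the contract
            self.heirs = heirs          # {heir_address: share}, shares summing to 1.0
            self.condition_met = False
            self.executed = False

        def register_event(self):
            # On a real chain this fact would come from a trusted data feed (an "oracle").
            self.condition_met = True

        def execute(self):
            if not self.condition_met or self.executed:
                return {}
            self.executed = True
            return {heir: self.estate * share for heir, share in self.heirs.items()}

    will = WillContract(estate=90_000, heirs={"0xHEIR_A": 0.5, "0xHEIR_B": 0.5})
    will.register_event()    # e.g. the death is officially recorded
    print(will.execute())    # {'0xHEIR_A': 45000.0, '0xHEIR_B': 45000.0}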

So far, the immense potential of blockchain applications makes it sound like the future is already here. However, it may not arrive as quickly as we expect.

What are the issues?

In the readings, I can generally summarise the main issues into three points:

  1. Adoption requires time

    Harvard Business Review, in its article, compared the adoption of blockchain to the adoption of the TCP/IP protocol. And if that protocol sounds familiar, it is because you ARE on it right now – it underpins the internet. Bear in mind, the technology was introduced in 1972, and its first single-use case was email among researchers on ARPAnet. It took a long stretch of time and concerted effort, passing through the 4 phases identified by the article, before the protocol transformed into the internet we know today (examples in parentheses): single use (the emails on ARPAnet), localisation (private e-mail networks within organisations), substitution (Amazon’s online bookshop replacing traditional brick-and-mortar stores), and finally transformation (Skype, which changed telecommunications).

    And this is what Harvard Business Review argues: blockchain, as a foundational technology like TCP/IP, would also need to go through these 4 phases: single use (Bitcoin payments, which we now see), localisation (private online ledgers to process financial transactions, still very much in development), substitution (retailer gift cards based on bitcoin), and transformation (self-executing smart contracts). Furthermore, the article suggested two dimensions affecting the evolution of both technologies: novelty (how new it is, which also determines how much effort is required to ensure users understand what problems the technology solves) and complexity (how much coordination is required to produce value with the technology).

    In short, it will take a while, even with the rapid pace of technological transformation, because users need time to cope with it.

  2. Decentralised means less-to-no control

    When cryptocurrencies gained traction (and the idea of them potentially replacing fiat currencies gained steam), central banks were generally squeamish (or wary), given that cryptocurrencies, which leverage the blockchain, have no central bank to speak of; the authorities therefore have little to no influence or control over how cryptocurrencies behave and over their impact on normal fiat currencies. Such fears are also echoed by companies that want to keep a certain amount of control over how information is kept, which leads to the next point of…

  3. Open means less-to-no privacy

    At the moment, several financial institutions have begun experimenting with blockchain technology (examples in the next section). But these experiments involve creating “private” blockchains which run on the servers of a single company and other selected partners. This stands in contrast with the blockchains that Bitcoin and Ethereum operate on, where anyone can view all of the transactions recorded on the network. Perhaps this is indeed the next phase of foundational technology evolution spoken of earlier – localisation.

    On a more futuristic level, if we reach the point where we actually have a digital identity on a blockchain, it would mean all of our data is in public view for everyone. And if a nation’s government were to create such a blockchain, it might well try to remove pseudonymity from the picture in the name of national security.

    On a less futuristic front, there are already privacy issues raised against companies that use blockchain. Bloomberg’s article pointed out that under the European Union’s General Data Protection Regulation, companies are required to “completely erase the personal data” of any citizen upon request. Some blockchains’ designs may even be incompatible with the said regulation.

On a technical front, some may point out the issue of preventing double spending, or resolving a conflict about a certain transaction in the ledger. To this end, according to Tony Yin, blockchain technology by itself does not solve the problem; rather, the implementation does, via the blockchain’s proof-of-work – that is, how the solution to the underlying problem is verified. (If you are wondering what problem needs to be solved, remember that the blocks are secured with mathematical puzzles.)
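
For a feel of what proof-of-work involves, here is a heavily simplified Python sketch: finding a valid answer (the nonce) takes many hashing attempts, while verifying someone else’s answer takes just one. Real schemes hash a structured block header, adjust the difficulty over time and reward the finder; the transaction string here is made up.

    import hashlib

    def proof_of_work(block_data, difficulty=4):
        # Search for a nonce such that the hash of (data + nonce) starts with
        # `difficulty` zeros. Finding the nonce takes many attempts; checking a
        # proposed answer takes a single hash, which is what other nodes do.
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1

    nonce, digest = proof_of_work("A pays B 5 coins")
    print(nonce, digest)  # any node can verify this answer with one hash call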

On the perception front, there may be concerns about hacking, following several cases of cryptocurrency exchange and ICO hacks (e.g. Bitcoin’s Mt. Gox and the DAO hack mentioned in the previous ICO post). However, look deeper into these incidents and you will find that while the front-end exchanges suffered the attacks, the underlying technology remained intact. Thus, it is important to separate front-end interfaces from the underlying technology when discussing blockchain’s security.

How do we respond?

Indeed, some organisations have already begun their blockchain journey. According to the WIRED article, the Australian Securities Exchange announced a deal to use blockchain technology from a Goldman Sachs-supported startup for post-trade processes in the country’s equity market. There are also reports of JPMorgan and the Depository Trust & Clearing Corp experimenting with blockchain technology to improve the efficiency of trading stocks and other assets. These examples show that blockchain can be applied to existing problems of slow transfers beyond payments and remittances.

Harvard Business Review suggested that company executives “ensure their staffs learn about blockchain”, consider developing company-specific applications based on the 4 phases identified, and invest in blockchain infrastructure.

On a broader level, we as a society will have to address more fundamental questions that blockchain technology poses as it reaches greater scale, such as how we perceive data privacy, and what it means to have less central control in a decentralised world.

Meanwhile, the development journey of this technology has only one way to go – up. And we have to embrace the transformation and changes that come along with that development, by getting ourselves more educated about the subject matter, and considering how we can leverage the technology to make lives better.

References

What Is Blockchain? The Complete WIRED Guide – WIRED: https://www.wired.com/story/guide-blockchain/

The Truth About Blockchain – Harvard Business Review:  https://hbr.org/2017/01/the-truth-about-blockchain

Is Your Blockchain Business Doomed? – Bloomberg: https://www.bloomberg.com/news/articles/2018-03-22/is-your-blockchain-business-doomed

Explain Like I’m 5: Blockchain (an easy explanation of the technology behind Bitcoin) – Hayley Somerville: https://www.linkedin.com/pulse/blockchain-5th-graders-hayley-somerville/

Blockchain, Bitcoin, and Ethereum ELI5 (Explained Like I’m Five) – Tony Yin: https://tonyy.in/blockchain-eli5/

Featured Image: By B140970324 (Own work) [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons

From What I Read: Initial Coin Offerings

Before beginning with the essay/article, I’ll give a short description of what From What I Read is about: a monthly summary write-up (in 5 main points) of several articles and publications on a certain topic.

My readings on the topic of Initial Coin Offerings (ICOs) are from an article each in The Edge Weekly, Blockgeeks Inc, The New York Times, and MIT Technology Review. Detailed references with links will be included at the bottom.

What is the subject about?

From the readings, several main keywords appear: “like IPO, but not shares” and “crowdfunding”. And you get the gist: an ICO is a way to raise funds in which investors receive digital tokens in exchange for currencies – usually cryptocurrencies like bitcoin and ether.

ICOs are mostly issued by start-ups to fund the development of their projects, which mostly leverage blockchain technology. Even the ICO process itself uses the blockchain.

Anyone who holds bitcoin (BTC) or ether (ETH) can invest in an ICO, which gives ICOs their crowdfunding nature. And like an IPO, start-ups sell a portion of pre-created tokens to these early backers of their projects (while usually allocating a portion of tokens to themselves). But make no mistake: these tokens are not shares of the start-up, but rather digital assets that enable the holder to access the new product from the project. This too will be elaborated further in the coming sections.

How does it work?

The following write-up assumes the reader has some foreknowledge of cryptocurrencies and blockchain technology. If you do not, you may learn about them here, as well as from many other sites. UPDATE: I have since written a post on this site too.

While the concept of raising funds is similar to an IPO, an ICO and its token operate like a cryptocurrency: an ICO usually has a hard cap on the number of coins in existence; the token is stored, transacted and enabled on the blockchain, which verifies transactions and serves as a record of their legitimacy; tokens can be sold and traded on cryptocurrency exchanges if there is demand for them; and there are miners of these tokens who maintain the functioning of the blockchain.

In the example of Filecoin, token holders get a service (cloud-storage space), miners earn tokens for providing storage or retrieving stored data for users, and the tokens are the method of payment for storage.

However, most ICOs do not have a complete product as of this writing, so people buy tokens to speculate on the future value of the service.

Many ICOs leverage the Ethereum blockchain, since it “unleashed the power of smart contracts” with its ERC-20 standard. Prices of ICO tokens are usually independent of BTC and ETH, despite leveraging the blockchains behind them.
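
As a rough sense of what such a token actually tracks, here is a minimal Python sketch. The real ERC-20 standard is an interface for contracts deployed on Ethereum (with functions such as transfer and balanceOf); this sketch only mirrors the bookkeeping, and the supply figure and addresses are invented.

    class SimpleToken:
        """A Python-only imitation of the bookkeeping an ERC-20-style token does:
        a hard-capped supply and a balance per address."""

        def __init__(self, total_supply, issuer):
            self.total_supply = total_supply         # the hard cap: no more tokens exist here
            self.balances = {issuer: total_supply}   # the issuing project starts with the full supply

        def transfer(self, sender, recipient, amount):
            if self.balances.get(sender, 0) < amount:
                raise ValueError("insufficient balance")
            self.balances[sender] -= amount
            self.balances[recipient] = self.balances.get(recipient, 0) + amount

    # The ICO itself is then, in essence, a series of transfers from the issuer
    # to early backers in exchange for BTC or ETH sent separately.
    token = SimpleToken(total_supply=1_000_000, issuer="0xSTARTUP")
    token.transfer("0xSTARTUP", "0xBACKER_1", 10_000)
    print(token.balances)   # {'0xSTARTUP': 990000, '0xBACKER_1': 10000}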

Extra: To watch how an ICO is launched, Bloomberg has a video on that.

How does it impact (in a good way)?

For start-ups, an ICO is a means to raise funds without selling stock or going to venture capitalists, which means more control for the developers throughout the project. The developers can also protect the open-source nature of the eventual product, since there are no owners to speak of. And since ICOs are global through the blockchain, issuers are able to raise funds worldwide – and have proven able to raise more than conventional methods such as VC funding.

ICOs function as a “decentralised” enterprise that could enrich anyone who holds or mines the token upon the success of the project, and not just its executives and developers. ICOs also shape the blockchain space by exploring ways to connect blockchain applications with the token, and by leveraging smart contracts to add more features to these tokens.

What are the issues/challenges?

The notoriety of ICOs stems from the fact that they are unregulated, which invites fraudsters and scammers to take advantage of the (recent, as of writing) hype around cryptocurrencies to “make easy money or pull pranks”. And since there is no central authority collecting user information for ICOs globally, individual investors bear pretty much the totality of the responsibility, without much legal recourse should the ICO they invested in fail, be lost or stolen, or simply turn out to be a scam.

On the regulatory front, ICOs present a grey area, since some tokens resemble buyer-seller relationships while others function more like stocks. China and South Korea have outright banned ICOs, while the U.S. has ruled that several tokens be regulated as stocks and governed under securities laws. An increase in regulation would, on the other hand, increase the cost and effort required for start-ups to comply.

ICOs are also prone to hacker attacks; in one prominent case, $80 mil worth of DAO tokens were stolen. Since the transactions are blockchain-based, they are also, unfortunately, irreversible. Basic coding errors can be exploited by hackers to steal tokens too.

Investors should also bear in mind that investing in ICOs is essentially investing in start-ups, and since most start-ups eventually fail, this presents a heightened risk for ICOs. According to one article, about 80-90% of all ICOs are questionable.

Furthermore, in almost every ICO the service has not been fully developed, and hence an ICO investment is a bet that the promised service will be completed. The promised service is described in a white paper, upon which the valuation of the ICO is based: these start-ups may have no customers, revenue or working product when issuing an ICO.

How do we respond?

The article in The Edge Weekly basically summed up its advice in two words: caveat emptor. The burden falls upon individual investors to do their due diligence and assess the viability of a project that issues an ICO. The investor should question the start-up’s rationale for raising funds through an ICO: is blockchain technology even necessary for the idea to work? The investor should also investigate the background of the ICO issuers, and read the white paper of the token that funds the project (and even so, there is still a risk that what was promised on paper, and even in sample code, may be bogus).

Since hacker attacks are more prevalent on cryptocurrency exchanges, the article advises investors to use private wallets to store their ICO tokens instead of leaving them on an exchange. Ultimately, one should have prior experience in buying and keeping cryptocurrencies before investing in ICOs.

All the readings acknowledge voices warning that ICOs are a bubble, and that caution should therefore be exercised. However, this innovation in fundraising seems set to stick around for the next few years, and we as a society, whether through regulators or otherwise, should decide how to leverage it moving forward.

References:

“Much Ado About ICO”, The Edge Weekly – Personal Wealth (19 February 2018 edition): https://www.theedgemarkets.com/article/cover-story-much-ado-about-icos

An Explanation of Initial Coin Offerings – The New York Times: https://www.nytimes.com/2017/10/27/technology/what-is-an-initial-coin-offering.html

What the Hell Is an Initial Coin Offering? – MIT Technology Review: https://www.technologyreview.com/s/608799/what-the-hell-is-an-initial-coin-offering/

What is An Initial Coin Offering? Raising Millions In Seconds – Blockgeeks Inc: https://blockgeeks.com/guides/initial-coin-offering/