5G: the definitive deal-breaker or just hot air of hype?

By now (as at the time of writing in December 2019), unless you have been living under a rock for the past couple of years, you would be no stranger to the buzz about 5G telecommunication technology. In fact, as alluded to in my 2018 year-ender post, 5G trials have been conducted in many places; to update on that, South Korea’s 5G services went live in April 2019, making it one of the first countries to roll out 5G, while selected cities in the UK and US have seen 5G services provided by telcos. There is even talk that the development of 6G is underway, even though 5G has yet to see a mass-scale rollout.

Even so, given that little is known about 6G at the moment, and that 5G is still very much relevant, a brief discussion of 5G technology is, in my opinion, warranted.

The Subject

What is 5G? Common sense would lead to the sensible inference that 5G is an upgrade from 4G, which in turn evolved from its predecessor, 3G – and so on. (After all, it is so named because it is the fifth generation of wireless network technology.)

In fact, most introductory materials on 5G at least do a short recap of how mobile wireless telecommunication technology advanced from one “G” to the next. The first generation was developed in the late 1970s and 1980s, with analog radio waves carrying unencrypted voice data – WIRED chronicled that anyone with off-the-shelf components could listen in on conversations.

The second “G” arrived in the 1990s; its digital systems not only made voice calls more secure but also provided more efficient data transfers. The third generation increased the bandwidth for data transfers, which allowed for simple video streaming and mobile web access – this ushered in the smartphone revolution as we know it. 4G came along in the 2010s with the exponential rise of the app economy driven by the likes of Google, Apple and Facebook.

The development of 5G was a long time coming, though – work on the core specifications began in 2011 and was not completed until some seven years later. But what does 5G entail?

The Pros and Problems

From the outset, 5G promised much greater connection speeds – around 10 gigabits per second (about 1.25 gigabytes per second). Such (theoretical) speeds are some 600 times faster than average 4G speeds on today’s mobile phones. That said, current 5G tests and roll-outs show that in practice 5G speeds vary around 200 to 400 megabits per second. One noteworthy test is Verizon’s 5G coverage in Chicago, where download speeds were shown to reach nearly 1.4 gigabits per second – though the same may not be true of upload speeds, and the other major caveat is the limited signal range.
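To put those headline figures in perspective, here is a quick back-of-the-envelope calculation in Python (a sketch only: the 10 Gbps figure is the theoretical peak cited above, and the average 4G speed is simply what the “600 times” comparison implies):

```python
# Back-of-the-envelope check of the 5G speed claims above.
peak_5g_gbps = 10                          # theoretical 5G peak, in gigabits per second
peak_5g_gigabytes = peak_5g_gbps / 8       # 8 bits per byte -> 1.25 gigabytes per second

implied_avg_4g_mbps = peak_5g_gbps * 1000 / 600   # "600x faster" implies roughly 16.7 Mbps on 4G

practical_5g_mbps = (200, 400)             # range reported in current tests and roll-outs

print(f"{peak_5g_gbps} Gbps is about {peak_5g_gigabytes:.2f} GB/s")
print(f"Implied average 4G speed: ~{implied_avg_4g_mbps:.1f} Mbps")
print(f"Practical 5G so far: {practical_5g_mbps[0]}-{practical_5g_mbps[1]} Mbps")
```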

5G primarily uses millimeter wave technology (the range of the wireless spectrum between 30 GHz and 300 GHz), which can transmit data at higher frequencies and faster speeds, but faces a major drawback in reliably covering distances due to its extremely short wavelength. With a dash of humour, this resulted in TechRadar having to perform a “5G shuffle” dance around the 5G network node when testing out the high speeds in Chicago. On a more concrete note, this exemplifies the difficulty of rolling out 5G: massive numbers of network points need to be deployed, meaning new network infrastructure may be required, vastly different from the existing infrastructure that supports 4G and earlier network technologies.
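The “millimeter wave” label follows directly from the physics: wavelength is the speed of light divided by frequency, so the 30 GHz to 300 GHz band corresponds to wavelengths of roughly 10 mm down to 1 mm. A quick check:

```python
# Wavelength = speed of light / frequency; 30-300 GHz maps to about 10 mm down to 1 mm.
SPEED_OF_LIGHT = 3.0e8  # metres per second (approximate)

for freq_ghz in (30, 300):
    wavelength_mm = SPEED_OF_LIGHT / (freq_ghz * 1e9) * 1000
    print(f"{freq_ghz} GHz -> wavelength of about {wavelength_mm:.1f} mm")
```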

But just as we have overcome various adversities and challenges in technology before, and assuming we can figure out and execute a viable 5G network deployment (not a far-fetched assumption at the current rate of progress), 5G also means a far greater number of devices can be served in near real time. This would accelerate advancement in the Internet of Things, where more internet-connected devices and sensors can provide real-time data and execution – benefiting consumers and, even more so, various industries. It also means autonomous vehicles, which rely heavily on real-time internet connectivity, would probably see the light of day in terms of realisation and adoption.

However, the conversations about 5G in the past couple of years have centred on the “whos” and “hows” of infrastructure deployment. On this front, Chinese companies have reportedly outpaced their American counterparts in perfecting 5G network hardware. While China’s rate of technological advancement should no longer come as a surprise to anyone, the achievements come in light of recent concerns that these Chinese companies were involved in state-backed surveillance by introducing backdoors into their network equipment. Governments in the UK and US have been grappling with the 5G infrastructure dilemma: use infrastructure from Chinese companies and risk creating vulnerabilities exploitable by a foreign power, or develop their own infrastructure, which takes time and may not catch up with the economic powerhouse in time.

The other issue with the impending worldwide 5G roll-out is consumer device compatibility. As when 4G was first introduced, there are currently (in 2019) very few devices with 5G capabilities. And among those that do have such capabilities, CNET reported that these phones are limited to the millimeter wave spectrum band currently used for 5G, and will not work on other spectrum bands if and when those are incorporated into 5G in the future. The device compatibility problem may evolve and resolve itself as 5G deployment progresses.

The Takeaways

The discussion about 5G is somewhat overdue, in the sense that mass roll-out of the infrastructure is already underway or in the works, albeit on each country’s own timeframe. In fact, Malaysia is set to deploy 5G as early as the second half of 2020, according to one report. It is therefore imperative for industries to explore how 5G technology may be leveraged to achieve greater efficiency and effectiveness.

And whilst 5G is just about to take off, there are already discussions about 6G, with research on the sixth generation of network technology being initiated by the Chinese government, a Finnish university, and companies such as Samsung. Going by how long each generation’s technology takes to develop, we would probably see 6G taking shape in the 2030s.

As for the question set out in the title: is 5G a deal-breaker? Yes, but it was not one in 2019, and maybe not yet in 2020. Mass infrastructure deployment must be complemented with industrial applications and use cases in order to fully reap the potential 5G possesses. But as we have observed with previous waves of technology, this will inevitably fall into place – it is only a question of “when” and “how soon”.

With that, I shall end this post, which serves as the last post for the year (and the decade, if you are of the opinion that decades should begin with a 0 instead of 1).

And if I do continue writing this blog, see you next year (and next decade).

Robotic Process Automation: What Is It, and What It Brings

Whenever the term Robotic Process Automation (RPA) is mentioned, it is not hard to conjure images of cold, mechanical machines doing physical labour and rendering human workers redundant. Such a perception, however, could not be further from the truth – not just because the word “robotic” can be misleading, but also because of the lack of understanding of what RPA is beyond the headlines.

The Subject

So what is RPA? Unlike many other topics discussed on this site, there is one specific, official definition published by a governing authority (in this case, a diverse panel of industry participants). According to the IEEE Guide for Terms and Concepts in Intelligent Process Automation published by the IEEE Standards Association, RPA is defined as a “preconfigured software instance that uses business rules and predefined activity choreography to complete the autonomous execution of a combination of processes, activities, transactions, and tasks in one or more unrelated software systems to deliver a result or service with human exception management”.

Now, the problem with standard definitions is that the meaning can often be lost in a sea of words. One site that cited this definition had to include a simpler analogy: software robots that mimic human actions.

This, however, should not be confused with Artificial Intelligence (AI), which the same site likened to human intelligence being simulated by machines. In fact, RPA sits below AI on a doing-thinking continuum, where RPA is more process-driven whereas AI is data-driven.

(Figure: a doing-thinking continuum, with robotic process automation in the middle-left under “process-driven”, and artificial intelligence on the far right under “data-driven”.)

So how does RPA work? Several sites have pointed out that RPA evolved from several earlier technologies. The most commonly cited is screen scraping: the collection of data displayed on screen, usually from a legacy application, into a more modern interface. Another is (traditional) workflow automation, where a list of actions is programmed into software to automate tasks while interacting with back-end systems through application programming interfaces (APIs) or scripting languages.

RPA, having evolved from those technologies, builds its list of actions by monitoring users performing the task in the Graphical User Interface (GUI), and then performs the automation by repeating those tasks on the GUI. Furthermore, RPA does not require a physical screen to operate, as the actions can take place in a virtual environment.
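As a rough illustration of this “watch the GUI, then replay it” idea, here is a minimal sketch of a GUI-level bot using the pyautogui library; the screenshot file name, field layout and invoice workflow are hypothetical placeholders, not any real system referenced above:

```python
# Minimal sketch of a GUI-level "software robot": it locates an on-screen element
# and replays the clicks and keystrokes a human user was observed performing.
import pyautogui

def submit_invoice_number(invoice_number: str) -> None:
    # Find the (hypothetical) search button by matching its screenshot, then click it.
    button = pyautogui.locateCenterOnScreen("search_button.png")
    if button is None:
        raise RuntimeError("UI has changed - the bot no longer recognises the screen")
    pyautogui.click(button)

    # Type the invoice number into the focused field and submit, just as a user would.
    pyautogui.typewrite(invoice_number, interval=0.05)
    pyautogui.press("enter")

if __name__ == "__main__":
    submit_invoice_number("INV-2019-0001")
```

Notice how the bot depends on the screen looking exactly as it did when the steps were recorded – which is precisely the brittleness discussed further below.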

The Pros and Cons

It is not too hard to look at the continuum above (also called the “Intelligent Automation Continuum”, albeit a simplified one) and relate the benefits and risks to those already discussed for Machine Learning and Artificial Intelligence. However, since RPA is process-driven rather than data-driven, its benefits differ as well.

Multiple sources cited the benefit of greater efficiency, as RPA is able to conduct repetitive tasks quickly, around the clock and with minimal error. With such efficiency, organisations that use RPA may reap cost savings on staffing, since such tasks no longer require the same headcount.

Some sites were more subtle about the message of reduced staffing, pointing out that RPA may free staff from monotonous, repetitive tasks to take on more productive, higher-value work requiring creativity and decision-making, or exploring the opportunity for people to be re-skilled into new jobs in the new economy.

But just like many other topics discussed on this site, human worker redundancy is the elephant in the room. According to estimates from Forrester Research, RPA software could displace 230 million or more knowledge workers, about 9 percent of the global workforce. Furthermore, in some cases re-skilling displaced workers may not be within an adopting organisation’s consideration, since there may not be as many new jobs available for these displaced workers, not to mention that re-skilling may negate the cost savings achieved. That said, many organisations have already resorted to Business Process Outsourcing (BPO) for the very tasks RPA is suited to, and hence displacement may be more serious in BPO firms.

Another benefit of RPA cited by certain sites is that it can be used without huge customisation to systems and infrastructure. Since RPA is generally GUI-based, it does not require deep integration with systems or alterations to the infrastructure, and is supposedly easy to implement. In fact, automation efforts can be boosted by combining RPA with other cognitive technologies such as Machine Learning and Natural Language Processing.

That being said, RPA’s dependence on systems’ user interfaces carries a risk of obsolescence. RPA interacts with the user interface exactly as it was monitored or programmed to do, and when the interface changes, the automation breaks down. Remember, too, that RPA relies on the exactness of data structures and sources, which makes it rather inflexible. This inflexibility is a stark contrast to how easily humans can adjust their behaviour to changes as they arise.

Then there are APIs. Modern applications usually expose APIs, which are a more “resilient approach” to interacting with back-end systems to automate processes, compared with the brittleness RPA faces from the limitation described earlier. Furthermore, APIs may be the more favourable option in an end-to-end straight-through processing ecosystem involving multiple operating systems and environments.
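For contrast, the same sort of task done against a back-end API rather than the GUI might look like the sketch below, using the common requests library; the endpoint, token and response shape are made up purely for illustration:

```python
# The API route: no screens and no pixel-matching - just a structured request to a
# (hypothetical) back-end endpoint, which keeps working even if the user interface changes.
import requests

def fetch_invoice(invoice_number: str) -> dict:
    response = requests.get(
        f"https://erp.example.com/api/v1/invoices/{invoice_number}",  # hypothetical endpoint
        headers={"Authorization": "Bearer <token>"},                  # placeholder credential
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(fetch_invoice("INV-2019-0001"))
```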

The Takeaway

There are many use cases for RPA these days, so it is not exactly a new topic. And with the criticism of its dependence on features that may change or become obsolete, RPA may not seem as alluring as it once did. In fact, a common rule of thumb is to consider whether processes could be handled straight-through with existing capabilities before resorting to RPA.

Organisations should identify tasks where RPA can be applied and remain relevant in the years to come before making the decision. Others advise a more broad-based approach to investing in automation – considering the whole continuum instead of expecting RPA to be the silver bullet for operational efficiency.

As for the redundancy problem, it has been a recurring theme in this age of digitalisation. To reiterate several posts written here: society as a whole needs to confront these issues and answer grave, philosophical questions concerning human jobs and roles in the future. It is an essential discourse, and unfortunately one that is not happening with due significance. And if we take reference from history, doing little is simply equivalent to a Luddite approach.

Fourth Industrial Revolution vs. Industry 4.0: Same but different?

It is the new year of 2019, and for the first post of the year I would like to write about a key concept that will set the underlying tone for the year(s) ahead, and the backdrop for some of the topics I may be featuring on this blog this year (some having already been featured in 2018). The terms “Fourth Industrial Revolution” and “Industry 4.0” have become buzzwords of the century, going from mere jargon used by management consultants to widespread use across various industries.

But as I came to realise, even though we hear the terms “Fourth Industrial Revolution” and “Industry 4.0” used interchangeably, the equivalence might not be as strong as we perceive.

With that, let’s get right into what these terms entail.

Fourth Industrial Revolution

The term “Fourth Industrial Revolution” (4IR) was first coined by World Economic Forum founder and executive chairman Klaus Schwab in 2015, when he compared today’s technological progress with the industrial revolutions that came before.

Source: https://online-journals.org/index.php/i-jim/article/viewFile/7072/4532

Of course, there are various definitions and descriptions of the 4IR, but they all point to an industrial transformation “characterized by a range of new technologies that are fusing the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even challenging ideas about what it means to be human”, as the World Economic Forum puts it.

Schwab has highlighted that the 4IR is not a mere extension of the Third Industrial Revolution, which spanned from the 1960s to the end of the 20th century, as it differs in velocity, scope and systems impact. He pointed out that the speed of technological progress under 4IR would be exponential rather than linear, that the scope is wider with every industry being affected, and that it would impact systems of production, management and governance in a transformative way.

Among the technologies cited under 4IR are “artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing”.

Schwab has also identified the opportunities and challenges that underlie 4IR. 4IR may bring about an improvement in global income levels and quality of life for people across the world through greater access to affordable digital services, and the efficiencies brought by these services. On the point of efficiency, businesses also stand to benefit from more effective communication and connectivity through technological innovation under 4IR.

However, 4IR raises concerns about widening existing inequality. Given that automation and digitalisation can displace workers – and in some cases already have – this might “exacerbate the gap between returns to capital and return to labour”. In other words, low-skilled workers, who generally come from the poorer segments of society, would increasingly face scarce job opportunities, while the owners of capital (in this case, the automation, robotics and digital systems) – mainly innovators, shareholders and investors – would exemplify the adage that “the rich get richer”. Considering how much of the anxiety and discontent of the current age is fuelled by inequality, perceived or otherwise, this is certainly a growing problem.

Nonetheless, Schwab reminds us that all of us are responsible for guiding this evolution, and that 4IR can complement the “best of human nature”, bringing the human race to a new level of moral consciousness based on a shared sense of destiny.

Industry 4.0

So how is Industry 4.0 different from 4IR? To begin with, the term comes from a different time and place.

The term Industry 4.0 has its origins in Germany’s Industrie 4.0, a high-tech strategy by the German government to promote technological innovation in product and process technologies within the manufacturing industry. The Malaysian Ministry of International Trade and Industry defines Industry 4.0 as “production or manufacturing based industries digitalisation transformation, driven by connected technologies”.

At the core of Industry 4.0 lie several foundational design principles:

  1. interconnection between people, systems and devices through the Internet of Things;
  2. information transparency to provide users with plenty of useful information to make proper decisions;
  3. technical assistance to support the aggregation and visualisation of information for real-time decision making and problem solving, as well as to help users carry out tasks that are unfeasible for humans;
  4. decentralised decisions made autonomously.

The design principles imply that Industry 4.0 heavily involves specific types of technology, mainly those for achieving interconnectivity and automation. Cleverism cited research that identified the main types of technology involved in Industry 4.0, outlining four main components: Cyber-Physical Systems, the Internet of Things, the Internet of Services and the Smart Factory.

Given that the Internet of Things has been widely discussed (even on this site), let’s take a brief look at what the other terms entail:

  • Cyber-Physical Systems aim to integrate computation and physical processes, where physical processes can be monitored by devices over a network. Developing such systems involves the unique identification of objects throughout processes, the development of sensors and actuators for the exchange of information, and the integration of those sensors and actuators.
  • The Internet of Services looks at how connected devices (under Internet of Things) can become an avenue of value creation (and revenue generation) for manufacturers.
  • A smart factory is a manufacturing plant that puts the aforementioned concepts together, by adopting a system that is aware of the surrounding environment and the objects within it. As the research paper mentioned, “the Smart Factory can be defined as a factory where Cyber-Physical Systems communicate over the Internet of Things and assist people and machines in the execution of their tasks”.

The benefits and challenges of Industry 4.0 are similar to those of 4IR, but with a more specific focus on how they impact the manufacturing industry.

One such benefit is allowing manufacturers to offer customisation to customers. Industry 4.0 gives manufacturers the flexibility to respond to customer needs through interconnectivity, the Internet of Things and the Internet of Services. With connectivity to consumer devices through the Internet of Things, manufacturers gain seamless access to consumer behaviours and needs, and therefore have the potential to cater to unique consumer demands faster than conventional go-to-market methods.

On the flip side, one of the greatest challenges Industry 4.0 faces is security and privacy. With great powers of connectivity comes great responsibility to ensure that data transmitted across these connections is protected. The security challenges discussed in the Internet of Things article apply to manufacturers too – all the more so considering that processing methods are trade secrets in most manufacturing industries. At the same time, as manufacturers increasingly assume the role of collectors of consumer data under Industry 4.0, consumers’ concerns about how their data might be handled and used will grow.

Still, despite the challenges, the future is bright for Industry 4.0 because of the process efficiencies it promises, which is why the Malaysian government seems convinced of and committed to this technological trend, acknowledging the need for industries to transform accordingly through its 2019 Budget allocation for SMEs adopting automation technology.

Final Thoughts

In conclusion, even though the terms “Fourth Industrial Revolution” and “Industry 4.0” are often used interchangeably, it is rather clear that the two have different focuses. Some suggest that Industry 4.0 is a more relevant framing of technological progress than the concept of 4IR, while others consider Industry 4.0 a subset of the Fourth Industrial Revolution.

At the end of the day, though, it is up to us to figure out how we should envision the future with 4IR and Industry 4.0, by resolving pertinent issues surrounding personal data ethics and the future of the workforce.

Featured Image credit: Christoph Roser at AllAboutLean.com

Quick Take: Tech in 2018/2019 – The Year That Was, And Is To Come

We have come to the end of 2018, with the new year of 2019 chiming in moments away. I thought it would be a great idea to take a look at tech in the past year, and at what we can anticipate in the coming year.

But first, let’s look at what failed in the year that was.

Tech Fails in 2018

Source: 
https://www.digitalinformationworld.com/2018/12/roundup-of-2018-biggest-tech-failures.html
https://www.zdnet.com/article/the-worst-tech-failures-of-2018/
https://www.questionpro.com/blog/rise-and-fall-of-tech-innovation-2018-technology-failures/

As this site does not discuss tech gadgets, I will focus on concepts, ideas and principles in this post.

The year 2018 proved to be a bad year for privacy, with the world experiencing massive security breaches at Facebook and Google+, the latter of which will be shut down in 2019. Also related to privacy was the hacking of Bitfi, an electronic wallet for cryptocurrency.

Speaking of cryptocurrency, it was a downhill slide throughout most of 2018 for cryptocurrencies across the board, with Bitcoin seeing a price decline of ~73% since the start of the year. Also plaguing 2018 were downtime issues with cloud computing providers, as well as Google’s alleged involvement in creating a censored search engine for China and an AI-based warfare system.

Some, having observed little progress made in 2018, would classify Internet of Things, Big Data and Virtual Reality as technologies that did not live up to their hype.

Still, if you ask me, the real tech fails of 2018 may well have come from the U.S. Congress hearing sessions with Facebook’s Mark Zuckerberg and Google’s Sundar Pichai.

Now that we have somewhat summarised the fails, let’s move on to the slightly positive side of things.

Tech Wins in 2018

Source:
https://www.cnet.com/news/the-top-tech-stories-of-2018/
https://www.zdnet.com/article/2018-technology-trends-thatll-matter-a-decade-from-now/
https://www.recode.net/2018/12/13/18106455/best-of-2018-data-charts-tech-end-year-list-amazon-facebook-juul-moviepass-elon-musk

The following may not actually be tech “wins”, but they can be deemed positive breakthroughs (the word “wins” merely serves to juxtapose “fails”).

In 2018 we saw Google introduce its Duplex artificial intelligence software, which can make reservations and appointments over the phone while emulating the speech nuances of a human – indicating progress not merely in natural language processing but also in generating natural language-based content. This is indeed significant progress in the artificial intelligence field.

Cloud computing also had a great year: Amazon Web Services’ partnership with VMware gained steam and encouraged the former to lay out greater ambitions, Microsoft’s Azure commercial cloud services are on track to hit $34 billion in annual revenue based on their current Q1 run rate, and IBM decided to throw its hat into the ring and raise the game with its acquisition of Red Hat.

On a greener note, electric vehicles were selling like hot cakes, with 11-month U.S. electric vehicle sales up 57% over full-year 2017, against the backdrop of President Trump’s steel and import tariffs. Electric scooters were also flourishing (although they garnered some hate as well).

From a more general perspective, tech companies invested more than ever before in 2018, with record capital expenditures. The biggest companies made various acquisitions, from real estate to data centres, to keep up with customer demand and stay competitive. Of course, it remains to be seen what will come of these greater investments, but they are an encouraging sign for consumers and the industry as a whole.

Tech Hopes in 2019

Source:
https://www.thestar.com.my/tech/tech-news/2018/12/24/unfolding-future-innovation/
https://www.forbes.com/sites/steveandriole/2018/10/22/gartners-10-technology-trends-for-2019-the-good-the-obvious-and-the-missing/

Now, I will gloss over some of the technologies mentioned above that will continue their trajectory in the coming year, such as IoT and artificial intelligence. Instead, I will highlight a few interesting trends to look out for in 2019.

Right off the bat is the expected rollout of 5G network connectivity, which should improve on current data speeds from 4G LTE connections. This would be a catalyst for expanding IoT technology, especially autonomous vehicle technology. In 2018, 5G connectivity trials were conducted both abroad (e.g. Frankfurt, San Francisco) and within Malaysia (Cyberjaya and Putrajaya).

With 5G connectivity, augmented reality might finally become a thing, having somewhat disappointed in 2018. At a tech conference in September 2018, Vodafone demonstrated the possibility of 3D holographic calls on a 5G network – of course, this is probably a gimmick that ordinary users could not feasibly replicate, meant to illustrate how much data can be streamed over 5G, but it certainly opens up opportunities for immersive experiences once mobile data speeds are greatly improved. Other updates in the AR space include Facebook’s announcement that it will add body- and hand-tracking features to its AR development tools.

Augmented analytics could also see significant progress with advancements in AI and big data. For those unsure what “augmented analytics” means, it can be understood as an approach to “automate insight generation in a company through the use of advanced machine learning and artificial intelligence algorithms”. In other words, data analysis without heavy dependence on data analysts and data scientists.

In a nutshell

To be frank, when researching 2018 in tech, I found that the major stories dominating the year were unfortunately in the negative spotlight. Perhaps it has come to the point where society will be challenged more than ever to think about privacy and ethics – that is, if you can get digital natives to care.

2019 will be a challenging year for all, given the growing uncertainty in the global geopolitical and economic landscape – grave changes would certainly have a knock-on effect on the tech industry. But if we have learned anything from recent human history, it is that technological progress takes place at its own pace regardless of global circumstances – the more pertinent question is: where, then, will it stem from?

Certain leaders really need to be reminded that when parts of the world lose their global prowess, whether through regressive policies or isolationism, other places will take their place.

Nevertheless, let’s enter the new year with fresh hope and optimism – for the unknown future presents boundless opportunities. The ball is now in our court.

Sidetracked: Black Mirror: Bandersnatch, Netflix’s first interactive film for adults

First, the reason behind the word “sidetracked”: I originally planned to publish a year-ender post recapping 2018 and previewing 2019 in tech (that post is still being drafted), but having watched Black Mirror: Bandersnatch, I felt a sense of urgency to comment on the show and its use of a “Choose Your Adventure” interactive element as a way of storytelling – hence the original plan being sidetracked.

There is also the fact that writing about a show could be deemed a sidetrack from what this site usually discusses. But I will try to inject some relevance nonetheless.

For those (regrettably) uninformed, Black Mirror is a television series created by Charlie Brooker which falls under the genres of “science fiction”, “dystopian”, “satire”, “horror” and “anthology”, as described by Wikipedia. The show premiered its first two seasons on Channel Four, and was later purchased by Netflix, where it continued for another two seasons. Season 5 is expected to be released in 2019, whereas Bandersnatch was released on 28 December 2018.

Bandersnatch is an interactive film in which the audience makes decisions for the main character, Stefan Butler, a young programmer who attempts to adapt a fantasy novel into a “Choose Your Adventure” video game. You can see how “meta” this film is: viewers play a game to choose the narrative of a young programmer who designs a game allowing players to choose their own narrative.

Without spoiling too much of the story, I felt that this film was a critique of Black Mirror fans, who usually take pleasure in the demise of the show’s main characters. This is even explicitly proclaimed by the main character in one of the endings, after a fourth-wall-breaking moment.

The film also takes aim at the nature of “Choose Your Adventure” games (and now, films), where the player-audience has only an illusion of free will. The film has multiple ways to end, but only a handful can seriously be considered conclusions to the story. Certain pathways lead to the story ending abruptly, while others leave the audience with a rather unsatisfying end; either way, the film offers choices for the audience to go back and alter their previous decision(s). In short, the choices offered to the player-audience at each point are an illusion; the player-audience is still subject to the storylines set out by the show’s creators and writers. And in a way, the player-audience can relate to the film’s main character when he suspects he is being controlled.

Now, coming back to how a discussion of Bandersnatch is relevant to a site that discusses tech. This film is obviously unlike most (if not all) other films in its ability to get viewers actively involved in the progression of the story. And this is made possible by the show being on the online-based Netflix; such a film could not happen on terrestrial television, whose infrastructure simply does not allow for user input and engagement.

It is then not surprising that Bandersnatch, despite being one of a kind, is not the first interactive show on Netflix: there are at least two children’s shows that let the audience choose their own adventure as well.

Could the idea of play-watching be the future of television entertainment? (And I mean beyond real-time game shows, which are already a thing.) I predict that more show creators will see this as a viable way to express creativity, although it involves far more complex production than conventional shows – Bandersnatch runs at around 90 minutes, but all the footage produced reportedly clocks in at over 5 hours.

Personally, as much as I enjoy the “Choose Your Adventure” format in general, I felt that pursuing other conclusions after the first one reached makes for a confusing method of storytelling at first, especially if different endings are supposed to differ drastically from each other – you would not know which was the “true” ending intended by the showrunners.

But as far as Black Mirror: Bandersnatch is concerned, all the endings lived up to what Black Mirror is known for: the psychological, dystopian darkness that has amassed a cult following comparable to other major cable TV shows.

I like how Vox reviewed the film (and how The Verge covered the Reddit detectives cracking all the endings and Easter eggs), but by now there are many reviews published across different sources, from news sites to YouTube channels. So go ahead and choose your own review to read. Or better yet, choose your own adventure by play-watching it yourself.

From What I Did: Takeaways from My First Datathon with Data Unchained 2018 by Axiata

Some context for this post: it all started with a LinkedIn message from Phung Cheng Shyong asking whether I would be interested in participating in a datathon, as he was looking for teammates. As someone not from a data science background, my first thoughts were “What’s a datathon?” and “Pfft, are you sure you’re looking for the right person?”. But on second thought, the event certainly looked curious and interesting to an outsider. One thing led to another, and there I was, pictured with the rest of team ALSET as above, having endured a 24-hour challenge of brains while battling fatigue against the time limit.

Here are some thoughts and lessons learned from the event:

  1. To really become proficient in data science, one needs hands-on experience working on datasets (be it for work or hobby) – tutorials alone are sorely insufficient. It is when working on datasets, and attempting to find insights by executing certain models, that one realises what needs to be done. Team ALSET is grateful to its sole data scientist, Cheng Shyong, who has done prior data analytics work, both professionally and as a hobby. But even so, he could not complete the challenge with his experience alone, which is why…

  2. Stack Exchange, Kaggle and other knowledge-exchange sites for data science are all-important, whether for picking up new analytics tools or as a refresher on methods and procedures previously learnt. These sites serve as a guide on how the coding work should proceed, especially with the new tools required, and also as a troubleshooting companion when the code turns out to be erroneous. From this, I could see why the data science community is quite a close-knit one, a result of its openness in exchanging knowledge.

  3. A business model that exploits the prediction models is preferable to one that does not. The nature of the event placed emphasis on the business relevance of the data analytics work done, which meant that teams with only technically-minded people on board were not necessarily at an advantage if they fell short in effectively communicating how the models could be applied or used in a business setting. For this, team ALSET is grateful to have had two MBA candidates from Asia School of Business, Maksat Amangeldiyev and Saloni Saraogi, to help with the business case portion of the challenge.

  4. Do not underestimate the power of sleep and naps. From this experience, I can testify that three hours of sleep starting at 4am is barely enough to get through the afternoon, especially for someone who is neither a night person nor accustomed to working on little sleep. A teammate’s advice to take a short nap after lunch proved an effective energy recharge for the rest of the day.

  5. Keep an open mind, and be optimistic. Our presentation featured a short video showcasing how the proposed solutions in our business model might look – pulling this off within the limited time frame seemed (from a personal view) impossible at first. However, Maksat and Saloni leveraged their resources and connections to turn it into reality, which goes to show that ideas should not be discounted altogether at first thought. Both of them also displayed an admirable level of optimism and positivity, which was a great driver pushing the team to perform even when the prospects of success seemed minute. Perhaps such optimism is one of the crucial things that define a successful person – someone able to be the positive energy around others even when the going gets tough.

At the end of the day, I believe this event provided more than mere experience; it provided the opportunity to meet and get to know different people, and to learn lessons from them.

(A shout-out to Low Yen Wei for suggesting the takeaways to be written into an article. This article is also published as an article at LinkedIn: https://www.linkedin.com/pulse/from-what-i-did-takeaways-my-first-datathon-data-unchained-yau/ )

Quick Take: Malaysia’s 2019 Budget and Technology

Recently, I received a suggestion from a reader to write about Malaysia’s recent Government Budget for the 2019 fiscal year. I see this post as a nice break from the usual posts explaining technology topics. After all, the 2019 Budget is indeed a topic relevant to the future – so why not take a quick glance at it?

And as the title suggests, it should be quick – like 2, 3 minutes quick.

Before I go into some of the specifics, it is imperative to note that this is the first Government Budget tabled since the Pakatan Harapan government was installed in May 2018. The overarching theme of the budget is fiscal consolidation after the mismanagement of public funds by the previous government, according to Finance Minister Lim Guan Eng. Add to that the trade war between the US and China looming over the global economic landscape, and it is certainly a challenging budget in which to accommodate the multiple considerations, and subsequently to commit to them.

Here are the few points I have gathered:

1. A “yay” for the businesses

Now, the proposed measures are not as fancy as the “Malaysian Vision Valley” sort, but companies in the tech industry have commended the constructive measures aimed at helping businesses, which include a series of initiatives under the National Policy on Industry 4.0, also known as Industry4WRD (nope, that is not a spelling error – it is supposed to be pronounced “industry forward”).

The proposed measures include:

  1. RM210 million to support businesses in transitioning to Industry 4.0, where the Malaysian Productivity Corporation will help the first 500 SMEs to undergo readiness assessment for this migration;
  2. RM2 billion under the Business Loan Guarantee Scheme (SJPP) to support SMEs in investing into automation with guarantees up to 70%;
  3. RM3 billion for the Industry Digitalisation Transformation Fund at subsidised 2% interest rate to support adoption of smart technology among businesses;
  4. RM2 billion to be set aside by government-linked investment funds to co-invest on a matching basis with private equity and venture capital funds in strategic sectors and new growth areas;
  5. RM2 billion Green Technology Financing Scheme with a 2% subsidised interest rate for the first 5 years.

Of course the full list is longer, but you get the gist of it.

2. A “yay” for infrastructure

Prior to the budget announcement, the government had already facilitated a round of price reductions for fibre internet services. However, this drew some flak from people living in areas without access to such services. Possibly in response, the government announced an allocation of RM1 billion for the National Fibre Connectivity Plan, which aims to provide rural and remote areas with internet speeds of 30Mbps within five years.

3. A “huh” for the ordinary folks

Now, “huh” is an ambiguous expression, which is probably fitting to describe the Budget takeaways for the layman on the street.

On one hand, the government announced an RM10 million allocation to the Malaysia Digital Economy Corporation (MDEC) for the development of eSports in the country. This seems like a boost to the e-gaming community and industry, where the measure is perceived as a step towards giving due recognition to an area that is, frankly, still riddled with stigma from certain sections of society.

On the other, the government announced plans to impose a tax on imported online services beginning January 2020 – and yes, this includes Netflix, Spotify and Steam, as shown on the Budget presentation screen. The specifics of how the tax will be imposed have yet to be announced, so some clarity in this space is needed.

In addition, there was no announcement of personal tax relief for the purchase of devices – presumably this will be absent from the list of tax benefits for the 2019 tax year.

And then there is the peer-to-peer property crowdfunding platform announced for aspiring first-time home owners. There has been quite a bit of buzz around the idea of crowdfunding one’s way to owning a house, with some netizens claiming it is nothing more than a glorified Rent-To-Own scheme, while others describe the measure as a prelude to a subprime mortgage crisis. Now, I am not able to comment, given my employment with one of the stakeholders involved in this platform, but if anything, ordinary people on the street deserve clarity on the details of this initiative.

So that’s about it – a brief look at what the Malaysian 2019 Government Budget bodes for Digital Malaysia.

References

Full Budget Speech: https://www.thestar.com.my/news/nation/2018/11/02/here-is-the-full-speech-by-finance-minister-lim-guan-eng-during-the-tabling-of-budget-2019/

Compilation of views from the tech space by Digital News Asia:

https://www.digitalnewsasia.com/digital-economy/budget-2019-new-technology-drive-dynamic-economy;

https://www.digitalnewsasia.com/digital-economy/pikom-welcomes-budget-incentives-growth-digital-economy

A netizen’s comment on property crowdfunding platform initiative: https://klse.i3investor.com/blogs/purelysharing/180971.jsp

Image by Andre Gunaway of Tech in Asia: https://www.techinasia.com

From What I Read: Internet of Things

I have been thinking about how I should begin this post, since this is the comeback post after a two-to-three-month hiatus. And then I thought: perhaps it is a good time to review the way I write these posts. I found that I had fallen into a trap – one that entices a person to write a bunch of words that may end up conveying little. The content I wrote might look overwhelming and tedious to read, yet leave readers walking away having learned not as much.

Maybe it is time to force upon myself the KISS principle – Keep It Short and Simple. Less, when done right, could be more.

With that said, I am reviewing the format of how “From What I Read” is written, beginning with the set of questions I seek to answer. Previously, there were five questions: what is the subject about; how does it work or come about; what are its positive impacts; what are the issues; and how do we respond. Seeing that the questions have overlapping elements, it is better to group them into just three main items: The Subject – the definition and operating principles behind the topic; The Use Cases – functional examples and proposed applications; and The Issues – problems surrounding and arising from the topic. For now, I will toy with the idea of embedding the “how do we respond” component throughout the three items, while also reiterating it in a conclusion.

Secondly, the style of writing is also under review. Although there are merits to an academic style of writing, the lay audience (whom these posts are written for) may by and large not appreciate it. This revamp will be a harder challenge than narrowing five sub-topics down to three, since writing style is embedded in the writer – but hey, if you don’t try, you may never know. (Of course, the point of writing slightly more academically is to appropriately attribute ideas to the authors I sourced them from – but I guess the readers here, if there are any, do not really care as long as there is a list of references. I will probably hyperlink the sources of the main ideas instead of writing authors’ names here and there.) And yes, I think I should inject some casualness into the writing, just to experiment with styles.

With all things considered, let’s get started with Internet of Things (IoT).

The Subject

When it comes to IoT, some of us may have seen cool video clips of what a futuristic home would look like: automatic doors and windows, automated climate control (a fancy term for air-conditioning), smart refrigerators, and now even beds that pre-warm themselves before your slumber (for those in temperate climates, of course – such beds would not be very welcome in tropical weather). Well, some of these things are really not that far from reality, thanks to advancements in connectivity and electronics.

IoT, if I may explain it simply, is the enabling of devices to “communicate” with each other by being connected to the internet (or to one another). This implies the ability to turn devices on and off (and even fine-tune their settings), whether through programming based on input from the surroundings or other devices’ reactions (for example, cameras being turned on after motion sensors detect movement), or remotely through an external device (like a smartphone) connected to the internet (or to a private network).

So how do these IoT devices “talk” to, and even “instruct”, each other? The network that connects the devices sets the stage for how communication is facilitated. Devices can be connected via Ethernet or Bluetooth (for short, close range), WiFi (for medium range), or LTE and satellite communication (for wider coverage). Processing of data from the devices’ sensors is done in the “cloud” (on servers), but it is expected that as device technology develops, more processing will be conducted on-device before relaying useful data back to the cloud.
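As a toy illustration of devices “talking” over a network, here is a minimal sketch of the motion-sensor-triggers-camera example using MQTT, a lightweight publish/subscribe protocol commonly used in IoT, via the paho-mqtt library (1.x-style callbacks); the broker address and topic name are assumptions made for the example:

```python
# Minimal sketch: a camera controller subscribes to a motion-sensor topic on an MQTT
# broker and "switches on" whenever the sensor publishes that movement was detected.
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"          # hypothetical local MQTT broker
MOTION_TOPIC = "home/livingroom/motion"  # hypothetical topic the motion sensor publishes to

def on_connect(client, userdata, flags, rc):
    client.subscribe(MOTION_TOPIC)

def on_message(client, userdata, msg):
    if msg.payload.decode() == "detected":
        print("Motion detected - switching the camera on")  # a real device would start recording

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```

The sensor side would simply publish “detected” to the same topic; either device could equally be a tiny microcontroller, a phone app, or a cloud service.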

The Use Cases

Currently, one of the most prominent examples of IoT is smart speakers, such as Google Home and Amazon’s Echo, with which users can set reminders and timers, obtain information such as news, and even shop online via voice command. When integrated with other smart devices such as smart light bulbs, users can take home automation further into reality. Such a use of IoT is not merely a fancy trick when receiving guests at home; it is even more meaningful to the elderly and disabled, whose physical movement may be limited or constrained.

Of course, IoT may also help consumers plan and manage resources. There are ideas where smart refrigerators could detect shortages in grocery supplies and alert users to restock (and even offer to place the order for them). Meanwhile, smart climate control systems could proactively manage energy usage to achieve the desired indoor temperature, aiding energy efficiency.

On a more macro scale, some cities have used IoT for traffic control and management, and some for waste management. That being said, implementing and developing smart cities requires a huge sum of public investment, as well as the reception and adoption of IoT by the public at large. However, as autonomous vehicles increasingly become a reality, there will also be significant progress in smart city technologies to facilitate integration with these smart cars.

Outside of consumer applications, IoT has found its place in businesses as well. There are use cases for IoT on dairy farms, where the health of livestock is monitored. In the healthcare industry, IoT can be deployed for real-time tracking and assistance of patients from a remote location, such as dispatching help as soon as sensors detect a patient’s fall. For broader industrial use, there are use cases in the form of smart security systems and smart air-conditioning systems that provide effective and efficient control of the environment.

Coming back to consumer applications, sensors in smart devices help relay data about a device’s performance back to the manufacturer. This allows manufacturers to provide better after-sales service, such as maintenance, repairs and replacements, enhancing the value proposition offered by enterprises to consumers.

And in line with enhancing the value proposition, businesses may come to understand their customers better as they gather more data from the smart devices in use, and can further provide solutions tailored to customers’ needs.

The Issues

I think this point of the post is the ideal place to highlight the elephant in the room – data privacy. Since the sensors of smart devices detect users’ actions before storing and relaying data, individuals’ data will unquestionably be collected somewhere – and as we discussed earlier, the companies manufacturing these devices are collecting it. Of course, not all such companies build their IoT business model on selling this data, but the phrase “not all” suggests that some do.

Some might think that data about one’s room temperature is not too big a matter to fuss about. But keep in mind that when multiple data inputs are combined in analysis, one’s every last activity could be pieced together – not something we would necessarily want a third party to know.

This brings us to another related topic – security. Flawed IoT networks and devices could be susceptible to attacks by hackers. Remember the smart speakers mentioned earlier? While individuals may not mind trivial conversations at home being eavesdropped on, a compromised smart speaker in an office setting would have serious consequences.

And then there is the dependence on the internet to function, which poses a significant concentration risk on internet and electricity infrastructure. In a world of devices connected to the internet, electricity and the internet itself become “too big to fail”. In the event that they do fail, the outcome could range from being annoyed by non-functioning household appliances to being “imprisoned” by non-functioning smart locks.

All these concerns aside, we must be cognisant that IoT depends on the availability of high-speed internet, and that these devices will take up a lot of broadband bandwidth. This presents a two-fold problem: one, we may need to set aside part of the high-speed internet some of us currently enjoy for smart devices to transmit data to the cloud; two, not all of us have reliable access to high-speed internet at this point in time, nor will we in the foreseeable future. Thus, until broadband services are made affordable and accessible to more people, IoT will struggle to take off into mass adoption.

The Takeaway

The use cases and issues highlighted are probably just the tip of the iceberg of how IoT could impact our everyday lives, giving us indications of how we may need to transform the way we currently do things on the journey to widespread IoT adoption.

And like most other topics highlighted previously, the recurring theme of concern is personal privacy – how much privacy (which constitutes personal liberty and freedom) are we, as a society, willing to sacrifice for convenience?

Or perhaps such a question may soon lose its relevance to a generation born with smart devices and Facebook-enabled services, where we are increasingly indifferent to sharing our digital identity data in exchange for the convenience of skipping an entire sign-up form.

References

What is the Internet of Things? WIRED explains by Matt Burgess: https://www.wired.co.uk/article/internet-of-things-what-is-explained-iot

What is the IoT? Everything you need to know about the Internet of Things right now by Steve Ranger: https://www.zdnet.com/article/what-is-the-internet-of-things-everything-you-need-to-know-about-the-iot-right-now/

Your terrible broadband will kill the Internet of Things dead by Steve Ranger: https://www.zdnet.com/article/your-terrible-broadband-will-kill-the-internet-of-things-dead/

A Simple Explanation of ‘The Internet of Things’ by Jacob Morgan: https://www.forbes.com/sites/jacobmorgan/2014/05/13/simple-explanation-internet-things-that-anyone-can-understand/#72f188651d09

What Is the Internet of Things? by Fergus O’Sullivan: https://www.cloudwards.net/what-is-the-internet-of-things/

Smart cities: A cheat sheet by Teena Maddox: https://www.techrepublic.com/article/smart-cities-the-smart-persons-guide/

The Smart Way To Build Smart Cities by HBS Working Knowledge: https://www.forbes.com/sites/hbsworkingknowledge/2018/04/04/the-smart-way-to-build-smart-cities/#31df532b7b19

P.S. By the way, part of the reason I chose to write about IoT is the upcoming Axiata Data Unchained 2018 datathon, in which I will be participating – I will try to document it in a future post.

From What I Read: Deep Learning

(If you came because of the Bee Gees’ excerpt, congratulations – you’ve just been click-baited.)

Recently, I came across a video in my Facebook news feed showing several Hokkien phrases used by Singaporeans – one of which was “cheem”, literally “deep” in English. It is usually used to describe someone or something very profound or complex, usually in content and philosophy.

I perceive that, despite the geographical differences, there is somewhat of a common understanding between East and West on the word “deep”. The English term “shallow” means simplistic as well as lacking physical depth, and so does the phrase “skin deep”.

Of course, the term “deep learning” (DL) does not simply derive from the sense of “deep” as complicated, but the method of DL is certainly nothing short of complex.

For this post, I will do the write-up in a slightly different manner – one reading will serve as the “anchor” article for answering each section, and readings from other articles will then be added onto the foundation laid by the “anchor”. If you pay attention, you will notice the pattern.

My primary readings are from the following: Bernard Marr (through Forbes), Jason Brownlee, MATLAB & Simulink, Brittany-Marie Swanson, Robert D. Hof (through MIT Technology Review), Radu Raicea (through freecodecamp.org), and Monica Anderson (through Artificial Understanding and a book by Lauren Huret). As usual, the detailed references are included below.

What is the subject about?

(Now before I go into the readings, I wanted to trace how the term “deep learning” was first derived. It first appeared in academic literature in 1986, when Rina Dechter wrote about “Learning While Searching in Constraint-Satisfaction-Problems” – the paper introduced the term to Machine Learning, but did not shed light on what DL is more commonly known for today: neural networks. It was not until the year 2000 that the term was introduced to neural networks by Aizenberg & Vandewalle.)

Tracing back to my previous posts, DL is a subset of Machine Learning, which itself is a subset of Artificial Intelligence. Marr pointed out that while Machine Learning took several core ideas of AI and “focuses them on solving real-world problems…designed to mimic our own decision-making”, DL puts further focus on certain Machine Learning tools and techniques, applying them to solve “just about any problem which requires “thought” – human or artificial”.

Brownlee offered a different dimension to the definition of DL: “a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks”. This definition was supported by several more researchers cited in the article, some of whom are:

  • Andrew Ng (“The idea of deep learning as using brain simulations, hope to: make learning algorithms much better and easier to use; make revolutionary advances in machine learning and AI”)
  • Jeff Dean (“When you hear the term deep learning, just think of a large deep neural net. Deep refers to the number of layers typically…I think of them as deep neural networks generally”)
  • Peter Norvig (“A kind of learning where the representation you form have several levels of abstraction, rather than a direct input to output”)

The article as a whole was rather academic in nature, but also offered a simplified summary: “deep learning is just very big neural networks on a lot more data, requiring bigger computers”.

The description of DL as a larger-scale, multi-layer neural network was supported by Swanson’s article. The idea of a neural network mimicking a human brain was reiterated in Hof’s article.

How does it work?

Marr described how DL works as having a large amount of data fed through neural networks – “logical constructions asking a series of binary true/false questions, or extract a numerical value, of every bit of data which pass through them, before classifying them according to the answers received” – in order to make decisions about other data.

Marr’s article gave an example of a system designed to record and report the number of vehicles of a particular make and model passing along a public road. The system would first be fed with a large database of car types and their details, which it would process (hence “learning”) and compare with data from its sensors – by doing so, the system could classify the types of vehicles that passed by with some probability of accuracy. Marr further explained that the system would increase that probability by “training” itself with the new data – and thus new differentiators – it receives. This, according to Marr, is what makes the learning “deep”.
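To make Marr’s example a little more concrete, here is a minimal, hypothetical sketch of the idea – a classifier that outputs a probability for each vehicle class and keeps refining itself as new labelled data arrives. This is my own illustration using scikit-learn’s SGDClassifier (which supports incremental training via partial_fit), not the actual system Marr described; the features are made-up stand-ins for sensor readings.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Initial "large database of car types": 200 labelled examples with 3 toy features
# (say, length, height and engine noise picked up by the roadside sensors).
X_initial = rng.normal(size=(200, 3))
y_initial = rng.integers(0, 2, size=200)        # 0 = hatchback, 1 = SUV (toy labels)

model = SGDClassifier(loss="log_loss")          # logistic loss, so we get probabilities
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# A vehicle passes the sensor: classify it with some probability of accuracy.
new_vehicle = rng.normal(size=(1, 3))
print(model.predict_proba(new_vehicle))         # e.g. [[0.4 0.6]]

# New labelled observations arrive; "training" continues and the model refines itself.
X_new = rng.normal(size=(50, 3))
y_new = rng.integers(0, 2, size=50)
model.partial_fit(X_new, y_new)
```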

Brownlee’s article, through its aggregation of prior academic research and presentations, pointed out that the “deep” refers to the multiple layers within the neural network models – which the systems use to learn representations of data “at a higher, slightly more abstract level”. The article also highlighted the key aspect of DL: “these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure”.

Raicea illustrated the idea of neural networks as neurons grouped into three different types of layers: the input layer, hidden layer(s) and the output layer – the “deep” refers to having more than one hidden layer. The computation is facilitated by connections between neurons, each associated with a (randomly set) weight which dictates the importance of the input value. The system iterates through the data set and compares its outputs against the real outputs to see how far off it is, before readjusting the weights between neurons.
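Here is a minimal sketch of the loop Raicea describes – an input layer, one hidden layer and an output layer, with randomly initialised weights that get readjusted after comparing the network’s outputs against the real ones. The tiny dataset and architecture are mine, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy dataset: 4 examples with 3 input features each, and the real outputs.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Randomly set weights on the connections between layers.
W1 = rng.normal(size=(3, 5))      # input layer (3)  -> hidden layer (5)
W2 = rng.normal(size=(5, 1))      # hidden layer (5) -> output layer (1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
learning_rate = 0.5

for epoch in range(5000):
    # Forward pass through the layers.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # How far off are we from the real outputs?
    error = output - y

    # Readjust the weights between neurons to shrink that error (backpropagation).
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_output
    W1 -= learning_rate * X.T @ grad_hidden

print(np.round(output, 2))        # predictions drift towards the real outputs [0, 1, 1, 0]
```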

How does it impact (in a good way)?

Marr cited several applications of DL that are currently deployed or under work-in-progress. DL’s use-case in object recognition would enhance the development of self-driving cars, while DL techniques would aid in the development of medicine “genetically tailored to an individual’s genome”. Something closer to the layman and the average Joe: DL systems are able to analyse data and produce reports in natural-sounding human language with corresponding infographics – this can be seen in some news reports generated by what we know as “robots”.

Brownlee’s article did not expound much on the use-cases. Nevertheless, it highlighted that “DL excels on problem domains where the input (and even output) are analog”. In other words, DL does not need its data to come in numbers and tables, and neither does the data it produces – offering a qualitative dimension to analysis as compared to conventional data analysis.

Much of the explicit benefits were discussed in the prior posts on Machine Learning and Artificial Intelligence.

What are the issues?

Brownlee recapped the prior issues of DL in the 1990s through Geoff Hinton’s slide: back then, datasets were too small, computing power was too weak, and generally the methods of operating it were improper. MATLAB & Simulink pointed out that DL became useful because the first two factors have seen great improvements over time.

Swanson briefly warned about the downside of using multiple layers in the neural network: “more layers means your model will require more parameters and computational resources and is more likely to become overfit”.
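As a rough, back-of-envelope illustration of that warning (my own arithmetic, not from Swanson’s article), the parameter count of a plain fully connected network balloons as layers are stacked:

```python
def dense_param_count(layer_sizes):
    # Each pair of adjacent layers contributes (inputs * outputs) weights plus one bias per output.
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

shallow = [784, 128, 10]             # one hidden layer
deeper = [784, 512, 512, 512, 10]    # three hidden layers

print(dense_param_count(shallow))    # 101,770 parameters
print(dense_param_count(deeper))     # 932,362 parameters
```

More parameters means more data and more computation are needed to fit them without overfitting.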

Hof cited points raised by DL critics, chiefly on how the development of DL and AI in general has deviated from considering how an actual brain functions, “in favour of brute-force computing”. An example was captured by Jeff Hawkins on how DL fails to take into consideration the concept of time, in which human learning (which AI is supposed to emulate) depends on the ability to recall sequences of patterns, and not merely still images.

Hof also mentioned that current DL applications are within speech and image recognition, and to extend the applications beyond them would “require more conceptual and software breakthroughs” as well as advancements in processing power.

Many of DL’s other issues are rather similar to those faced by Machine Learning and Artificial Intelligence, which I have captured accordingly in previous posts. One of the recurring themes is how inexplicably DL systems arrive at their outputs, or in the words of Anderson’s article, “the process itself isn’t scientific”.

How do we respond?

Usually, I would comment in this section with very forward-looking, society-challenging calls for action – and indeed I have done so for the posts on AI and Machine Learning.

But I would like to end with a couple of paragraphs from Anderson in a separate publication, which captured the anxiety about AI in general, and some hope for DL:

“A computer programmed in the traditional way has no clue about what matters. So therefore we have had programmers who know what matters creating models and entering these models into the computer. All programming is like that; a programmer is basically somebody who does reduction all day. They look at the rich world and they make models that they enter into the computer as programs. The programmers are intelligent, but the program is not. And this was true for all old style reductionist AI.

… All intelligences are fallible. That is an absolute natural law. There is no such thing as an infallible intelligence ever. If you want to make an artificial intelligence, the stupid way is to keep doing the same thing. That is a losing proposition for multiple reasons. The most obvious one is that the world is very large, with a lot of things in it, which may matter or not, depending on the situations. Comprehensive models of the world are impossible, even more so if you considered the so-called “frame problem”: If you program an AI based on models, the model is obsolete the moment you make it, since the programmer can never keep up with the constant changes of the world evolving.

Using such a model to make decisions is inevitably going to output mistakes. The reduction process is basically a scientific approach, building a model and testing it. This is a scientific form of making what some people call intelligence. The problem is not that we are trying to make something scientific, we are trying to make the scientist. We are trying to create a machine that can do the reduction the programmer is doing because nothing else counts as intelligent.

… Out of hundreds of things that we have tried to make AI work, neural networks are the only one that is actually going to succeed in producing anything interesting. It’s not surprising because these networks are a little bit more like the brain. We are not necessarily modeling them after the brain but trying to solve similar problems ends up in a similar design.”

Interesting Video Resources

But what *is* a Neural Network? | Chapter 1, deep learning – 3Blue1Brown: https://youtu.be/aircAruvnKk

How Machines *Really* Learn. [Footnote] – CGP Grey: https://www.youtube.com/watch?v=wvWpdrfoEv0

References

What Is The Difference Between Deep Learning, Machine Learning and AI? – Forbes: https://www.forbes.com/sites/bernardmarr/2016/12/08/what-is-the-difference-between-deep-learning-machine-learning-and-ai/#394c09c726cf

What is Deep Learning? – Jason Brownlee: https://machinelearningmastery.com/what-is-deep-learning/

What Is Deep Learning? | How It Works, Techniques & Applications – MATLAB & Simulink: https://www.mathworks.com/discovery/deep-learning.html

What is Deep Learning? – Brittany-Marie Swanson: https://www.datascience.com/blog/what-is-deep-learning

Deep Learning – MIT Technology Review: https://www.technologyreview.com/s/513696/deep-learning/

Want to know how Deep Learning works? Here’s a quick guide for everyone. – Radu Raicea: https://medium.freecodecamp.org/want-to-know-how-deep-learning-works-heres-a-quick-guide-for-everyone-1aedeca88076

Why Deep Learning Works – Artificial Understanding – Artificial Understanding: https://artificial-understanding.com/why-deep-learning-works-1b0184686af6

Artificial Fear Intelligence of Death. In conversation with Monica Anderson, Erik Davis, R.U. Sirius and Dag Spicer – Lauren Huret: https://books.google.com.my/books?id=H0kUDAAAQBAJ&dq=all+intelligences+are+fallible&source=gbs_navlinks_s

From What I Read: Machine Learning

Let’s be up front here: my introduction section for the Artificial Intelligence post stole quite a lot of limelight from the remaining posts within the AI series (since this post covers a subset of AI, and the next post possibly a subset of this post), so I will not bother thinking too much about coming up with an introduction with a “bang”.

The other disclaimer is that this post is not how I envisioned it a month ago. This is mainly because as I searched the topic further, I found more, deeper (and did I say more varied?) ways to explain it. And this extends beyond reading materials – there are several kinds of videos out there which aim to explain the subject (at varied levels of edutainment).

But in the interest of time and effort, I will try to take a rather layman-ish, bare-bones approach in handling the subject of Machine Learning. I will, however, include links to resources I find interesting (even those that did not end up being used for this post) at the end.

My readings for this post are from Bernard Marr on Forbes, MATLAB & Simulink, Expert System, Yufeng Guo on Towards Data Science, Danny Sullivan on MarTech Today, SAS and several Quora replies.

What is the subject about?

So what is machine learning (ML)?

It is rather widely acknowledged that ML is a subset of Artificial Intelligence (AI), and so, at a conceptual level, it bears similarities to the goal of AI: to mimic human intelligence in machines. On a subset level, as Marr mentioned in his article, ML seeks to “teach computers to learn in the same way we do” through interpreting information, classifying it, and learning from successes and failures.

Such a description is echoed by an article from MATLAB & Simulink (M&S), which stated that ML is a “data analytics technique that teaches computers to…learn from experience”, even adding that this learning method “comes naturally to humans and animals”.

So what do “learning from experience” and “learning from successes and failures” underline? They imply the absence of explicit programming by a programmer, as Expert System’s (ES) article explained, with the article further adding the idea of automation to the learning process.

Guo took a different approach by defining ML as “using data to answer questions”, outlining the idea of training from an input (“data”) and the outcome of making predictions or inferences (“answer questions”). Guo further mentioned that the two parts of the definition are connected by analytical models, which SAS’ article also highlighted.

To conclude this section, we can connect the two approaches to defining ML and sloppily amalgamate them as “a data analytics technique that teaches computers to learn automatically through experiences by using data, ultimately to answer questions through inferences and predictions”.

How does it work?

In explaining how ML works, many of the articles in review would mention the two types of techniques under ML, namely supervised learning and unsupervised learning.

As M&S’ article puts it, supervised learning develops predictive models based on both input and output data to predict future outputs. Examples of applications include handwriting recognition (which leverages classification techniques like discriminant analysis and logistic regression) and electricity load forecasting (which uses regression techniques like linear and nonlinear models and stepwise regression).
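A minimal sketch of the supervised case – the model sees labelled (input, output) pairs and learns to predict outputs for unseen inputs. The synthetic dataset below is my own stand-in for something like handwriting features, using scikit-learn’s logistic regression (one of the classification techniques M&S mentions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labelled data: inputs X with known outputs y.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))     # accuracy on examples the model has never seen
```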

Unsupervised learning seeks to find hidden patterns or intrinsic structures in input data through grouping and interpreting the data – there is no output data involved. This type of technique is usually used for exploratory data analysis, and sees applications in object recognition, gene sequence analysis and market research. The M&S article cited clustering as the most common unsupervised learning technique, which uses algorithms like hierarchical clustering and hidden Markov models. In short, unsupervised learning is good for splitting data into clusters.
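And the unsupervised case – input data only, no output labels – using k-means clustering, which sits in the same family as the clustering techniques M&S cites. Again, a toy sketch of my own:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Input data only; the true labels are deliberately thrown away.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])           # cluster assignments for the first few points
print(kmeans.cluster_centers_)       # the hidden structure it discovered
```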

ES added several dimensions to the types of techniques and ML algorithms to shed more light on how ML works, namely semi-supervised ML algorithms (falling between supervised and unsupervised learning, these use both labeled data – input data with accompanying output data – and unlabeled data for training, to improve learning accuracy) and reinforcement ML algorithms (which interact with the environment by producing actions and discovering errors or rewards, to determine the ideal, optimised behaviour within a specific context).
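For the semi-supervised case ES describes (a handful of labelled examples plus many unlabelled ones), here is a minimal sketch using scikit-learn’s self-training wrapper; the data and the 50-label split are assumptions of mine for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Keep labels for only the first 50 examples; mark the rest as unlabelled (-1).
y_partial = y.copy()
y_partial[50:] = -1

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)              # learns from labelled and unlabelled data together
print(model.score(X, y))             # checked against the true labels we held back
```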


Sullivan’s article mentioned the three major parts that make up ML systems, namely the model (the system that makes predictions/identifications), the parameters (the factors used by the model to produce decisions) and the learner (the system that adjusts the parameters, and subsequently the model, by looking at differences between predictions and actual outcomes).
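Sullivan’s three parts map quite naturally onto a few lines of code. In this hypothetical sketch of mine, the model is a straight line, the parameters are its slope and intercept, and the learner nudges the parameters based on the gap between predictions and actual outcomes:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=100)
y = 3.0 * X + 2.0 + rng.normal(scale=0.5, size=100)   # the actual outcomes

slope, intercept = 0.0, 0.0          # the parameters
learning_rate = 0.01

for step in range(2000):
    predictions = slope * X + intercept               # the model
    error = predictions - y                           # predictions versus actual outcomes
    # The learner: adjust the parameters to shrink the error.
    slope -= learning_rate * (2 * error * X).mean()
    intercept -= learning_rate * (2 * error).mean()

print(round(slope, 2), round(intercept, 2))           # should end up near 3.0 and 2.0
```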

Such a way of explaining the workings of ML systems bears similarity to how CGP Grey explains it in his video, which I find rather interesting.

Guo outlined 7 steps of ML in his separate article (a compressed sketch of these steps in code follows the list below):

  1. Data gathering
  2. Data preparation
  3. Model selection
  4. Model training
  5. Model evaluation
  6. (Hyper)Parameter Tuning
  7. Model prediction
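Stitched together, Guo’s seven steps look roughly like the compressed sketch below – a hypothetical scikit-learn example of mine, with each comment marking one of the steps. (Tuning and training interleave in practice, which is why steps 4 and 6 appear out of order here.)

```python
from sklearn.datasets import load_iris                        # 1. data gathering
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(          # 2. data preparation
    X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVC())                 # 3. model selection

search = GridSearchCV(model, {"svc__C": [0.1, 1, 10]})         # 6. (hyper)parameter tuning
search.fit(X_train, y_train)                                   # 4. model training

print(search.score(X_test, y_test))                            # 5. model evaluation
print(search.predict(X_test[:3]))                              # 7. prediction on new data
```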

Most of the steps are pretty similar to what Sullivan’s article described and implied, including the step of training, which Sullivan described as the “learning part of machine learning” and “rinse and repeat” – this process reshapes the model to refine its predictions.

Again, this is not a technical post, so I will spare you too much detail. I will, however, include links to a few videos for you to watch should you be interested.

And finally, on Quora, there is also a response that breaks down how ML works in a very relatable manner – that machines are trying to do tasks the way we do them, but with infinite memory and the speed to handle millions of transactions every second.

How does it impact (in a good way)?

Many of us have been experiencing the applications of ML, unknowingly, every day. Take YouTube’s video recommendation system, which relies on algorithms and input data – your search and watch history. The model is further refined with other inputs, such as the “Not interested” button you click on some of its recommendations, and (perhaps) the percentage of each video watched.

And speaking of recommendations, how can we not include the all-too-famous Google search engine and its results recommendations? And speaking of Google, how can we not bring to mind their Google Translate feature which allows users to translate languages through visual input?

So certainly, the use cases for ML are quite prevalent in areas that the public at large is familiar with.

M&S outlined several other areas where ML has become a key technique to solve problems, such as credit scoring in assessing credit-worthiness of borrowers, motion and object detection for automated vehicles, tumour detection and drug discovery in the field of biology, and predictive maintenance for manufacturing.

SAS’ article highlighted that ML enables faster and more complex data analysis with better accuracy, while also being able to process the large amounts of data coming from data mining and affordable data storage.

And when ML is able to do certain tasks which would have required humans in the past, that would mean cost savings for the enterprises involved. This, though, provides a nice segue to the next section.

What are the issues?

Now, call me lazy if you want, but as I have mentioned earlier: since ML is a subset of AI, several issues AI faces are also faced by ML, such as the problem of input data quality (both accuracy and freedom from bias), and the difficulty in explaining how a model reached its conclusion (especially if it involves deploying neural network techniques).

To reiterate from the previous post, we may foresee jobs being displaced as tasks can be increasingly automated and taken over more efficiently by ML systems. That being said, the glass-half-full view of the situation is that job functions are being augmented and changed – if we could get the workforce to adapt to these job functions, the impact could be minimised.

As ML becomes widely adopted, there will be a greater demand for skilled resources. This sounds like a solution to the glass-half-full view mentioned above, but seeing that the field of ML is still relatively new, it would probably mean higher costs and difficulty in acquiring ML expertise, let alone training the existing workforce.

And as ML becomes increasingly and widely used, the hunger for data will become more insatiable. We as a society may increasingly find ourselves having to address the question of how much personal data we should be sharing, as Doromal writes in his Quora reply to a question.

But to get to wide adoption, there is a need for the democratisation of ML: presently, investments in ML can be hefty, hence an exclusivity whereby more advanced systems are available only to users who can afford them.

How do we respond?

My answer to this question would not run far from what was posed in the post on AI. But to add on to that, as I mentioned in the earlier section, we as a society need to take a hard look at how we perceive data privacy, since ML is dependent on the availability of data to form better predictions and inferences.

There is growing interest in ML among companies that have seen the benefits it can reap. Perhaps this high demand will bring a greater push in the development and subsequent democratisation of the technology. That said, companies need to find a balance between deploying ML and managing a workforce that may become increasingly redundant.

The teaching and learning of ML should become more widespread to meet the increased need for such a skilled workforce, while a better level of awareness about ML among individuals in society will also be needed in order to understand how the automated decisions they face are derived.

ML is unlike the other topics mentioned in this blog, in that the technology is already here today, up and running (while things like ICOs and even the commercial use of blockchain are still yet to be seen). And as implied and mentioned, its applications have already become rather prevalent. Individuals in society, however, are probably still some way off from having a good understanding of ML, but that will likely change soon as widespread automation increasingly creeps and looms on the horizon.

Interesting video resources

How Machines Learn – CGP Grey: https://youtu.be/R9OHn5ZF4Uo

Machine Learning & Artificial Intelligence: Crash Course Computer Science #34 – CrashCourse: https://www.youtube.com/watch?v=z-EtmaFJieY

What is Machine Learning? – Google Cloud Platform: https://www.youtube.com/watch?v=HcqpanDadyQ

But what *is* a Neural Network? | Chapter 1, deep learning – 3Blue1Brown: https://youtu.be/aircAruvnKk

References

What Is Machine Learning – A Complete Beginner’s Guide In 2017 – Forbes: https://www.forbes.com/sites/bernardmarr/2017/05/04/what-is-machine-learning-a-complete-beginners-guide-in-2017/#43fbce66578f

What Is Machine Learning? | How It Works, Techniques & Applications – MATLAB & Simulink: https://www.mathworks.com/discovery/machine-learning.html

What is Machine Learning? A definition – Expert System: https://www.expertsystem.com/machine-learning-definition/

How Machine Learning Works, As Explained By Google – MarTech Today: https://martechtoday.com/how-machine-learning-works-150366

How do you explain Machine Learning and Data Mining to non Computer Science people? – Quora: https://www.quora.com/How-do-you-explain-Machine-Learning-and-Data-Mining-to-non-Computer-Science-people

Machine Learning: What it is and why it matters | SAS: https://www.sas.com/en_my/insights/analytics/machine-learning.html

What is Machine Learning? – Towards Data Science: https://towardsdatascience.com/what-is-machine-learning-8c6871016736

The 7 Steps of Machine Learning – Towards Data Science: https://towardsdatascience.com/the-7-steps-of-machine-learning-2877d7e5548e

5 Common Machine Learning Problems & How to Beat Them – Provintl: https://www.provintl.com/blog/5-common-machine-learning-problems-how-to-beat-them

What are the main problems faced by machine learning engineers at Google? – Quora: https://www.quora.com/What-are-the-main-problems-faced-by-machine-learning-engineers-at-Google

An Honest Guide to Machine Learning: Part One – Axiom Zen Team – Medium: https://medium.com/axiomzenteam/an-honest-guide-to-machine-learning-2f6d7a6df60e

These are three of the biggest problems facing today’s AI – The Verge: https://www.theverge.com/2016/10/10/13224930/ai-deep-learning-limitations-drawbacks