How the pursuit of Artificial Intelligence is changing our world.

April 25th, 2018 by Heather Maloney

The pursuit of artificial intelligence – a computer that can learn and respond like a human – began in the 1950s (1). However, it is only in the last few years that we have seen great leaps towards this goal. The sudden improvement is largely attributed to breakthroughs in an area of technology called neural networks – programming that attempts to mimic the way the brain works, and a key technique in the field of machine learning.

Up until the use of neural networks and machine learning, programming a computer to perform a particular task – think displaying words on a screen, adding up columns of numbers, or converting an image from colour to black and white – has required a programmer to describe, in exact detail, the process of achieving that task. The human brain performs many tasks, seemingly effortlessly, that virtually no one can describe how to do beyond some vague concepts and pointers in the right direction. That’s not sufficient to program a machine to do the task. Consider the task of telling one human face from another – could you describe how your loved one looks in sufficient detail that a person who has never met them could pick them out of a crowd with any certainty? Very difficult! This is just one example of how amazing the human brain is at rapidly processing large amounts of information. We perform many such complex tasks almost simultaneously, without even realising it.

A neural network is a programmatic attempt to replicate the manner in which the brain is believed to perform complex tasks. The diagram below is a typical representation of a neural network used to carry out a particular task. As an example, suppose the input is an image of the face of a person who has just passed a camera, and the task of the neural network is to determine whether the image is of “Joe Citizen”. The first layer processes the input (the camera image of a face) and passes information about that image, in the form of weightings, down to the next layer. The second layer receives that analysis, performs further analysis, and passes another set of weightings down to the next layer, and so on until the end result: the most likely answer to the question posed at the outset (the attributes of Joe Citizen already being known to the program). The “hidden layers” may comprise many different layers, allowing deeper and deeper analysis and greater refinement aimed at arriving at the correct answer.

Neural network diagram
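
To make the idea of layers and weightings concrete, here is a minimal sketch of the forward pass through a tiny network in Python. The layer sizes and the random weights are purely illustrative – a real face-recognition network would have far more inputs and layers, and its weights would be learned from data rather than invented:

```python
import numpy as np

def sigmoid(x):
    # Squashes each value into the range 0..1
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Illustrative sizes: 4 input values (standing in for image features),
# one hidden layer of 3 neurons, and 1 output neuron answering
# "is this Joe Citizen?" as a probability.
w_hidden = rng.normal(size=(4, 3))   # weightings: input -> hidden layer
w_output = rng.normal(size=(3, 1))   # weightings: hidden -> output layer

features = np.array([0.9, 0.2, 0.4, 0.7])  # stand-in for pixel-derived features

hidden = sigmoid(features @ w_hidden)  # first round of analysis
answer = sigmoid(hidden @ w_output)    # final layer: the most likely answer

print(f"Probability this is Joe Citizen: {answer[0]:.2f}")
```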

Machine learning involves allowing a computer program to learn by working through a large amount of data that also contains the answer to a particular question, e.g. observations of people who did and did not go on to contract a particular disease. The machine learning program builds a neural network of the weightings required to answer the question being posed. That neural network is then put to work against fresh data to further refine the learning, including humans providing feedback on the program’s accuracy. Finally, armed with all that learning stored in a neural network, the program can be applied to new, live data in order to interpret it … and it turns out it can do so with great speed and accuracy, surpassing that of humans (1).
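
A sketch of that train-then-refine-then-apply workflow, using scikit-learn’s small neural network classifier. The features, thresholds and numbers here are entirely invented for illustration – real medical data would look nothing like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Invented example data: each row is [age, blood pressure, cholesterol];
# each label is 1 if the person later contracted the disease, else 0.
rng = np.random.default_rng(1)
X = rng.uniform(low=[20, 90, 3.0], high=[80, 180, 8.0], size=(500, 3))
y = (0.02 * X[:, 0] + 0.01 * X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(0, 0.5, 500) > 4.0).astype(int)

# 1. Learn the weightings from historical data where the answer is known.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# 2. Check the learning against fresh data the model has never seen.
print("Accuracy on held-out data:", model.score(X_test, y_test))

# 3. Apply the trained network to new, live data.
new_patient = [[55, 150, 6.5]]
print("Predicted outcome:", model.predict(new_patient)[0])
```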

The above is a very simplistic description of how neural networks operate; computer scientists working with neural networks are constantly improving their performance. Neural networks are still in the relatively early days of their development, and already there are many different neural network models to choose from, some better suited to particular problem types than others.

An important distinction between a neural network and a “regular” program is that the neural network can be relatively easily tuned to perform better over time, “learning” from more and more data (see the sketch below). A “regular” computer program must be manually reprogrammed as requirements change, again requiring someone to describe exactly what is required and to understand all the implications of that change throughout the system.
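
Continuing the hypothetical disease example, that incremental tuning might look like this – a sketch assuming scikit-learn’s MLPClassifier and its partial_fit method, where each new batch of labelled observations adjusts the existing weightings rather than requiring a rewrite of the program:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
model = MLPClassifier(hidden_layer_sizes=(10,), random_state=0)

# Initial training on the first batch of labelled data; the full list of
# possible answers (classes) must be declared on the first call.
X_first = rng.uniform(size=(100, 3))   # stand-in features
y_first = rng.integers(0, 2, size=100)  # stand-in labels
model.partial_fit(X_first, y_first, classes=[0, 1])

# Later, as more labelled observations arrive, the same network is
# updated in place - no reprogramming, just more learning.
for _ in range(5):
    X_more = rng.uniform(size=(100, 3))
    y_more = rng.integers(0, 2, size=100)
    model.partial_fit(X_more, y_more)
```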

Machine learning has been applied in the last few years, with great effect, in the following areas:

  • Image / Facial recognition – ever wondered how the image search feature of Google Images, or Facebook’s speedy face-tag suggestions when you upload a photo, have become so good? Earlier this month, a man wanted for an alleged crime in China was picked up by security cameras within about 10 minutes of entering a concert (3). (A sketch of how face matching can look in code follows this list.)
  • Navigation & self-driving cars – being able to respond to incoming information, such as what other road users are doing around you, is essential to solving the problem of self-driving cars. The amount of technology involved in an autonomous car is awesome – and it needs to be, given the life-and-death stakes. “Even if it will take some time for fully autonomous vehicles to hit the market, AI is already transforming the inside of a car.” It is predicted that AI will first bring to our cars a host of so-called “guardian-angel” features to reduce the likelihood of accidents (11).
  • Speech recognition – in the last few years speech recognition (at least for native English speakers) has become very accurate, requiring very little training for a particular person. I now control my mobile phone by voice on a regular basis, because talking to my phone is much faster than typing – apparently three times faster, according to a Stanford University study (4). Going the other way, Google’s latest text-to-speech system, called Tacotron 2, adds inflection to words based on punctuation (5), making it even more human-like when it reads text to you or responds with an answer to a question. Speech recognition in devices such as Google Home and Amazon Alexa is making simple tasks much easier. The article entitled “Amazon Echo has transformed the way I live in my apartment – here are my 19 favourite features” shows how speech recognition is being used for hands-free computer assistance in a simple home context (9). Applications of this technology are vast and life-changing for those who don’t have free hands (e.g. a surgeon at work) or are not able to type.
  • Prediction – more quickly and accurately diagnosing a current situation, or predicting that a current set of information is an indicator of a future state, e.g. diagnosing disease, predicting financial market movements, or identifying criminal behaviour such as insurance or banking fraud (13). The ability of a neural network to process vast amounts of data quickly, and to build its own conclusions about the impact of one factor on another (i.e. to learn), is already helping doctors more accurately diagnose conditions such as heart disease (12). Reducing the level of inaccuracy we accept in medical diagnosis will lead to much better patient outcomes and reduce the cost of healthcare for our ageing population.
  • Playing games – a lot of AI research uses games to work out how to train a computer to learn (8). From time to time I play an online version of the Settlers of Catan board game; when players leave the game (ostensibly because they have lost their internet connection … usually it’s when they are losing!), you get the option to continue the game and have an AI finish it on their behalf. It amuses me that I find myself, and others, immediately ganging up on the AI player. I mean, they won’t care if you make their game difficult – they’re a robot after all! It was actually the success of a computer in beating the best human players of Go, perhaps the hardest game we play, that heralded the success of artificial intelligence and made the world take notice of its capabilities (14). “In the course of winning, [the robot] somehow taught the world completely new knowledge about perhaps the most studied and contemplated game in history.”
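
As flagged in the facial-recognition point above, here is a minimal sketch of face matching in Python. It assumes the open-source face_recognition library (one of many possible tools – commercial systems use their own proprietary models), and the image file names are invented for illustration:

```python
# Requires: pip install face_recognition
import face_recognition

# A known photo of "Joe Citizen" (hypothetical file names throughout;
# assumes exactly one face is visible in the known photo).
known_image = face_recognition.load_image_file("joe_citizen.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A frame captured from a security camera.
camera_image = face_recognition.load_image_file("camera_frame.jpg")
camera_encodings = face_recognition.face_encodings(camera_image)

# Compare every face found in the frame against Joe's known face.
for encoding in camera_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    if match:
        print("This looks like Joe Citizen")
```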

But, will the rise of artificial intelligence take away our jobs? Some say yes, others say no (6), but they all say that the new jobs created due to artificial intelligence will be different to current roles, and require different skills (7).

Worse than job loss, will AI cause a computer-versus-human war, or lead to our extinction? Elon Musk is well known for his warnings against AI. Arguably, the pressure he has applied to the technology industry helped bring about an agreement among the technology giants to use AI only for good (10).

I don’t believe that AI will ever result in a computer takeover of the world, because there is more that makes humans different from other animals than just our ability to think. Reproducing our ability to think, learn and make decisions, even in a super-human way, does not make a computer human. The capacity for machine learning / deep learning to significantly improve our lives, particularly in health and in solving some of our most challenging problems, is exciting. However, I believe it is right to be cautious – to move ahead with the knowledge that machine learning could also be used for harmful purposes. Computers can also “learn” the negative elements of humanity (15).

Business owners, innovators and leaders should consider how machine learning might be harnessed in their organisations to provide better value, predict more accurately, respond more quickly, or make breakthroughs in knowledge in their problem domain. Let’s harness artificial intelligence for good! Read more in “How Business (big and small) can Harness Artificial Intelligence”.

References:
(1) https://www.forbes.com/sites/bernardmarr/2016/12/08/what-is-the-difference-between-deep-learning-machine-learning-and-ai/#4cc961d726cf
(2) https://hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence
(3) https://www.abc.net.au/news/2018-04-17/chinese-man-caught-by-facial-recognition-arrested-at-concert/9668608
(4) https://hci.stanford.edu/research/speech/index.html
(5) https://qz.com/1165775/googles-voice-generating-ai-is-now-indistinguishable-from-humans/
(6) http://www.abc.net.au/news/2017-08-09/artificial-intelligence-automation-jobs-of-the-future/8786962
(7) http://www.digitalistmag.com/iot/2017/11/29/artificial-intelligence-future-of-jobs-05585290
(8) https://www.businessinsider.com.au/qbert-artificial-intelligence-machine-learning-2018-2
(9) https://www.businessinsider.com.au/amazon-echo-features-tips-tricks-2018-2
(10) https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x
(11) http://knowledge.wharton.upenn.edu/article/ai-tipping-scales-development-self-driving-cars/
(12) https://www.telegraph.co.uk/news/2018/01/03/artificial-intelligence-diagnose-heart-disease/
(13) http://bigdata-madesimple.com/artificial-intelligence-influencing-financial-markets/
(14) https://deepmind.com/research/alphago/
(15) https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
