Archive for the ‘Technology’ Category

How can Business (big and small) Harness Artificial Intelligence?

April 30th, 2018 by Heather Maloney

Futuristic city with delivery person sending off drones with packages from skyscraper
A recent Contactpoint blog post described how artificial intelligence and machine learning work, and how they are impacting our world at large. This blog attempts to answer the question “How can business, both small and large, utilise AI to make significant advancements?” AI is certainly not a technology only available to large corporations.

I assert that there are 3 main ways that your business can benefit from artificial intelligence (‘AI’):

  1. By integrating your website / app with software that has been improved by the use of AI. Such integration will significantly improve the value provided by your solution.
  2. By using software that has been improved by AI for running your business, thus significantly improving the manner in which you run your business, on an ongoing basis.
  3. By running your own deep learning exercises to determine the answer to a difficult question, which either improves your business performance or your understanding of your clients.

I expect that you already, perhaps unknowingly, use the outcome of AI or machine learning every day. Understanding it will help you harness it even more, so let’s explore just a few examples of each of these opportunities.

Integration
The Google search engine is underpinned by AI – the more web pages it crawls, the better it gets at providing people with valid and useful search results. That’s part of the benefit of AI: traditional programming requires modification over time in response to the way people use it, whereas AI-driven solutions learn and improve on their own.

Baidu, the so-called Chinese version of Google, allows you to upload an image and request “similar images”. The search for similar images is not based on the text around the images on a web page, but solely on the content of the images (2). Images, in technology terms, are made up of pixels of colour, which individually tell you very little. It is the manner in which the colours are combined, and the hard and soft edges around groups of pixels, which determine what is actually represented. AI underpins Baidu’s ability to find similar images – a very complex problem, and probably not something you could program a computer to perform directly. Traditionally, a programmer had to be able to tell the computer exactly how to achieve a goal before that goal could be solved programmatically. With AI, instead of telling the computer how to solve the problem, the program is allowed to train itself to solve it, getting better at the task the more times it is performed.

We all use text search to find the things we need in Google or other search engines. It’s been possible to integrate the Google Search Engine into your website or app for many years, including restricting the search results to a particular domain or set of domains, thus providing excellent search results to your visitors without needing to write a search engine algorithm yourself. The ability to also search by images may be the differentiator that your website or app needs to deepen the value for your customers.

Other AI enriched applications that may enhance your application include:

  • Voice recognition – for speech to text and voice control of your app.
  • Language translation.
  • Image recognition e.g. Facebook suggesting name tags for people in photos you upload.
  • Route planning e.g. navigating from one place to another, taking traffic and other factors into consideration.

Clickup.com, a project management tool, provides another example of integration. They announced this month that ClickUp is now integrated with Amazon Alexa and Google Assistant, allowing users to quickly interact with the online software by voice (3).
Google and Microsoft allow you to play with some of their AI enhanced functions via websites (4).

Operations
There are many functions that all businesses carry out. These functions are attracting the application of AI in order to make the tools used to complete them significantly better than they have been before, and thereby attract new business.

Keeping up with the latest news in your industry during your morning commute is now so much easier thanks to tools such as Voice Aloud, which enables your smartphone to read an article to you while you drive (carefully of course). Your smartphone will also allow you to search using voice commands, via Google Assistant or Siri on the iPhone, allowing you to search hands free.
I recently asked my Android phone “Okay Google, what do I have on today?” expecting to have a list of my appointments read to me – it did that, and then started playing me 2 – 3 minute snippets of daily news recorded by various news agencies around Australia. It was a fantastic way to keep up-to-date, and it “learned” that behaviour all on its own.

Google Search enables you to find relevant information with remarkable accuracy, powered by AI. It’s very important for Google’s advertising revenue that Google Adwords shows relevant ads to searchers, because it is relevance that inspires people to click on an ad, thus earning Google revenue. Similarly, ads which appear in Facebook news feeds are very reliable for reaching the right audience; once your ad has achieved an excellent click-through rate, Facebook’s AI will promote it to “look-a-like” audiences, based on what it knows about the people who already clicked. You can now spend money on pay-per-click with much more confidence, because you can tailor your ads to specifically targeted audiences.

In a recent Contactpoint blog we talked about chatbots – the best of these are underpinned by AI, improving their results the more they are used, so that they can help answer an inbound question before a human gets involved.

A number of online customer service and customer relationship management tools are now underpinned by AI. In these functions AI is bringing valuable insights as you use the tools, such as:
- Which clients are at the greatest risk of leaving you? (5)
- Which phrases and styles of interacting with customers produce the greatest sales results?
- What are the most important additional products or services to provide to your customers?

The better banking and financial management tools are now underpinned by AI to help you identify fraud (6). Similarly, computer networks are now better secured against intrusion, viruses and malware by solutions that use AI to detect unusual behaviour (7).

If your operations involve designing products and engineering, AI is making great inroads into design tools to help speed up the process (12).

Actionable Insights / Solving Problems
So far we have considered AI-led improvements to fairly general problems. Your business will be operating in a particular domain in which you are an expert, and in which there are very specific problems that have not yet been solved, or can’t be solved quickly and reliably for a large number of customers. This is where the power of AI may be the most potent, because machine learning / deep learning can be used to arrive at breakthroughs in your particular domain. Whilst it helps if you have lots of data to feed the deep learning process, smaller businesses may be able to access public data to achieve the same goal, or adapt a pre-existing neural network to a similar problem.

Tools such as Chorus.ai are ready to take your organisation’s live data, in order to provide you with valuable insights in a specific operational area (8). In the case of Chorus.ai it analyses your meetings, particularly sales meetings, to help you get the best performance out of future meetings.

AI is being used to great effect by large corporations such as Walmart to respond quickly when high-turnover products look likely to run out of stock; Walmart recently reported a year-on-year 63% increase in sales (10).

Smaller organisations are also using AI to gain actionable insights, including a zoo which can now predict high-attendance days, and therefore staffing requirements, much more accurately by using AI to determine all the factors (not just weather) that increase visitor rates (10).

Domo is a tool created to help businesses, small and large, collate data from a wide range of sources (social media, ecommerce, chat bots etc), and help an organisation spot trends in real time (11).
In the area of product design and engineering, a concept called Generative Design, underpinned by AI, is enabling faster design and many more possible designs to choose from: all the constraints of the product are entered, and the program then generates a large number of possible solutions (13).

However, for a problem more specific to your industry or expertise, you may need to perform a highly customised deep learning experiment. Once you have determined the question you need to answer, there are 7 steps in performing your own AI or deep learning experiment (a minimal code sketch follows the list):

  • Gathering data
  • Preparing the data
  • Choosing an AI model to suit your question / domain
  • Training the model with data which contains the results / answers
  • Evaluation of the performance of the model
  • Tuning the parameters that control the training (hyperparameter tuning)
  • Applying the model to fresh data in order to gain insights or greater performance. (1)
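
To make those steps concrete, here is a minimal sketch in plain Javascript (the language used for the web examples later in this archive). The single-feature model and the numbers are made up purely for illustration – a real project would use a proper machine learning library and your own data – but it shows where each of the seven steps sits:

```javascript
// Illustrative only: a single-feature linear model standing in for a real
// AI model, trained on made-up (hypothetical) data to show the 7 steps.

// 1. Gathering data – e.g. past ad spend vs. sales (hypothetical numbers)
const raw = [[1, 3.1], [2, 4.9], [3, 7.2], [4, 8.8], [5, 11.1]];

// 2. Preparing the data – split into training and evaluation sets
const train = raw.slice(0, 4);
const test  = raw.slice(4);

// 3. Choosing a model – here, the simplest possible: y = w * x + b
let w = 0, b = 0;
const predict = x => w * x + b;

// 4. Training the model with data that contains the answers
const learningRate = 0.01;
for (let epoch = 0; epoch < 5000; epoch++) {
  for (const [x, y] of train) {
    const error = predict(x) - y;      // how wrong is the current guess?
    w -= learningRate * error * x;     // nudge the weights to reduce the error
    b -= learningRate * error;
  }
}

// 5. Evaluation – measure the error on data the model has never seen
const mse = test.reduce((s, [x, y]) => s + (predict(x) - y) ** 2, 0) / test.length;
console.log('test error:', mse.toFixed(3));

// 6. Tuning – in practice you would adjust learningRate, the number of epochs
//    and the model choice, then repeat steps 4–5 until the result is acceptable.

// 7. Applying the model to fresh data in order to gain insights
console.log('predicted result for an input of 6:', predict(6).toFixed(1));
```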

As a business owner or leader you should be considering the way in which artificial intelligence or machine learning can change the way you operate and solve your customers’ problems. Don’t hesitate to get in contact if you would like to discuss how AI can be put to work for your organisation.

Read this article to understand more about how machine learning works, and how artificial intelligence is impacting our world.

If you would like to discuss how AI might benefit your organisation, please don’t hesitate to get in touch with Heather Maloney.

References:
(1) https://towardsdatascience.com/the-7-steps-of-machine-learning-2877d7e5548e
(2) https://www.wired.com/2013/06/baidu-virtual-search/
(3) https://clickup.com/blog/alexa-google-assistant-project-management/
(4) https://aidemos.microsoft.com/ & https://experiments.withgoogle.com/ai
(5) http://www.bizdata.com.au/customer-smartdetect?gclid=CjwKCAjwlIvXBRBjEiwATWAQIseaubDdIaMGue_bjuC9BZU3WLErB1Qj9u12XKmkZGZPz-UGO4balhoCaxgQAvD_BwE
(6) https://www.techemergence.com/machine-learning-fraud-detection-modern-applications-risks/
(7) https://www.techemergence.com/network-intrusion-detection-using-machine-learning/
(8) https://www.chorus.ai/product/
(9) https://www.morganmckinley.com.au/article/how-ai-helping-small-business-today
(10) https://www.clickz.com/5-businesses-using-ai-to-predict-the-future-and-profit/112336/
(11) https://www.techemergence.com/ai-in-business-intelligence-applications/
(12) https://www.engineersrule.com/solidworks-puts-artificial-intelligence-work/
(13) https://www.autodesk.com/solutions/generative-design


How the pursuit of Artificial Intelligence is changing our world.

April 25th, 2018 by Heather Maloney

The goal of achieving artificial intelligence – a computer that can learn and respond like a human – began in the 1950s (1). However, it is only in the last few years that we have seen great leaps forward towards this goal. The sudden improvement is attributed to breakthroughs in an area of technology called neural networks – programming that attempts to mimic the way the brain works, and a core feature of machine learning.

Up until the use of neural networks and machine learning, the act of programming a computer to perform a particular task – think displaying words on a screen, adding up columns of numbers, changing an image from colour to black and white – has required that a programmer can describe in exact detail the process of achieving that task. The human brain performs many tasks, seemingly effortlessly, that are virtually impossible for anyone to describe how they are done, beyond some vague concepts and pointers in the right direction. That’s not sufficient to be able to program a machine to do the task. Consider the task of identifying one human face from another – can you describe how your loved one looks, sufficient that another person who has never met them could pick them out in a crowd with any certainty? Very difficult! This is just one example of how amazing the human brain is when it comes to rapidly processing large amounts of information. We perform many such complex tasks almost simultaneously, without even realising.

A neural network is a programmatic attempt to replicate the manner in which it is believed the brain performs complex tasks. The diagram below is a typical representation of a neural network used to carry out a particular task. As an example, consider the input being an image of the face of a person who just passed a camera, and the task being to determine whether that image is “Joe Citizen” (whose attributes are already known to the program). The first layer processes the input (the camera image of a face) and then passes information about that image, in the form of weightings, down to the next level of processing. The second level receives that analysis, performs further analysis, and then passes another set of weightings down to the next level, and so on until the end result: the most likely answer to the question posed at the outset. The “hidden layers” may comprise many different layers, allowing deeper and deeper analysis and greater refinement towards the correct answer.

Neural network diagram
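
To give a feel for those weightings, here is a toy forward pass in Javascript with one hidden layer. The weights and inputs are made up for illustration – a real face-recognition network would have millions of learned weights – but the structure of “weight, sum, squash, pass down a level” is the same:

```javascript
// Toy forward pass: 3 inputs -> 2 hidden neurons -> 1 output.
// All weights here are made up; in a real network they are learned from data.
const sigmoid = z => 1 / (1 + Math.exp(-z));

// Each hidden neuron has one weight per input, plus a bias.
const hiddenWeights = [[0.4, -0.6, 0.9], [0.2, 0.8, -0.5]];
const hiddenBiases  = [0.1, -0.2];

// The output neuron weights the two hidden activations.
const outputWeights = [1.2, -0.7];
const outputBias    = 0.05;

function forward(inputs) {
  // First layer: weight the inputs, add the bias, squash with sigmoid.
  const hidden = hiddenWeights.map((weights, i) =>
    sigmoid(weights.reduce((sum, w, j) => sum + w * inputs[j], hiddenBiases[i]))
  );
  // Output layer: weight the hidden activations in exactly the same way.
  return sigmoid(
    outputWeights.reduce((sum, w, i) => sum + w * hidden[i], outputBias)
  ); // e.g. "how likely is it that this image is Joe Citizen?"
}

console.log(forward([0.9, 0.1, 0.4])); // a number between 0 and 1
```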

Machine learning involves allowing a computer program to learn by working through a large amount of data which also contains the answer to a particular question – for example, observations of people, some of whom later contracted a particular disease and some of whom did not. The machine learning program builds a neural network of the weightings required to answer the question being posed. That neural network is then put to work against fresh data to further refine the learning, including humans providing feedback on the program’s accuracy. Finally, armed with all that learning stored in a neural network, the program can be applied to new, live data in order to interpret it … it turns out, with great speed and accuracy, surpassing that of humans (1).

The above is a very simplistic description of the way neural networks operate; computer scientists involved in the use of neural networks are constantly improving their performance. Neural networks are still in relatively early days of development, and already there are many different neural network models to choose from, some better at particular problem types compared to others.

An important distinction between neural networks and “regular” programming is that a neural network can be relatively easily tuned to perform better over time, as well as “learning” from more and more data. A “regular” computer program needs to be manually reprogrammed as requirements change, again requiring someone to describe exactly what is required and to understand all the implications of that change throughout the system.

Machine learning has been applied in the last few years, with great effect, in the following areas:

  • Image / facial recognition – ever thought about how the image search feature of Google Images, or the speedy face tag suggestions by Facebook upon upload of a photo, have become so good? Earlier this month, a man wanted for an alleged crime in China was picked up by security cameras within about 10 minutes of entering a concert (3).

  • Navigation & self-driving cars – being able to respond to incoming information, such as what other road users are doing around you, is essential for solving the problem of self-driving cars. The amount of technology involved in an autonomous car is awesome – and it needs to be, given the life and death decisions involved. “Even if it will take some time for fully autonomous vehicles to hit the market, AI is already transforming the inside of a car.” It is predicted that AI will first bring to our cars a host of so-called “guardian-angel” features to reduce the likelihood of accidents (11).
  • Speech recognition – in the last few years speech recognition (at least for native English speakers) has become very accurate, requiring very little training for a particular person. I now control my mobile phone using voice on a regular basis, because talking to my phone is much faster than typing – apparently 3 times faster, according to a study by Stanford University (4). Google’s latest text-to-speech system, called Tacotron 2, adds inflection to words based on punctuation to further improve understanding (5), making it even more human-like when it is reading text to you or responding with an answer to a question. Speech recognition in devices such as Google Home and Amazon Alexa is making simple tasks much easier. The article entitled “Amazon Echo has transformed the way I live in my apartment – here are my 19 favourite features” shows how speech recognition is being used for hands-free computer assistance in a simple home context (9). Applications of this technology are vast and life-changing for those who don’t have free hands (e.g. a surgeon at work) or are not able to type.
  • Prediction – more quickly and accurately diagnosing a current situation or predicting that a current set of information is an indicator of a future state e.g. in diagnosing disease, predicting financial market movements, identifying criminal behaviour such as insurance or banking fraud (13). The ability of a neural network to process vast amounts of data quickly, and build its own conclusions with regard to the impact of one factor on another (learn) is already helping doctors to more accurately diagnose conditions such as heart disease (12). Reducing the acceptable level of inaccuracy in medical diagnosis will lead to much better patient outcomes and reduce the cost of healthcare to our ageing population.
  • Playing games – a lot of AI research uses games to work out how to train a computer to learn (8). From time to time I play an online version of the Settlers of Catan board game; when players leave the game (ostensibly because they have lost their internet connection … usually it’s when they are losing!), you get the option to continue the game and have AI finish it on their behalf. It amuses me that I find myself, and others, immediately ganging up on the AI player. I mean, it won’t care if you make its game difficult – it’s a robot after all! It was actually the success of a computer – DeepMind’s AlphaGo – in beating the best human players of Go, one of the hardest games we play, that heralded the success of artificial intelligence and made the world take notice of its capabilities (14). “In the course of winning, [the robot] somehow taught the world completely new knowledge about perhaps the most studied and contemplated game in history.”

But, will the rise of artificial intelligence take away our jobs? Some say yes, others say no (6), but they all say that the new jobs created due to artificial intelligence will be different to current roles, and require different skills (7).

Worse than job loss, will AI cause a computer-versus-human war or lead to our extinction? Elon Musk is well known for his warnings against AI. It could be argued that the pressure he has applied to the technology industry helped lead to an agreement that the technology giants will only use AI for good (10).

I don’t believe that AI will ever result in a computer takeover of the world, because there is more that makes humans different from other animals … not just our ability to think. Reproducing just our ability to think, learn and make decisions, even in a super-human way, does not make a computer human. The capacity for machine learning / deep learning to significantly improve our lives, particularly in the areas of health and solving some of our most challenging problems, is exciting. However, I believe that it is right to be cautious; to move ahead with the knowledge that machine learning could also be used for harmful purposes. Computers can also “learn” the negative elements of humanity (15).

Business owners, innovators and leaders should consider how machine learning might be harnessed for your organisation in order to provide better value, predict more accurately, respond more quickly, or make break-throughs in knowledge in your problem domain. Let’s harness artificial intelligence for good! Read more about “How Business (big and small) can Harness Artificial Intelligence”.

References:
(1) https://www.forbes.com/sites/bernardmarr/2016/12/08/what-is-the-difference-between-deep-learning-machine-learning-and-ai/#4cc961d726cf
(2) https://hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence
(3) http://www.abc.net.au/news/2018-04-17/chinese-man-caught-by-facial-recognition-arrested-at-concert/9668608
(4) http://hci.stanford.edu/research/speech/index.html
(5) https://qz.com/1165775/googles-voice-generating-ai-is-now-indistinguishable-from-humans/
(6) http://www.abc.net.au/news/2017-08-09/artificial-intelligence-automation-jobs-of-the-future/8786962
(7) http://www.digitalistmag.com/iot/2017/11/29/artificial-intelligence-future-of-jobs-05585290
(8) https://www.businessinsider.com.au/qbert-artificial-intelligence-machine-learning-2018-2
(9) https://www.businessinsider.com.au/amazon-echo-features-tips-tricks-2018-2
(10) https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x
(11) http://knowledge.wharton.upenn.edu/article/ai-tipping-scales-development-self-driving-cars/
(12) https://www.telegraph.co.uk/news/2018/01/03/artificial-intelligence-diagnose-heart-disease/
(13) http://bigdata-madesimple.com/artificial-intelligence-influencing-financial-markets/
(14) https://deepmind.com/research/alphago/
(15) https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist


While you are waiting for a human, try me!

February 13th, 2018 by Heather Maloney

In the last year or so, due to the increased use of online chat, including Facebook Messenger and Slack, chatbots have become much more common. Chatbots are computer programs which interact with a customer in response to information he/she types into a chat tool / instant message program. The increased use of chatbots has led to a maturing of the tools used to create them, providing an opportunity for your business to utilise bots not just to provide faster help for customers with problems, but also to support sales enquiries.

Why would a customer be willing to interact with a bot?

The prime benefit for a customer is to get their problem solved or question answered more quickly. No matter how many people are added to a customer service team, there will always be times of peak enquiry where you have to wait to talk to a human.

Why wouldn’t a customer want to use a bot?

The most likely reason people would rather talk to a human is because they feel that it is harder to get their problem across to the computer, and because of that, it’s a more frustrating process and less likely to result in a satisfactory answer compared to talking to a person, whether by chat or by voice.

What to do?

To overcome the hesitation of people to use chat bots, several strategies have been used in the past including:

  1. Forcing users to first use the bot – making the first interaction/s, or all interactions, in your online chat tool orchestrated by the bot rather than a human, and only pointing the customer to other support options after the bot has spent some time attempting to solve the problem. A similar strategy was used in the past with knowledge base functionality, whereby a person submitting a support request via a form was directed to first search the knowledge base, and only after searching were they given the option to submit a question.
  2. Pretending to be human – giving the bot human qualities, like attempts at personality and humour, and giving it a human-like avatar or photo.

Neither of these strategies is very successful. People can usually tell when they are not talking to a person, so trying to pretend otherwise can feel like an insult to their intelligence. Forcing people to use a bot first simply adds to the frustration of getting a result.

I believe that both chat and bots have their place in the provision of customer service. If I can get further information about a product I am considering via online chat, I am more likely to buy. It’s great knowing that I can just jump onto a chat within a website or web application, ask a question, and get an answer without having to dial up, sit in a telephone queue, and finally get through to a person. If on the way to getting my chat question answered, a bot steps in to try and help me get my solution more quickly, then I’m fine with that too, as long as I can differentiate the two types of assistance, and I can choose to ignore the advice of the bot and still get to chat to a real person, even if that takes longer.

Chatbots are great to help organisations provide 24/7 support, attempting to answer the question when an operator isn’t available, but allowing a person to take over the enquiry when the next support shift starts.

Steps to Creating a Useful Bot

When a human provides customer service, the first step is to understand the enquirer’s problem. If they have heard the problem before, a proficient customer service operator can very quickly rectify it or provide the right guidance to the customer. Understanding the problem, of course, relies on the customer describing the problem in a way that’s understandable by the customer service person. And therein lies the difficulty. People describe the same problem in myriad ways, because they don’t always understand the nature of the problem themselves.

Take someone who has forgotten their password as an example. Whilst you may think most people would understand how to describe that in the fewest number of words, those who are less technically savvy might not realise that’s their problem. They may instead report that as “I can no longer login to your system”. In such a situation, a human being knows to quickly ask questions based on the statements made by the customer, and will glean clues from their tone and manner and the way they say those words. The customer service person could ask “okay, did you type in your username and password?” or they might say “when was the last time you logged in?” The answers provided by the customer will allow the support person to discover the root cause of the caller’s issue. If a bot is to determine the cause of a customer’s problem and the best solution, it needs to be able to interact with the customer and know the right questions to ask based on the way that a person communicates the problem. This is the challenge of programming a bot.

Step 1: Choose the Most Common Problems

Because it is challenging to program around the issues that people have, the first step when creating a new bot is to choose the most common problems, or most frequently asked questions, that your customer service team currently deal with. Issues such as forgetting your password are unlikely to be the most common problems reported to your team, as such issues are likely already made easy for the customer to solve on their own. However, there will be issues that occur over and over, or questions raised on a regular basis, that you will be able to identify and solve, even if that is by pointing the customer to the right help article. Look for common categories of problems, as well as specific issues and questions that recur.

Step 2: Document how people describe the problem

In order to program a bot to assist customers with a particular problem, you need to identify all the common ways that a person may describe the same problem – that is, when they actually know what the problem is. For example, for those who realise that they have forgotten their password, there are still many different ways that a person may say that. They may say “I don’t know my password”. Or “I have forgotten my password”. Or “I have forgotten my login details”. Or “I don’t know what my password is”. Or “I need a new password”. Or “I have lost my password”. Or “Please reset my password”. These may not sound that different to a human, but to a simple computer program they are all different, despite their similarities.
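
One simple way to picture this – far less sophisticated than the natural language processing a mature bot platform would provide – is a keyword matcher that maps many phrasings onto one intent. The patterns and intent names below are hypothetical:

```javascript
// Hypothetical keyword-based matcher: maps many phrasings to one intent.
// Real bot platforms use trained language models; this is the simplest possible version.
const intents = [
  { name: 'forgot_password',
    patterns: [/forgot(ten)?.*(password|login)/i, /reset.*password/i, /(lost|don'?t know).*password/i] },
  { name: 'cannot_login',
    patterns: [/can'?t log\s?in/i, /no longer log\s?in/i, /unable to log\s?in/i] },
];

function matchIntent(message) {
  for (const intent of intents) {
    if (intent.patterns.some(p => p.test(message))) return intent.name;
  }
  return 'unknown'; // hand over to a human
}

console.log(matchIntent('I have forgotten my password'));          // forgot_password
console.log(matchIntent('Please reset my password'));              // forgot_password
console.log(matchIntent('I can no longer login to your system'));  // cannot_login
```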

You are likely to find that one statement by a customer could actually require different solutions depending on their answers to other questions. Back to our password problem: if a user says “I can’t login”, then the next question might be “Have you forgotten your password?” If the answer is no, then the next question might be “Have you forgotten your username?”, which may lead to yet further questions. If the answer is yes, then the conversation has reached the end of the logical decision tree, ready for the solution.
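
Those follow-up questions can be modelled as a small decision tree. The sketch below is hypothetical and deliberately tiny (the reset link is a placeholder); most bot-building tools let you express the same structure visually:

```javascript
// Hypothetical decision tree for the "I can't login" conversation above.
// Each node either asks another question or ends with a suggested solution.
const loginTree = {
  ask: 'Have you forgotten your password?',
  yes: { solve: 'No problem – here is the password reset link: https://example.com/reset' },
  no:  {
    ask: 'Have you forgotten your username?',
    yes: { solve: 'Your username is the email address you registered with.' },
    no:  { solve: 'Let me connect you with a member of our support team.' },
  },
};

// Walk the tree with a list of yes/no answers (in a real bot, one answer per chat turn).
function resolve(node, answers) {
  if (node.solve) return node.solve;
  const answer = answers.shift() === 'yes' ? 'yes' : 'no';
  return resolve(node[answer], answers);
}

console.log(resolve(loginTree, ['no', 'yes']));
// -> "Your username is the email address you registered with."
```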

As you document the questions that may be asked in order to discover the actual problem, you are likely to identify other questions and problems that you can solve along the way. You will need to decide how much time you will spend on these other problems based on how common they are, but at the same time, you don’t want to leave the customer hanging.

Step 3: Determine how the bot will direct the customer in order to solve the problem, or answer the question

Once you’ve worked out all the different ways a support request could be phrased by a customer, and the questions you need to ask to get to the root of the problem, you need to work out how the bot should respond. In the case of a password reset being required, you may program the bot to provide the password reset link. A password reset is a very simple problem to solve; the most common problem for you may be something much more complicated.

The solution could be one or more of: providing a link to a help article which explains how to carry out a task, suggesting a list of possible products that meet the customer’s requirements, providing a link to a step by step screen overlay guiding the user through solving a particular problem, or staying in the chat in order to wait for a human to help.

Step 4: Test

Having programmed your bot to handle some common problems, it is now really important to test it with customers. You will gather much information about how useful it is by looking at your customers’ interactions with it.

If you give your customers the ability to rate the support they received or the help article you delivered to them, they actually may not choose to rate you, but you should still be able to see if they then needed to ask another question, or the same question in a different way.

Step 5: Refine and Extend

You should expect to continue to build on the list of questions, or ways that a customer can ask about a particular problem. Over time you will also find that the problems which come out the other end unsolved by the bot become your new most common problems. As I mentioned, not all customers will be comfortable chatting, or chatting with a bot, so multiple support options should be provided.

Where can I place my bot?

Chatbots can be added to your mobile app, web application or website, but also to your Facebook page, Slack and other chat tools that are being used by your business. Where you place your bot will help you select the tools you use to build it.

If you need help working through the process of creating a bot for your organisation, we would be very pleased to help.


Online Chat for Collaboration and Team Work

February 9th, 2018 by Ammu Nair

Despite threading and advanced searches added to newer versions of email clients, it is becoming increasingly difficult to keep track of email threads that are a mile deep, and segregate discussions based on projects.

The implementation of Slack at Contactpoint has helped us to solve this problem by providing a way to organise conversations into what are called “channels”. You can create ‘open channels’ to post company-wide announcements, which are available to all members of the company. More confidential or private messages that need to be exchanged between team members of a particular project can be carried out in ‘private channels’ – this means Slack allows us to create channels for different projects, topics or conversations between certain team members of our organisation. Direct messages offered by Slack are just quicker, shorter and better alternatives to email messages. We can get a quick message to each other without having to write on sticky notes, or craft a properly structured email.

Slack has also made it quite simple to search for that link to a great resource that someone in your team posted 3 weeks ago! Slack searches can be filtered – which allows you to search only in specific channels, or only in messages from a particular team member – and can be sorted.

File-sharing, one of the basic prerequisites of collaboration, is supported by Slack, which allows you to share all kinds of files in a hassle-free manner by drag-and-drop, or from Google Drive or Dropbox. The shared files can also be searched through, or starred as a favourite for later / quicker reference. Paste the link to your Google Drive spreadsheet or Dropbox file along with comments, and these files will be in sync and searchable instantly. That brings me to another great feature that makes Slack a one-stop platform for collaboration: Slack can be easily integrated with a large number of apps in the market – yet another way to have all your information in sync, as it allows centralised access to all your notifications. This has greatly reduced the effort and time we spend switching between different tools and apps while checking notifications. Apart from Slack’s huge collection of pre-built integrations, Slack is known to be very flexible in allowing you to build your own integrations quite easily, making it advantageous for large organisations as well. Imagine your IT team receiving notifications when a new Zendesk item is created, or your team receiving a notification when your daily web traffic exceeds a certain number of visitors. Pretty cool, isn’t it?

Integration is what catapults Slack into a category all its own. The solution enables you to centralize all your notifications, from sales to tech support, social media and more, into one searchable place where your team can discuss and take action as required. We have built our own integration with Slack so that online chat enquiries from Enudge are immediately pushed to a particular Slack channel, helping the appropriate customer service staff to see the enquiries without delay, and then see the solutions in the same place.
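
For a custom integration like that, the simplest mechanism Slack offers is an incoming webhook: you POST a small piece of JSON to a webhook URL and the message appears in the chosen channel. A rough sketch in Javascript (Node.js) – the webhook URL and the message are placeholders, not our actual Enudge integration:

```javascript
// Minimal sketch of posting a message to a Slack channel via an incoming webhook.
// The webhook URL below is a placeholder – Slack generates a real one when you
// add an incoming webhook to a channel.
const WEBHOOK_URL = 'https://hooks.slack.com/services/T0000/B0000/XXXXXXXX';

async function notifySlack(text) {
  // Node 18+ has fetch built in; older versions need a library such as node-fetch.
  const response = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  if (!response.ok) throw new Error(`Slack webhook failed: ${response.status}`);
}

// e.g. push a new online chat enquiry into the customer service channel
notifySlack('New chat enquiry from the website: "Do you ship to New Zealand?"')
  .catch(console.error);
```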

Slack also provides you with a companion – Slackbot – to remind you of a meeting, or even remind your colleague about lunch time! Whether you have a team member working remotely, or you are working alone, Slack serves as an excellent tool to increase your productivity, by allowing you to schedule reminders and meetings, or even set up video and screen-sharing calls.

With apps for Android, iOS and Windows, all the Contactpoint team are now well-connected, and our messages are with us, no matter where we are!

Slack is being employed for similar reasons across many large corporations as well as small businesses; for example, Oracle recently introduced Slack organisation-wide. Other tools are available, and in fact we used other online chat tools for many years prior to implementing Slack. However, with its ease of integration across platforms, and the many add-ons which can be easily installed, Slack has quickly provided many improvements for our organisation and our ways of working.


Controlling my App using Voice

October 15th, 2017 by Heather Maloney

Adding voice recognition to my mobile app
In order for the apps on your smartphone to be voice controlled, they need to be specifically programmed that way.

Some of the more common voice-enabled apps you are likely to find on your smartphone are:

  • Calendar – ask your smartphone the time of your next / first appointment, on a particular day, and it will tell you the answer and automatically show your calendar appointments for that day on screen
  • Phone – tell your smartphone to call person X, or send a text message to person Y, and it will take care of these tasks, prompting you for the details as required
  • Alarm – set an alarm to go off at a particular date and time
  • Search – ask your phone to search for a topic, and it will display a clickable list of search results

Voice recognition technologies have improved significantly over the last few years, providing numerous options with regard to voice enabling mobile apps, including:

  1. The Android operating system for wearables (e.g. smart watches), smart phones and tablets includes built-in voice actions for carrying out commonly used tasks such as writing a note. It also provides the ability for an app to declare its own “intents” which listen for voice activation once the user has launched the app. Finally, it includes methods for allowing the user to enter free-form text for processing by your app.
  2. Google Voice Interactions API – a code library provided by Google which allows an app to be triggered via the Google Now interface – that’s what you’re using when you say ‘Okay Google’ and then say a command.
  3. Apple devices (iPhones, iPads, the Apple Watch) are built on the iOS operating system and its derivatives. Native iOS apps are written in either Objective C or Swift (a more recent language). With the launch of iOS 10, Apple added a Speech framework allowing developers to more easily listen for voice commands and convert speech into text for use within apps.
  4. SiriKit was released in 2016, providing a toolkit for iOS developers to add voice interaction through Siri into their iOS 10 apps.

  5. Cross platform apps need to use 3rd party libraries to interface with the native speech recognition functions.

It’s important to know that the user’s speech is processed on Apple’s or Google’s servers and then returned to the mobile device, so some lag may be noticed, particularly when dealing with longer bursts of voice. It may also raise privacy considerations for your users.

3rd party APIs exist which are completely contained within the mobile device, meaning that the user doesn’t need an internet connection to use them, and the privacy issues are reduced. An example of such a 3rd party library is CMU Sphinx – Speech Recognition Toolkit. The downside of using such a library is that you can’t avail yourself of the amazingly accurate voice recognition the large players have developed over time, including for many different languages.

Obvious apps which provide the user with significant benefit from the use of voice control include:

  • An app which improves or assists a hands-on job e.g. chefs, surgeons, artists, hairdressers …
  • An app which is needed while a person is driving e.g. navigation, finding locations, dictating ideas on-the-go …
  • An app needed by a person with disability.
  • An app which involves the entry of lots of text.

We expect to see more and more support for voice in all sorts of applications in future. What would you like to be able to achieve through voice commands?


Can voice input be added to my web form?

October 13th, 2017 by Heather Maloney

Given the recent proliferation of ads about Google Home, it’s now common knowledge that you can easily talk to electronic devices and instruct them to do things such as search the web, play your favourite tune, give you the weather forecast, call a friend, or tell you the time of your first appointment on a particular day. Google Now is the technology that enables voice control of Google and Android devices, and Siri powers voice control on Apple devices. Windows 10 provided Cortana to do the same.

When you are using a smartphone to interact with a form on a web page, you can usually fill in the form using voice … how easy or hard that is depends on your device. On an iPhone (and an iPad), when you bring up the keyboard in a form, there’s an additional ‘microphone’ icon that you simply need to tap in order to speak your entry. If you are using an Android Samsung Galaxy phone, you can switch your entry from keyboard to voice by swiping down from the top of the screen, choosing Change Keyboard, and then choosing Google Voice … yes, that’s 3 steps :-( .

When it comes to using a PC or Mac, filling in a form usually relies on typing. Now that I am getting used to talking to electronic devices, I find myself looking for more ways that I can use my voice to control the device rather than having to type everything. Talking, even for me as a very fast touch-typist, is quicker than typing. Plus, speech control enables you to control your device when you need to be using it hands-free.

What about my web form?
In answer to the question posed by this blog article: yes! Voice input can be added to your web form even when you are entering text on a PC or a Mac. To demonstrate, we’ve added a very simple voice entry capability to the enquiry form on the Contactpoint home page. Please note: this example only works in the Chrome web browser, and of course you must have a microphone on your PC or Mac in order to speak to fill out the form. To use the voice input:

  1. click or press on the microphone icon beside a field
  2. click to Allow access to the microphone (you will only need to do this the first time)
  3. talk to complete the field!

As you are speaking you will see that there’s a red recording icon pulsing in the browser tab. When you stop talking, the recording will also stop, and then what you said will appear in the box.

From a programming point of view, there are several ways to implement voice input in a web form. The example on the Contactpoint home page uses a very simple method involving Javascript and webkitSpeechRecognition, an API available in Google Chrome, which gives the browser access to the microphone (after the user has specifically allowed it) and then handles voice input very nicely. Google’s team has spent many years refining speech recognition, and this API gives you quick and free access to their powerful functionality.
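
To give a rough idea of what that Javascript looks like (simplified, and with illustrative element ids rather than the actual Contactpoint form fields):

```javascript
// Simplified sketch: fill a text field by voice using Chrome's webkitSpeechRecognition.
// The element ids are illustrative only, not the real Contactpoint form.
const field = document.getElementById('enquiry-message');
const micButton = document.getElementById('enquiry-mic');

micButton.addEventListener('click', () => {
  const recognition = new webkitSpeechRecognition(); // Chrome-only, prefixed API
  recognition.lang = 'en-AU';
  recognition.interimResults = false;

  recognition.onresult = (event) => {
    // Append the best transcript of what was said to the field.
    field.value += event.results[0][0].transcript;
  };
  recognition.onerror = (event) => console.warn('Speech recognition error:', event.error);

  recognition.start(); // the browser asks for microphone permission the first time
});
```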

Other Javascript libraries have been developed to enable much more sophistication in the way you use voice to interact with a web form. Annyang is a great example: specific parts of your web form can have tailored voice interactions enabled so that whatever you say has context – for example, a drop-down list in a form can know about its allowed options and match the voice input with one or more of them. Due to the additional sophistication, there’s obviously more effort involved in using this library. Annyang also degrades gracefully in web browsers that don’t support speech recognition.
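
A hedged sketch of the Annyang approach – the commands and the drop-down helper are made up for illustration:

```javascript
// Illustrative Annyang commands – phrases on the left, handlers on the right.
// Assumes annyang has been loaded via its script tag; the commands and the
// selectDropdownOption helper are hypothetical.
if (window.annyang) {
  annyang.addCommands({
    // '*name' and ':state' capture what the visitor says and pass it to the handler.
    'my name is *name': (name) => { document.getElementById('name').value = name; },
    'choose state :state': (state) => { selectDropdownOption('state', state); }, // hypothetical helper
    'submit the form': () => { document.querySelector('form').submit(); },
  });
  annyang.start();
}
```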

If you would like to improve the usability of your web forms by enabling speech entry, feel free to get in touch!

Handy Hints for voice entry of text:
If you speak your text message without including punctuation, paragraphs and the like, it can be a lot harder for the recipient to understand your message. But have no fear, the following list will have your text messages reading just like you typed them!
“full stop” – if you pause and then say “full stop” Google Now and Siri will type in a ‘.’
“exclamation mark” – if you say “exclamation mark” Google Now and Siri will type in a ‘!’
“question mark”- if you say “question mark” Google Now and Siri will type in a ‘?’
“new line” – if you pause and say “new line” Google Now and Siri will move the cursor down to the next line.
“comma” – if you pause and say “comma” Google Now and Siri will type in a ‘,’


The Importance of User Testing

May 31st, 2017 by Kaveh Saket


With the imminent soft-launch of Activeperform, the software platform designed and built by Contact Point for the health and fitness industry, user testing is top of mind. Of course, it’s far too late to begin user testing at the launch of your software; user testing must start at the very first mock-up of your potential product. However, once you launch your app, user testing takes on a different form. Your app is now out in the wild and being used by real people to fulfil real world tasks. There’s nothing hypothetical about it.

User testing is vitally important because app users have become very fickle … it’s so much easier now to install and integrate a new app with your other systems, so if your users aren’t delighted, they will readily move on when the next app in your space is launched.

The purpose of user testing is to ensure that the user:

  1. Can carry out the task they need to do, quickly and easily,
  2. Gets exactly the result they expect, and
  3. Enjoys carrying out the task using your app.

User testing is not a once-off process. Every time you decide to add a feature to your product, you need to test again to ensure that the new feature has not damaged the flow of existing features, and of course that the new feature also meets the 3 objectives stated above. Feature improvements similarly require user testing.

The earlier user testing can be done, the better. Sometimes product owners (the person charged with directing the features of an app) can believe that their customers want a certain feature, only to find that adding that feature, after spending considerable time and money to design and build it, makes little or no difference to the success of the app. An example of a user interface change that actually reduced the performance of an app is the introduction of infinite scrolling into Etsy in 2012. With the benefit of hindsight, the product owner has since admitted that they could have tested their hypothesis – that introducing infinite scrolling would result in more purchases – by making smaller, quicker changes to the app and measuring.

There are now many tools available to assist with user testing, making it much more accessible without large amounts of resources (people and money). For example, within Activeperform we will be utilising Flurry to track how our users utilise the apps.

If you would like to help us out by testing the new Activeperform app, please get in touch!


Why not offshore my app development project?

April 17th, 2017 by Heather Maloney

Okay, you’re going to think I’m biased – I own a web & mobile app development company based in Melbourne, Australia, so of course I want to discourage organisations from offshoring the development of their apps.

However, the fact of the matter is that I’ve heard countless war stories of offshored developments that have gone wrong … either the whole development has been thrown in the bin due to a poor quality result, or a project that was meant to be delivered by a particular date for a specific cost has escalated in both time and cost. My organisation has been the beneficiary of such malfunctioning projects, but not before the client has been through months of pain and disappointment prior to arriving at my door.

Apart from the issues of getting what you actually want, in an appropriate time, and for the low cost you expect from offshoring, there’s a third concern – security of your intellectual property. How do you really know that your solution isn’t being re-used for other foreign organisations to achieve the same or similar outcomes in their local market or the global market? If you needed to pursue a competitor for theft of your IP, doing that in a foreign country is going to be exponentially more difficult than locally. The risk of reputational damage to a local provider also provides you with additional leverage if an issue arises.

So why do off-shored projects so often go wrong? Anecdotally it would seem that the following issues are the primary reasons:

  1. Communication – first and foremost, effectively communicating your requirements is best done with the person/s carrying out, or at least overseeing, the development in the same room. Offshore developers try to overcome this with business analysts in Australia preparing vast documents on the required solution, adding time and cost to the project. Because the analysts are primarily in Australia, passing on of the information usually relies on the developers reading the vast amount of output and then following it … again inefficient, and developers aren’t known for wanting to read long documents before they start coding.
    Offshore developments usually require additional management in order to manage the offshore teams and co-ordinate communication, reducing the benefit of the lower developer hourly rates.
    Agile methodologies require close proximity of the developers and the clients to be successful.
  2. Time Zone – the effect of working in different time zones almost always adds to the project timeline. Someone has to wait until the start or the end of the day to communicate with the team, and when one team is working, the other isn’t, making asking a quick question in order to keep progressing down the right path either very difficult, or adverse for the work-life balance of team members.
  3. Cultural Differences – written English is heavily subject to interpretation. Cultural differences can increase the likelihood of incorrect interpretation. Trying to achieve a solution that feels like it was built for the Australian marketplace is also less likely from an offshore team, which is why design (UI & creative) is rarely carried out offshore.

From time to time I am asked to manage an offshore team in order for a client to get the benefit of lower cost developers. I always politely decline. We are able to develop great solutions, in a timely and cost effective manner because we have our developers in the same room, can have efficient discussions and decision-making about the developments if a difficulty arises, and because our clients are also close to the developers when the need arises. We also bring to our clients many years of experience, industry knowledge and of course cultural understanding.

There are times when you can’t get the resources you need, when you need them, locally – such that offshoring is the best option. But perhaps you should instead consider breaking down your development into smaller chunks so that a smaller, local team can meet your requirements. Smaller developments of shorter duration are also more likely to be successful, cost effective, and to deliver value to your customers and organisation more rapidly.

If you require a web or mobile application to be developed, I’d love to discuss the potential opportunity with you, so don’t hesitate to get in touch.


Implications of the new Privacy Act on Email and SMS Marketing

February 13th, 2017 by Heather Maloney

Okay … this may seem a little dry, but hang in there; we will get to the nitty gritty as quickly as possible.

Email and SMS marketing in Australia is not only impacted by the Australian Spam Act 2003, but also the Privacy Act 1988 (as amended by the Privacy Amendment (Enhancing Privacy Protection) Act 2012). The Privacy Amendment Act came into force on the 12 March, 2014 and created a single set of Australian Privacy Principles (APPs) applying to both Australian Government agencies and the private sector, with some special situations for the medical profession. Whilst the Privacy Act does not apply to small businesses (those with an annual revenue of less than $3,000,000), it is best practice to adhere to the legislation regardless of your size.

As I see it, the most important change in the Privacy Act was more stringent disclosure about where your data can be stored, and ensuring that government agencies do not store their data offshore except in some very specific situations. NB: if you do provide your data to offshore organisations, you are responsible for ensuring that they do not breach the Australian privacy principles.

When undertaking email and SMS marketing, in order to comply with the Privacy legislation we recommend that you:

  • Use eNudge, because your data is stored on Australian servers, not off-shore and because eNudge makes it easy for people to un-subscribe (this requirement is now included in both the Privacy legislation as well as the Australian Spam Act).
  • Only store in eNudge the information that you absolutely require in order to be able to personalise your messages and analyse your campaign results.
  • Do not store or personalise on government identifiers e.g. tax file numbers and the like.
  • Document and follow your privacy policy, and have it easily accessible via your website.
  • Include a link to your privacy policy within your email message – your email footer is the best place for this.

What should be in your privacy policy?

  1. The kinds of personal information you collect & keep.
  2. How you hold it e.g. with eNudge you might say that your information is stored in a secure online database, within Australian servers, and only accessible by appropriate employees.
  3. For what purpose you collect, store, use and disclose the personal information, and most importantly, identifying where the disclosure may take place overseas including identifying the country.
  4. How a person can view & request correction of the personal information you are storing about them.

Design-centric Application Development

December 6th, 2016 by Kaveh Saket


A design-centric approach to application development (that’s web applications and mobile applications – is there any other sort these days??) differs from the customer-centric and technology-centric approaches which have been more common in recent years. A design-centric approach focuses primarily on ensuring that the user experience is perfect – or perhaps more accurately “nearly perfect”.

There is always room for improvement – another revision, a new update – and users want continual improvement to make their life easier. User experience has been made king because research shows that organisations which focus on design significantly outperform those who don’t.

In a customer-centric approach the customer is asked what they want, and then the designer will set about delivering to their requirements. In a technology-driven approach, the technologists build the best algorithm or new solution to solve a particular problem and then look for a customer who values the technical solution. However, following a design-centric approach the designer will research the best current solutions in the problem landscape, put themselves in the customer’s shoes, and determine to provide the simplest way to achieve the desired goals. Gathering feedback on the design from a variety of potential users of different levels of expertise follows, and leads to iterative refinement until the first version is achieved. The developers – the people who turn the design into reality – are then directed by the design team to ensure that the intended outcome is achieved.

The Uber mobile app is a great example of design-centric application development, which is a significant factor in its amazing success. Anyone who has used the Uber app will agree – from being able to see where the approaching vehicle is on the map, along with the number of minutes until it arrives (continuously updated until arrival), to seeing a photo of the driver and vehicle, one press to call the driver, and immediate payment upon arriving at the destination without needing to hand over a credit card. I could go on and on about the ease with which you can hail an Uber, and receive a brilliant experience of private transport…

One of the challenges of current application design is dealing with content. Having little visible content is a very quick way to send users heading for the hills … imagine Instagram with no photos when you launch it, or Twitter with no tweets to read, or Facebook with no posts. However, masses of content with no simple way to navigate it, can be just as off-putting. Requiring a user to search has been the standard approach for many years. Filtering and other ways of helping the visitor to easily drill down to the content they are most interested in, have developed more recently.

At Contact Point we have been embracing SCRUM methodology across our organisation, which also readily supports a design-centric approach. Starting with our client’s goals and objectives within their particular competitive landscape, and their customers’ wants and needs, we will:

  • undertake research into common solutions to the design problem at hand,
  • brainstorm other potential approaches with trusted and experienced colleagues,
  • wire frame the potential solution, getting feedback along the way,
  • apply creative design to the wire framed solution,
  • carry out user testing of the design, iterating as necessary to refine the solution, and
  • finally develop the solution, taking care to ensure that the essence of the planned user interaction is achieved

The above steps will be undertaken for each logical entity that collectively forms the solution, while ensuring consistency throughout the solution as appropriate. After the development of each component, real user testing with people across a broad range of skill levels will then lead to further refinement. Programmatic A/B testing will allow two or more potential solutions to be tested head to head, to ensure the best solution evolves.
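
As a small illustration of what “programmatic A/B testing” can look like in Javascript – deterministic assignment of each visitor to a variant, with the actual measurement left to your analytics tool (the experiment name and tracking call are placeholders):

```javascript
// Minimal A/B assignment sketch: hash a stable user id so each visitor
// always sees the same variant. Variant names and trackEvent are placeholders.
function assignVariant(userId, variants = ['A', 'B']) {
  let hash = 0;
  for (const ch of String(userId)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple, stable string hash
  }
  return variants[hash % variants.length];
}

const variant = assignVariant('user-12345');
if (variant === 'B') {
  // render the alternative design being tested
}
// Record which variant was shown, so conversions can be compared later, e.g.:
// trackEvent('experiment_viewed', { experiment: 'checkout-layout', variant });
```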

The successful execution of a design-centric approach involves many steps, and requires an appetite for iteration well beyond the launch of a new solution. However, the results are impressive, and for all but the simplest of tasks, this is likely the only way to achieve raving fans of your solution. Design-centric doesn’t mean that the customer is ignored; in fact the opposite is true, with a greater focus on experience combined with needs and wants. Neither is technology ignored – utilising the most up-to-date and elegant technology is also paramount to ensuring a great user experience.

What is the best user interface you have experienced from a web or mobile application?

 

