Google CEO Sundar Pichai believes that we are moving to an “AI-first” world. In this world, we will be interacting with personal digital assistants on a range of platforms, including through Google’s new intelligent speaker “Google Home” and other Google-powered devices.
Google’s latest personal digital assistant, Google Assistant, joins a group of similar technologies from Apple, Amazon and Microsoft. Apple’s Siri has been around for nearly five years, and in that time it has learned to do more things on a wider range of platforms. Siri is now available on all of Apple’s platforms.
After 5 years, is Siri any smarter?
Unfortunately, Siri hasn’t actually got that much smarter. Apple has concentrated on having Siri carry out relatively simple tasks that require simple commands. Its self-learning abilities have been, at best, limited.
Google’s new assistant has arguably improved on Siri in its ability to handle queries that draw on search generally and on services such as Maps. It is also better than Siri at dealing with conversational exchanges in which the context of one statement is carried through to the next.
To take a very simple example, asking Google Assistant “What will the temperature be tomorrow?” followed by “Will it rain?” returns the temperature forecast for tomorrow in the local area followed by a statement about the likelihood of rain tomorrow. Siri gets this wrong and answers the question “Will it rain?” with the likelihood of it raining today.
Neither Siri nor Google Assistant can deal with the type of complex questions that Samsung’s recently acquired Viv digital assistant can manage. Viv can answer questions of the complexity of “Will it be warmer than 70º near the Golden Gate Bridge after 5 PM the day after tomorrow?”. Neither Siri nor Google Assistant understands the concept of a relative temperature, let alone one at a time and place in the future.
We still aren’t talking to Siri
It turns out, however, that aside from the relative smartness, or otherwise, of AI assistants, most people still haven’t got used to communicating with their phone using voice. Although a recent survey found that the majority of iPhone and Android users had tried Siri and “OK Google” (Google’s precursor to Google Assistant), 70% of iPhone users and 62% of Android users never, or very rarely, used these features. Those who did admit to using assistants did so most often at home or in the car.
Part of the reluctance to use voice is that it is a new way of using technology that people need to adapt to. This will eventually happen, especially as the technology improves and becomes more reliable at understanding spoken commands. At the same time, digital assistants like Siri, Amazon’s Alexa and Google Assistant are able to do more as they become integrated with a wider range of apps and are connected to a greater range of devices. Some things can be done with a short spoken command more easily than with a mouse and keyboard or by opening apps. Asking Siri on the Mac to “play some opera” automatically opens Apple Music, selects some opera from the music collection and plays it, with the added benefit that it has decided what music to play.
Build your own digital assistant
If it turns out that Siri and Google Assistant aren’t doing the things you want them to do, you can always write, run and train your own digital assistant. For a very sophisticated environment requiring development skills, there is the open source digital assistant Lucida from the University of Michigan’s Clarity Lab. This software combines components that implement the different steps involved in turning a spoken command into a question the software can understand and answer. The digital assistant can be “trained” with appropriate responses like “The capital of Italy is Rome”, or can be provided with a download of Wikipedia from which it will try to find the answer to a question.
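The “train with facts, then answer questions” idea described above can be sketched in a few lines. This is a deliberately toy illustration of the principle, not Lucida’s actual API: facts are stored as plain statements, and a question is answered by returning the stored fact that shares the most words with it.

```python
# Toy sketch of a fact-trained question answerer (illustrative only,
# not how Lucida actually works). Facts are stored as statements.
facts = [
    "The capital of Italy is Rome",
    "The capital of France is Paris",
]

def answer(question: str) -> str:
    # Compare the question's words against each stored fact and
    # return the fact with the largest word overlap.
    q_words = set(question.lower().strip("?").split())
    return max(facts, key=lambda f: len(q_words & set(f.lower().split())))

print(answer("What is the capital of Italy?"))
# The capital of Italy is Rome
```

Real systems replace this crude word overlap with speech recognition, parsing and large-scale information retrieval, but the training step (feeding in statements the assistant can later draw on) is conceptually the same.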
A simpler way to build at least some of the functionality of Siri and Google Assistant is to create a “chatbot” online using a site like Pandorabots. Pandorabots uses a simple markup language, AIML (Artificial Intelligence Markup Language), to teach the chatbot how to respond to specific questions. This is the same technology used to create Mitsuku, a chatbot that has won the Loebner Prize twice by being the most convincingly human-sounding chatbot questioned by the competition’s judges.
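The core idea in AIML is a list of pattern/template pairs: a pattern with wildcards is matched against the user’s input, and the matching template (possibly reusing what the wildcard captured) becomes the reply. The following sketch mimics that mechanism in Python; the rules and the `<star/>` placeholder follow AIML conventions, but this is an illustration, not the Pandorabots service itself.

```python
# Minimal sketch of AIML-style pattern/template matching (illustrative,
# not the Pandorabots API). "*" matches any words; "<star/>" in a
# template is replaced with whatever the wildcard captured.
import re

RULES = [
    ("WHAT IS YOUR NAME", "My name is Demobot."),
    ("MY NAME IS *", "Nice to meet you, <star/>!"),
    ("*", "I'm not sure how to respond to that."),  # catch-all fallback
]

def respond(user_input: str) -> str:
    # AIML normalises input to upper case with punctuation stripped.
    text = user_input.upper().strip(" ?.!")
    for pattern, template in RULES:
        # Turn the pattern into a regex: "*" becomes a capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            star = match.group(1).title() if match.groups() else ""
            return template.replace("<star/>", star)
    return ""

print(respond("My name is Alice"))
# Nice to meet you, Alice!
```

Writing even a handful of such rules makes it clear how quickly the combinations of phrasings a real assistant must handle explode, which is part of the point the next paragraph makes.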
If experimenting with software like this does nothing else, it highlights the complexity and challenges of creating truly useful digital assistants, and shows that there is still a long way to go before they become an everyday way of using our smart devices.
Authors: David Glance, Director of UWA Centre for Software Practice, University of Western Australia