Internet input and output today are dominated by typing on and reading our smartphones. But two new devices for which voice control is central are starting to change that. Are you ready for the wrist and personal-butler revolution, where your voice replaces your fingers for control?
These two new device types are exciting to explore. A smartwatch's small screen makes your fingers seem really fat, so voice control is a vital input mechanism. And the Amazon Echo represents a new type of "digital butler," where the primary I/O mechanism is voice, even though you can also access its services through a smartphone app.
Translating between text and voice started in the late 1990s, and the technology has now matured to a point where it is stable and very useful. Modern systems can understand your English even if you speak with an accent. By eliminating the need to pull out a device, the space of service opportunities grows. Using your voice to control services is a more natural form of communication than typing on a screen, and it requires zero training, since the voice interpreter can understand normal speech.
Application space expansion
The typical American picks up a phone 46 times per day. But the need to pick up the device before using a service is a restricting factor for many micro services. Voice control can circumvent these limitations. You can give commands in the car without taking attention away from traffic. You can give commands while you cook or eat at home. You can unlock doors while your hands are full. And you can control the growing number of devices in your home, reducing the smartphone's role as the sole control panel.
In essence, voice enables a variety of new micro services whose use today is restricted by the need for initiation and control from a smartphone. Controlling media consumption, adding items to a to-do list and getting answers to more questions are services that leverage these new control mechanisms from the start.
Voice control and artificial intelligence go hand in hand in these new applications. The first exciting aspect is the use of an "activation word" to separate voice control commands from general conversation. By triggering micro services with a code word – like "Alexa" – the device can pick the relevant control commands out of background speech.
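As a rough illustration of the activation-word idea, the sketch below gates commands on a wake word in an already-transcribed utterance. It is a minimal, hypothetical example (the function name, wake word handling and transcript input are assumptions, not any vendor's actual API); real devices do wake-word spotting on the audio signal itself, before transcription.

```python
# Minimal sketch of activation-word ("wake word") gating.
# Assumes a speech recognizer has already produced a text transcript;
# all names here are hypothetical, for illustration only.

WAKE_WORD = "alexa"

def extract_command(transcript: str, wake_word: str = WAKE_WORD):
    """Return the command following the wake word, or None if absent."""
    words = transcript.lower().split()
    if wake_word in words:
        idx = words.index(wake_word)
        command = " ".join(words[idx + 1:])
        # A wake word with nothing after it carries no command.
        return command or None
    # No wake word: treat as ordinary conversation and ignore it.
    return None
```

For example, "Alexa play some music" yields the command "play some music", while "we talked about the weather" is ignored entirely.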
Cloud-based intelligence at the back end is raising answer relevance and applicability to a new level. The distinction between voice-controlled web search and AI-supported answers is emerging as an important differentiator.
Predictions for the future
Voice control has improved in accuracy and relevance, and the following developments are likely:
- Voice control will become an important complement to screen typing for service control.
- Voice control will be most relevant for micro services where many small daily commands can be simplified.
- The back-end AI development is crucial to increasing the hit rate in finding a relevant answer to your request.
- The control of many IoT applications is likely to adopt voice control first and foremost.
- Our trust in biometric technology will expand from fingerprints to voiceprints as a basis for secure authentication.
This post was originally published at the Ericsson Networked Society blog.