Perceptual computing, or the ability to interact with technology through motion-based gestures, voice recognition or facial tracking, has found its way into a multitude of devices over the last few years.
Samsung’s Galaxy series of smartphones makes use of technology that tracks users’ eyes while reading ebooks, while Sony and Microsoft have incorporated motion-based gestures into their latest gaming consoles. However, during last week’s Intel CEO Summit in San Diego, California, the question was raised: is perceptual computing at an acceptable standard, and what, if anything, needs to change in the industry to drive it forward?
“There wasn’t any innovation in the PC industry. We have to ask ourselves what is the key technology to look at, that will change the way we interact. In terms of Intel, we wanted to capture 3D images, and speech recognition is very important as communication between humans is all about emotion. So we wanted to recreate emotion for the PC,” said Intel’s Perceptual Computing Managing Director Mark Yahiro of the company’s current efforts.
On the future of perceptual computing at Intel, Yahiro added that wearables will emerge as a key technology.
“We will be integrating 3D cameras to capture the reality of everyday scenarios. Wearables and their sensors across the board are being changed (to reflect) the way that you react with the world – which will drive the technology forward.”
However, as technology becomes more powerful, a number of challenges will arise. “The processing power is now at a turning point. That was a challenge in the beginning, and we thought of how we take advantage of it. If you look at what is required, it is difficult to process that. Battery life is a big issue on smaller devices. For wearables, as computing power goes up, battery life goes down,” Yahiro explained.
Dan Miller, Senior Analyst and Founder of Opus Research, added that things are definitely improving, but a lot of work still needs to be done.
“People are expecting the devices to be tools for living. We have to be looking at better speech recognition, and applications in the cloud, as speech recognition isn’t as accurate as it can be. As the technology grows, we will have more to learn from on how to do things better. But it’s going to be a long process to get there. We have worked with companies in voice, and it still has a long way to go, but strides are definitely being made.”
Barak Berkowitz, former CEO of True Knowledge, shared the same sentiments, adding that the technology is nowhere near where it could or should be.
“You have to look at the possibility of a computer to hear, see, and talk. There is a lot of innovation left to happen, there has been a lot of innovation in gesture-based motions. But the heart of the PC is what the computer knows. There needs to be a lot of innovation in what the PC knows.”
Berkowitz added that users will be looking more towards Virtual Assistants to organise their lives.
“Building a Virtual Assistant system for a PC is taxing, but it is starting to be seamless. Models for understanding the world as we see it still need to evolve – but you will see that type of integration. It’s actually an incredibly complex environment to build, but it will come back to the user as a seamless experience.”
Turning to Apple’s Virtual Assistant Siri, Berkowitz added that users will eventually tire of Siri’s jokes when asking for directions or help, and will want a system that works flawlessly without the gimmicks.
“Over time people want an assistant that will be doing the job and not making human mistakes. I think it will evolve, but people won’t need the jokes from Siri as they will know that they are speaking to a virtual assistant, and they want the facts without being reminded that it is only a virtual experience. It will be interesting to see how it evolves.”
Yahiro concluded that Intel is actively helping to develop a better system for Virtual Assistants. “Intel is helping to develop that technology. One day users will be able to point a finger at something and simply ask the assistant to identify what they are pointing at.”
Charlie Fripp – Consumer Tech editor