Today was the Wimbledon tennis final, and though we didn’t have tickets to watch it live, we wanted to catch most of it on TV.
We were having some family over, along with some new people who would have a go with Glass. It’s been really interesting to see how different people try to work the device, what they think it can do versus what it really can do, and whether they end up disappointed or even surprised.
I decided it wasn’t worth testing while we prepped the food and snacks, since I already know the hands-free operation doesn’t run smoothly. Afterwards, I used Glass primarily to see our friends’ reactions and to observe how they behave around it.
This was a simple and fast test, since we had entertaining to do anyway and I wanted to watch the final without the distraction of the internet.
This got me thinking that it’s really a device that ends up shelved much of the time. Success in life is all about focus, and with something like Glass on your head, you’d never really be able to focus intently on what you’re doing, since there will always be an opportunity for Glass to disrupt your thinking.
If this is the case, then those who understand focus will succeed more, as they will purposefully take the device off, in much the same way they would purposefully silence or switch off their phone when doing important tasks that need concentration. Those who aren’t focused will now have yet another device that helps them defocus even more. This is a device that might well widen the gap.
So the new brother-in-law and his girlfriend were, of course, curious about the device, and they both tried it on. The first thing I’ve found in both (indeed, all) cases of people trying it out is that they expect the voice interface to be like Siri; that is, they think they can use everyday language and Glass will understand what they are saying.
Telling them that only a set number of command phrases work is like stepping back into the 80s, when speech recognition was a new thing on Windows PCs and people started playing with these fixed commands. Unfortunately, we all know what happened with that: no one used it.
I feel like Glass has taken me back to the 80s. Its command recognition is not sophisticated enough for the average consumer, and I don’t know how they are going to fix this for a consumer release. Apparently, natural language understanding would put too much strain on the device, such that its battery just wouldn’t last.
I see this as a vital part of the UX puzzle; without it, Glass will cause considerable pain for consumers and for Google itself.
They took a few more photos and videos but quickly lost interest. After all, besides looking at it, there really wasn’t much to demonstrate. Yet again, another example of how Glass is simply not useful enough right now.