(Shameless plug: interested in the intersection of GPS, GIS, wearables, and data? You should attend the MapCamp hackathon at ISITE Design — January 10-12, 2014. Register here!)
I’ve been Glassed for the better part of a month now, and I must say I feel fortunate to have an advance look at what the future of interfaces will no doubt look like. For all the slagging it takes in the press and blog-land for being a needless distraction and encourager of Ugly Tech Behavior, I’ve found Glass to be remarkably unobtrusive and subtle. If anything, Google may have erred too far on the side of demureness, sacrificing some utility in order to keep the interface from becoming a persistent annoyance. Oddly, the mere act of wearing “glasses” constantly has been the most jarring aspect of the experience for me — I’m lucky to be blessed with perfect vision, and I live in 9-months-overcast Portland, where sunglasses are rarely required. So it feels a little weird to have something on my face all the time. But I’m getting used to it.
Hardware awkwardness aside, the software experience in Glass is quite subtle and pleasant. The interface relies heavily on voice commands, which feels socially awkward in public, but I expect those commands to be replaced by gesture-based controls over time. I’ve been in a number of contexts recently where voice commands were either awkward (public) or impractical (while cycling, when Glass had a hard time hearing me over the wind noise), and this really diminishes the utility of the device. More discreet and reliable gesture-based controls would help close this considerable experience gap.
When unhindered by UX awkwardness, I’ve found the Glass experience to be absolutely magical. One of the biggest drawbacks of the mobile phone as the center of the personal-data universe is that, for a typical plugged-in 21st-century citizen, the sheer mass of alerts, updates, and bits of communication can be daunting, prompting exactly the sort of antisocial, constant-phone-checking behavior that, ironically, many commenters fear Glass will amplify.
However, the Glass experience does a great job of staying out of the way, only popping up alerts periodically, and only, as far as I’ve seen, ones that are relevant for one reason or another. The screen also sits above the field of vision rather than within it, which keeps it unobtrusive until you’re ready to glance up. This does make looking at anything in particular for more than a few seconds a little straining, but the best use of this kind of technology isn’t really the immersive, attention-absorbing experience you’d get from a game or long-form text — it’s designed for small bites of information, not whole meals.
And this, I think, is where Glass (and the many, many competitive follow-on products sure to come to market soon) will shine — at the intersection of personal relevance, tiny bits of useful data, and location-awareness. Already, I’ve seen Glass pop up historical points of interest as I pass by them — which is interesting from a casual-observer-of-geographic-history perspective, but pales in comparison to the utility that will be unleashed when this kind of serendipitous, location-based info-popup is tuned and shaped by our interests, needs, social connections, and, most importantly, our tolerance for being interrupted as we transit our environment. Imagine shoppers armed with real-time sale information, music lovers alerted to nearby last-minute ticket releases, and car-share users able to scan the horizon for available vehicles. Eventually, this really will feel like having superpowers.
And eventually, it will just feel normal, right? I’ve seen the future, and I say bring it on.