Newly Glassed: Wearables, GPS, and Ambient Data

(Shameless plug: interested in the intersection of GPS, GIS, wearables, and data? You should attend the MapCamp hackathon at ISITE Design — January 10-12, 2014. Register here!)

I’ve been Glassed for the better part of a month now, and I must say I feel fortunate to have an advance look at what the future of interface will no doubt look like. For all the slagging it takes in the press and blog-land for being a needless distraction and encourager of Ugly Tech Behavior, I’ve found Glass to be remarkably unobtrusive and subtle. If anything, Google may have erred too far on the side of demureness, sacrificing some utility in order to keep the interface from being a persistent annoyance. Oddly, the mere act of wearing “glasses” constantly has been the most jarring aspect of the experience for me — I’m lucky to be blessed with perfect vision, and live in 9-months-overcast Portland, where sunglasses are rarely required. So, it feels a little weird to have something on my face all the time. But I’m getting used to it.

Hardware awkwardness aside, the software experience in Glass is quite subtle and pleasant. The interface relies heavily on voice commands, which feels socially awkward in public, but I expect those commands to be replaced by gesture-based controls over time. I’ve been in a number of contexts recently where voice commands were either awkward (public) or impractical (while cycling, when Glass had a hard time hearing me over the wind noise), and this really diminishes the utility of the device. More discreet and reliable gesture-based controls would help close this considerable experience gap.

When unhindered by UX awkwardness, I’ve found the Glass experience to be absolutely magical. One of the biggest drawbacks to the mobile phone as center of the personal-data universe is that, for a typical plugged-in 21st century citizen, the sheer mass of alerts, updates, and bits of communication can be daunting, and can prompt the sort of antisocial, constant-phone-checking behavior that, ironically, many commenters seem wary of experiencing with Glass.

However, the Glass experience does a great job of staying out of the way, only popping up alerts periodically, and only, as far as I’ve seen, ones that are relevant for one reason or another. The screen also sits above the field of vision rather than in it, which keeps it unobtrusive until you’re ready to glance up. This does make looking at anything in particular for more than a few seconds a little straining, but the best use of this kind of technology isn’t really the kind of immersive, attention-absorbing experience you’d get from a game or long-form text — it’s really designed for small bites of information, not whole meals.

And this, I think, is where Glass (and the many, many competitive follow-on products sure to come to market soon) will shine — in the intersection between personal relevance, tiny bits of useful data, and location-awareness. Already, I’ve seen Glass pop up historical points of interest periodically as I pass by them — which is interesting from a casual-observer-of-geographic-history perspective, but pales in comparison to the utility that will be unleashed when this kind of serendipitous, location-based info-popup is tuned and shaped by our interests, needs, social connections, and most importantly, our tolerance for being interrupted as we transit our environment. Imagine shoppers armed with real-time sale information, music lovers alerted to nearby last-minute ticket releases, and car-share users able to scan the horizon for available vehicles. Eventually, this really will feel like having superpowers.
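(For the technically curious, here’s a rough sketch, in Python, of what that kind of interruption filtering might look like under the hood. Everything in it, from the field names to the weights and the scoring, is invented purely for illustration; it has nothing to do with how Glass actually works.)

    from dataclasses import dataclass

    @dataclass
    class Alert:
        topic: str          # e.g. "sale", "concert", "car-share"
        distance_m: float   # distance to the point of interest, in meters
        from_contact: bool  # tied to someone in the user's social graph

    def should_interrupt(alert, interests, interruption_tolerance):
        """Decide whether an ambient alert is worth surfacing right now.

        interests maps topics to weights in [0, 1]; interruption_tolerance is
        the user's current openness to being interrupted, also in [0, 1].
        """
        score = interests.get(alert.topic, 0.0)
        if alert.from_contact:
            score += 0.3                                   # social ties boost relevance
        score *= max(0.0, 1.0 - alert.distance_m / 500.0)  # relevance decays with distance
        return score >= (1.0 - interruption_tolerance)     # higher tolerance, lower bar

    # A music lover, fairly open to interruption, near a last-minute ticket release:
    interests = {"concert": 0.8, "sale": 0.2}
    alert = Alert(topic="concert", distance_m=120.0, from_contact=False)
    print(should_interrupt(alert, interests, interruption_tolerance=0.6))  # True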

And eventually, it will just feel normal, right? I’ve seen the future, and I say bring it on.

 

Branching out into Palm WebOS development

Among the many inconveniences that plague those of us cursed to be generalists — Shiny Object Syndrome, Incomplete Project Disease — perhaps the most troubling (or at least the most costly) is the sheer amount of gear necessary to delve into all of the areas that interest us. As a parent, I’ve all but left behind the halcyon days of multiple guitars, bicycles, and pairs of skis — one of each, two tops, will have to do for now.

But as a developer, particularly a developer of mobile applications, a growing collection of smaller-than-a-computer internet-connected devices is a much more justifiable luxury. Having spent a good amount of time over the last couple of years learning to write iPhone apps, I had more or less ignored other platforms until I attended Joshua Marinacci’s excellent presentation on Palm’s WebOS at July’s Mobile Portland meetup.

Since their acquisition by HP, Palm’s developer stock has understandably risen, as we can apparently look forward to seeing WebOS grace devices of a variety of shapes and sizes in the not-too-distant future (Josh was careful not to spill any beans, but made some coy allusions to forthcoming tablet-like products from HP). So, WebOS has gone, in my book, from a niche player to a platform that deserves exploration.

I was impressed with my first real look at WebOS, but also, in particular, with the relatively free rein Palm has given to developers (compared to the somewhat more restrictive and ceremony-wrapped Apple development and deployment process). Enough so that I’ve decided to take the plunge and try writing some WebOS apps. Apparently the learning curve isn’t nearly as steep as it is with iOS, so we’ll see how quickly that translates into results of which I can be proud.

In the meantime, the gearhead in me rejoices that a new platform means an opportunity for a new toy… er… test device. I picked up a used Palm Pixi Plus, their entry-level smartphone, on eBay, and have been putting it through its paces. Not bad so far. The UI is pretty slick — and if I weren’t so used to and enamored of iOS, I could certainly see using it day-to-day.

Bonus: I’ll offer one tip to those of you who happen to use an iPhone as your main device, but want the ability to hop your SIM around from phone to phone in order to test your apps on different devices without having to pay for more than one mobile plan. This was considerably easier before Apple started using micro-SIMs in the iPhone 4 — you could simply pop the SIM out of your iPhone, and into any other device you wanted to use to test your apps. With the form factor change, this is more difficult, but not impossible.

Pick up one of these: http://microsim-shop.com/ — about six bucks (depending on the exchange rate with the Euro on any given day), and it works like a dream. Just pop your micro-SIM into the tray, and it’ll fit into a phone that takes a standard-sized SIM. It works fine in the Palm. I’ll let you know how it works in an Android phone once I get there. I don’t know how AT&T feels about the practice of SIM-hopping, but it seems to work, so there you go.

And now, to write some Palm apps… gotta come up with something more user-friendly than the earth-destroying “Death Ray” app that Josh demo’d at the Palm event…

Ninety Percent Perspiration

Disclaimer: As someone who makes his living implementing technical systems, I’m naturally biased towards the “implementation” side of the idea/implementation equation. Notwithstanding the foregoing, I also spend a fair amount of professional time ideating, and assure my readers (both of you) that I am sympathetic to both sides.

In the course of my professional life, I am presented with a great many richly-detailed and enthusiastically-presented ideas (but of course — who on earth can help but think that their child is the cutest?). Sometimes, the presentation of these ideas is preceded by the traditional Kabuki of the non-disclosure agreement (NDAs are a topic for another day… don’t get me started or I might violate one by accident…), sometimes not. Regardless of relative secrecy, the idea-presenters typically have one thing in common — they almost always think that their brainchild is unique and special, while simultaneously holding the belief (frequently without benefit or burden of the technical training necessary to make that determination) that the details of its heretofore imaginary implementation will amount to a list of relatively simple tasks to be executed… by someone…

In short: The idea is unique and special. The implementation, however, should be “easy”.

(I have trained myself, in the interest of decorum, to refrain from jumping in at this point in these conversations and inquiring as to why the parent of the brainchild in question has yet to implement it themselves.)

This misconception can often lead to a concomitant and proportional misconception as to the appropriate relative weighting of interest in a proposed enterprise, but that, like NDAs, is a topic for another day.

The topic for today, rather, is a pair of related rules that I would like to propose:

First, the “Math is hard” rule:

The unique specialness of a given idea varies directly with the difficulty of its implementation.

and second, the “Underpants Gnomes” rule:

The value of an idea varies directly with the amount of its implementation that has been completed.

(There is a third rule, probably more apropos of a post on NDAs — the “infinite monkeys” rule — which states that any idea good enough that you’ve started trying to implement it is also being worked on by at least two other people you’ve never met, but I’ll leave that one for another day as well).

To recapitulate my disclaimer, and armor myself against the pitchfork-toting hordes quick to label me a techie snob: I am not dismissing the concept of the value of an idea, and certainly not the value of inspiration. I’m simply applying a sensible discount to the value of a given idea, based on a couple of measurable factors: how hard is this to implement? And how much have you already built? If the answers to these questions are “trivial” and “nothing”, I contend that the value of the idea naturally approaches zero.

This is not a bad thing.

Unless it’s your idea, and you’re trying to trade it for an equity stake in something. Then it might be bad for you.

But it’s still good for the commons. As long as you don’t over-value it or hold on to it too tightly.

This was made clear and plain to me in a blog post that made me smile: participants in a recent startup weekend event came up with 999 business ideas (naturally, of varying value) and unceremoniously loosed them on the world, free for the taking, to be implemented as the reader sees fit. Or not.

Awesome. I wish I had time to do some of these.

And there, my friends, is the rub — the single most precious non-renewable resource in the universe is time. Execution, not inspiration, is the rate-limiting step, in nearly all cases.

In that vein, I present the first (of many, I hope) freely-offered no-strings-attached idea, yours for the taking. Do with it what you will. Or not. If you take this one and run with it, send me a postcard or something.

I’ve been pondering this one since I saw the following:
http://twitter.com/fixative/status/5496398396

I couldn’t agree more. Particularly since Amazon started blowing out great albums at $5 a pop, so much of my music is purely digital now. With my vinyl records and CDs, I can — sometimes — scratch the “who played that amazing part?” itch by going to the shelf. With MP3s, I’m left guessing too frequently.

Furthermore, as an admitted, unrepentant, unreconstructed music and recording snob, I frequently want to know not only who played on an album, but who produced it, who engineered it, where it was recorded, and as much of the minutiae, myth, and legend surrounding its creation as I can find. The 33 1/3 book series is absolutely fantastic for this, but the obvious problem here is one of scaling — there’s no way that 33 1/3 will ever cover a significant portion of my record collection (nor should they), so these exegeses are a rare treat (particularly the volume about Paul’s Boutique… but I digress).

It seems to me that, in iTunes or the equivalent, I ought to be able to source this kind of metadata for any track playing. I mean, I’m connected to the Internet all the time, right? And this information is out there.

So, the idea — it’s pretty simple, really — a comprehensive online database of metadata regarding the personnel, recording staff, and circumstances surrounding the creation of each album and single out there. Much of this data already exists in Wikipedia, particularly for better-known albums, but it’s not necessarily structured. A “placeholder” for each album could be gleaned using the Wikipedia API, and user-generated structured data could be added, much in the same way that the Gracenote CDDB was originally created. Over time, the crowd could refine and add to this structured pool of music metadata, which would be linked to tracks and albums similarly to how CDDB track listings are linked (based on track index / track time). The CDDB could even serve as the base of records from which to start.
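(To make the “placeholder” step concrete, here’s a minimal sketch in Python of how a seed record might be pulled from the public Wikipedia API. The record layout, field names, and the naive title lookup are all illustrative assumptions on my part, not a spec; a real version would need disambiguation, rate limiting, and so on.)

    import requests

    WIKIPEDIA_API = "https://en.wikipedia.org/w/api.php"

    def album_placeholder(album_title, artist):
        """Seed an album record with the intro of its Wikipedia article.

        The structured fields start empty; contributors would fill them in,
        CDDB-style, over time.
        """
        params = {
            "action": "query",
            "format": "json",
            "prop": "extracts",
            "exintro": 1,            # just the lead section
            "explaintext": 1,        # plain text, no HTML
            "redirects": 1,
            "titles": album_title,   # naive lookup; real use needs disambiguation
        }
        resp = requests.get(WIKIPEDIA_API, params=params, timeout=10)
        resp.raise_for_status()
        pages = resp.json()["query"]["pages"]
        extract = next(iter(pages.values())).get("extract", "")
        return {
            "album": album_title,
            "artist": artist,
            "summary": extract,   # unstructured seed text from Wikipedia
            "personnel": [],      # who played what, to be crowd-sourced
            "producers": [],
            "engineers": [],
            "studios": [],
            "tracks": {},         # track index / track time keys, CDDB-style
        }

    record = album_placeholder("Paul's Boutique", "Beastie Boys")
    print(record["summary"][:300])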

As this pool of data is added to, the experience of each album could become that much richer, perhaps allowing for addition of user-generated imagery.

I have no idea how anyone would make money from this. But I know it would be incredibly useful to me, and the legions of other music freaks out there.

Anyone?

Playing with your betters

I have a trio of activities to which I am, with varying degrees of skill and regularity, devoted: playing music, skiing, and computer programming. I have developed each of these skills over the years, with a few exceptions, without a formal course of study. Sure, I took a few guitar lessons, and have fond, if hazy, memories of ski school as a kid, but by and large, I’ve come by the skills I possess via self-directed study, trial and (sometimes painful) error, and, most importantly, learning from those around me.

Exponents of the Peter Principle tend to see in every skilled human an ultimate, inevitable apex of their ability to perform increasingly complex, challenging tasks. The conventional wisdom is that hierarchies tend to promote individuals one step beyond their potential (to their “level of incompetence”), and that the incompetence displayed by each employee in their Final Resting Place is a matter of predestination, determined by the ultimate limit of their potential. The perfectionist in me, the relentless self-improver, is not quite ready to accept this fatalist attitude, and I’d like to propose an alternative explanation to ol’ Peter.

Everything I’ve ever gotten better at, I’ve gotten better at under the tutelage, in the shadow, or with the grudging tolerance of someone (sometimes far) better at it than me. Every time I play music with someone like drum kingpin John Lamb, I learn a little more about listening, and rhythm, and when not to play. Every time I ski with a certain good friend, as I had the pleasure of doing last week, I get a little bit better.

It’s not easy to accept that someone is better than you at something, but once you swallow your pride, dig in, and try to play at their level, great things can happen. I think, perhaps, that the fact that it’s easier, and more comfortable, and more secure, to try to avoid collaborating with people who could dust you in a dogfight, leads many people to short-circuit the development of their skills. It’s not so much that they’ve reached their inevitable pinnacle of competence, but that they’ve put themselves there unwittingly — by being too shy, insecure, or falsely confident in the skills they already have to accept that anyone could teach them anything.

So, I try to remind myself of this every time I plug in my guitar, strap on my skis, or fire up the keyboard alongside someone who has something to teach me — and I try to remember to be grateful. After all, the perfection of character is never complete.