With the first wave of Google Glass out in the hands of Google I/O attendees and other early adopters, there’s been a lot of debate about the role of Glass in the future.
In the last few years, there’s been a big surge in wearable computing in the health and fitness field, and another in smartphone apps that leverage the phone’s accelerometer, GPS, and gyroscope to bring the physical world closer to the online world.
Even casual joggers don’t think twice today about using a heart rate monitor when they’re exercising. Fitness enthusiasts are tracking themselves 24 hours a day, sharing the data with their friends, and debating whether the Jawbone Up or the Fitbit Flex is the better monitoring tool. Using a smartphone app to get realtime turn-by-turn directions is table stakes. So why does Glass cause angst?
My colleague Christian Cantrell thinks the problem is the built-in camera, calling it “one of Glass’s biggest barriers to adoption,” and I think he’s got a point.
By putting a camera right up by your face where everyone can see it, you’re raising awareness of the pervasiveness of cameras and video in ways that other forms of technology don’t. It’s not that Glass is all that much more invasive; it just feels that way.
It seems to me that a lot of the debate about the camera misses the point. Google Glass in its current form is quite likely not the shape of computing to come. The important thing about Glass is what it represents as a milepost along the path to the mainstreaming of wearable computing.
I’m eager to try out Glass at some point. But even more, I’m looking forward to what wearable computing will look like a few years from now, because I’m pretty sure it will owe a lot to Glass, even if it looks completely different.