In light of Google Glass going on sale to the public yesterday, I’ve decided to share some of my experiences with Glass over the past few months. My research group obtained several Glasses through the Explorer program, and we’ve put them through the wringer. While I realize that the Glass is a first release and isn’t meant to be a mainstream product, moves like this public sale demonstrate that Google thinks the product is good enough for the general public. After several months of using, hacking on, and developing for the Glass, I’m not impressed … and here’s why.
Problems with Glass
Energy Efficiency & Thermal Management
The biggest shortcoming of Glass is its battery life, which offers barely an hour of use. Several of my research group colleagues recently published a technical report profiling Glass’s poor energy efficiency and thermal management. When performing intensive tasks, such as video chatting or recording, the Glass’s battery fully depletes within 45 minutes. It can also reach alarming temperatures, sometimes exceeding 125ºF, hot enough to cause mild damage to skin tissue with prolonged contact. Clearly this is outside the safe range for extended wear, but it honestly won’t matter much, because your Glass will die long before you can do anything worthwhile with it. Everyone concerned about the privacy implications of Glass applications, such as running computer vision algorithms to detect faces in front of you, can take a momentary breather: the Glass lacks both the energy and thermal capacity to execute such a task. Even Google is aware of this, which is presumably why they recently disabled video chatting through Hangouts on Glass. Given this and the inordinately high energy consumption of the display, it seems like Google designed the Glass to spend more time idle than in use, almost like a pretty fashion accessory more than a wearable computing device.
Here’s something even more interesting. The Glass is built on the OMAP4430 SoC, much like the Samsung Galaxy Nexus, but the Glass disables a good portion of the specialized hardware units on the OMAP4. For example, the GPS unit is disabled, forcing the Glass to use the GPS of a nearby smartphone tethered to it over Bluetooth. Even the main ARM CPU cores are frequency-throttled. Both of these limitations suggest that Google was well aware of the hardware shortcomings of Glass but decided to push it out anyway before it was ready. Throttling the processor and disabling heterogeneous cores to paper over thermal and energy issues are dirty hacks that shouldn’t be present in a production device. Though, I suppose one can argue that the Glass isn’t yet a production-ready device…
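Since the throttling is enforced through the standard Linux cpufreq sysfs interface, it’s easy to confirm from on-device code. Here’s a rough sketch; the sysfs paths are the generic kernel ones rather than anything Glass-specific, the actual readings depend on the XE software build, and on a machine without those nodes it just reports them as unavailable:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CpuFreqCheck {
    // Standard Linux cpufreq/hotplug sysfs nodes (kernel-provided,
    // not Glass-specific).
    private static final String BASE = "/sys/devices/system/cpu/";

    static String read(String relPath) {
        try {
            return new String(Files.readAllBytes(Paths.get(BASE + relPath))).trim();
        } catch (IOException e) {
            return "(unavailable)";  // node missing or not readable
        }
    }

    public static void main(String[] args) {
        // Maximum frequency the kernel will currently allow on core 0:
        System.out.println("max freq:   " + read("cpu0/cpufreq/scaling_max_freq"));
        // Frequencies the silicon itself supports:
        System.out.println("available:  " + read("cpu0/cpufreq/scaling_available_frequencies"));
        // Whether the second Cortex-A9 core is even online:
        System.out.println("cpu1 online: " + read("cpu1/online"));
    }
}
```

Comparing `scaling_max_freq` against the top entry of `scaling_available_frequencies` makes the cap obvious at a glance.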
Glass User Interface
Most everyone is familiar with the Glass UI, which is effectively a side-scrolling stream of simple info cards that provide updates for various categories of interest, such as your email, messages, photos, and weather. The problem is that the interface is much too flat. Imagine, if you will, the Windows Start menu, but with no folders and no hierarchy. To find the program you want to open, you have to painstakingly scroll through every possible program, one (or a few) at a time. That is what using the Glass UI feels like: you must scroll through a seemingly endless series of entries before finding that email someone sent you just yesterday. Again, I realize this is a prototype, but this must be resolved if Google wants to see widespread adoption of the Glass as a standalone device.
Another issue with the UI is latency and responsiveness, two sides of the same coin. The latency of the UI is sometimes far too high, especially when another task is running in the background. In my personal use of Glass, I’ve found that the UI often freezes completely for seconds at a time, particularly when transitioning between cards. Sometimes a stock card, such as the Settings card on the far left, simply crashes repeatedly until the Glass is rebooted. This makes the already difficult task of connecting to Wi-Fi even more difficult. There is also sometimes a delay of over 700 ms between tapping the Glass touchpad and seeing the menu options appear; on a modern mobile device, that latency should be an order of magnitude lower. Finally, voice commands simply fail to work about 5-10% of the time (anecdotally), which, granted, is still pretty damn good. But for a device that relies almost exclusively on voice input as a means of user interaction, that error rate is too high.
User input is particularly limited on the Glass. Now, I’m a reasonable guy, so I don’t expect a beautiful, fully functional soft keyboard to exist just yet. However, I think it’s a silly oversight not to include even a basic side-scrolling keyboard for short text entries like passwords and contacts. Which brings me to my next point: entering information. For a wide variety of tasks, such as adding contacts, connecting to Wi-Fi, and installing applications, you must interact with the Glass through your smartphone. I still don’t understand why I need another device present just to connect the Glass to Wi-Fi.
Overall, the design of the Glass is minimalistic and beautiful, fit for a modern world. But why, oh why, did they put the USB port in such an awkward position? Why not on the back of the device, near the battery? Also, the power button sits on the inside wall of the device (touching your head), which I find distracting and inconvenient when I actually want to power down the device (which is fairly often, because it dies quickly otherwise).
I see no reason why an additional battery couldn’t be added to the other side of the Glass (other than added cost), which would help balance out the weight of the Glass while increasing its usable life. It doesn’t make much sense to restrict the hardware to just one side of the Glass, unless their target demographic is the monocle-wearing gentleman from the popular 1930s board game.
Personally, I find the screen placement irritating. I understand the rationale for placing the screen at the periphery of your field of vision, but it would be very nice to have a screen more adjustable than the current one. For example, I would greatly appreciate the ability to move the screen up and down, centering it in my field of view when I want to focus on it and moving it out of the way when I’m finished. Augmented reality applications (overlaying virtual content on top of the physical world in front of you) are rendered impossible because the display is confined to a miniature window in the upper right-hand part of your field of view. This has stifled many of my research ideas as well. 😦
Developing for Glass
The Android SDK offers an excellent API that, fortunately, can be used on Glass. However, simply running an app designed for an Android smartphone on the Glass won’t work too well, because there is no touchscreen, and the Glass UI is simpler and less interactive. For example, it would be fairly challenging to play a game of Angry Birds using only the Glass’s simple four-way directional touchpad. So, to provide a fluid, usable experience in your Glass app, you need to employ UI elements not found in stock Android, like the Glass’s LiveCard.
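To give a flavor of what that looks like, here is a minimal LiveCard sketch in the spirit of the GDK samples. The layout resource, card tag, and `MenuActivity` class are hypothetical names of my own, and the GDK was still a moving target at the time of writing:

```java
// Sketch of a GDK LiveCard published from an Android Service.
// R.layout.live_card and MenuActivity are hypothetical names.
import android.app.PendingIntent;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.widget.RemoteViews;
import com.google.android.glass.timeline.LiveCard;
import com.google.android.glass.timeline.LiveCard.PublishMode;

public class StatusCardService extends Service {
    private LiveCard mLiveCard;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (mLiveCard == null) {
            mLiveCard = new LiveCard(this, "status_card");
            // Low-frequency rendering through RemoteViews, widget-style.
            mLiveCard.setViews(new RemoteViews(getPackageName(), R.layout.live_card));
            // Every LiveCard needs a menu Activity, or it can't be dismissed.
            Intent menu = new Intent(this, MenuActivity.class);
            mLiveCard.setAction(PendingIntent.getActivity(this, 0, menu, 0));
            mLiveCard.publish(PublishMode.REVEAL);  // jump focus to the new card
        }
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        if (mLiveCard != null && mLiveCard.isPublished()) {
            mLiveCard.unpublish();  // actually removes the card from the timeline
            mLiveCard = null;
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;  // started, not bound
    }
}
```

Note that the card’s life is tied to the Service, not to any Activity, which is exactly the part that takes getting used to.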
From a developer’s standpoint, the Glass Development Kit (GDK), currently a developer preview, is fairly straightforward, though slightly buggy. Obviously, I wouldn’t expect a fully baked API from a preview release, so no issue there. However, I did find the card-based interface somewhat unintuitive to develop for. Sure, I was able to get a LiveCard app up and running within an hour, but managing that LiveCard with an underlying Android Service was trickier than I expected, and I have extensive experience with Services and Activities from the Android API.
The problem with the card-based UI is that it’s unclear to the user (me) when a card is destroyed or removed. By swiping down on a card, you can hide it from view temporarily, but it may or may not still be visible in your stream of cards on the home feed. Even when I selected a card and killed it with a custom menu option, I found that it still showed up in my home feed. It took me almost a full day of toying with the LiveCard and its underlying Service to get the card’s behavior to match my expectations. I only wish the card lifecycle were as explicit and consistent as that of an Android Activity. Hopefully this will be resolved in future iterations of the GDK.
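The gotcha, as far as I can tell, is that the card only leaves the timeline when its owning Service calls unpublish(); a menu item that merely finish()es its Activity leaves the card behind. A sketch of the menu side, assuming a hypothetical StatusCardService that publishes the card and unpublishes it in its own onDestroy(), with hypothetical menu resource names:

```java
// Sketch of the menu Activity attached to a LiveCard. The "stop"
// item must stop the owning Service (whose onDestroy() unpublishes
// the card); calling finish() alone leaves the card in the feed.
// R.menu.live_card_menu, R.id.action_stop, and StatusCardService
// are hypothetical names.
import android.app.Activity;
import android.content.Intent;
import android.view.Menu;
import android.view.MenuItem;

public class MenuActivity extends Activity {
    @Override
    public void onAttachedToWindow() {
        super.onAttachedToWindow();
        openOptionsMenu();  // show the menu as soon as the Activity appears
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.live_card_menu, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        if (item.getItemId() == R.id.action_stop) {
            // Stopping the Service is what actually removes the card.
            stopService(new Intent(this, StatusCardService.class));
            return true;
        }
        return super.onOptionsItemSelected(item);
    }

    @Override
    public void onOptionsMenuClosed(Menu menu) {
        finish();  // the menu is this Activity's only UI
    }
}
```

Once the stop path goes through the Service, the card’s removal finally becomes deterministic, which is the behavior I spent that day chasing.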
Despite my criticisms and complaints above, I’m actually a big fan of the whole Glass concept, even if the execution is poor so far. I’m excited about the next version of the Glass; I think most of the software issues will be corrected and the hardware will substantially improve. Google may even implement a dedicated version of Android and/or the Linux kernel for the Glass, though I doubt that will happen for a while, if at all. At the very least, we’ll see a departure from the OMAP4 SoC to a newer chip, since TI’s OMAP division is now defunct.
The feature I hope for most is a fully adjustable screen (in all three dimensions) that can be repositioned easily on a whim. I’ve had a hard time focusing on the screen during daily use; it can be difficult to see in bright light or against multi-colored or patterned backgrounds. Screen adjustability would better enable augmented reality applications and reduce eye fatigue. While it’s somewhat unlikely, I hope the fast-approaching Google I/O 2014 will bring a new iteration of the Glass with vastly improved specs. Otherwise, it may be hard to justify spending $1500 on a device that fizzles and dies after one mediocre release. Google, show us what you’ve got!