Primer - Smart Glasses
To our chagrin, expectations are that the upcoming Samsung smart glasses will not have a display, making them essentially a hands-free gateway to audio, AI, and a camera. We expect Gemini, or a subset of it, will be the AI, and as noted above, the OS will likely be Android XR, running on the Qualcomm Snapdragon AR1 chipset. More important will be the applications, as the device will compete directly with Meta’s (META) Ray-Ban glasses, which sell for between $250 and $300, so point-by-point feature and price comparisons will be a big focus. One of the more sophisticated features of the Ray-Bans is ‘look & ask’, where whatever the camera sees, the AI sees. That function allows the user to look at an object and ask a question about it without having to describe the object to the AI first. We expect Samsung to have a similar function if it is to really compete with Meta, although we believe the longer-term success of smart glasses will be greatly enhanced by a display.
Smart glasses (formerly ‘augmented reality’ glasses) are becoming a popular way of accessing features that would normally be found in smartphones or their applications. However, the feature sets for smart glasses are varied and increasingly complex as these devices compete with each other. To better understand and simplify the features available to smart glasses users, we break the feature sets down into two broad categories: glasses that have a display and those that do not.
Smart glasses without a display are relatively easy to understand, with early models providing only audio, either through an open-ear speaker system or through bone conduction. These devices typically connected through Wi-Fi to a smartphone that the user kept in a pocket or travel bag, but their functionality was relatively limited and competed with wireless headphones and earbuds, a well-established niche. To improve the value to consumers, a camera was added, which allowed the user to capture images or video that could be played back on the user’s phone, but such devices were still considered gimmicky rather than mainstream.
While AI did not appear in a true sense in mass-market smart glasses until Meta’s (META) Ray-Bans were released, a number of earlier glasses relied on virtual assistants able to perform basic functions, including web searches. However, the current crop of smart glasses making its way into consumer hands is working toward far more sophisticated AI functions, and that is the big selling point for smart glasses without displays. The more functions a pair of smart glasses offers, the more value it creates to justify its price, but to us, the real kick for smart glasses begins with the display, which adds an additional ‘dimension’ to these devices and helps justify a higher price.
Unfortunately, the average Joe or Jane does not want to spend hours comparing feature sets for smart glasses, and as smart glasses evolve, brands will pack in more features and complexity. To illustrate, we examined a number of smart glasses and categorized them based on just a few (very simplified) hardware features. There are a number of subsets for each of these features and many more feature categories and types, but the flowchart shows how complex choosing these devices can be, and we have excluded anything relating to how the devices look, which is another major selling point for most potential users.
We started with two basic classes of smart glasses: those with displays and those without. While these seem like simple categories, there is nuance even at this level, with some models having electrochromic lenses that can reduce the amount of light passing through. Glasses with displays follow an even more complex path, beginning with the type of display (Micro-OLED is currently the leader). Some displays are mini-projectors that project an image onto the see-through lens or into the user’s eyes, using optics such as waveguides or a birdbath combiner to mix the projected image with what the eye is seeing, and some have only one display (monocular), although the majority have two. From that point, the feature-set trail for glasses with displays follows the same path as for those without: what type of audio is used (speakers or bone conduction), how the glasses connect to the smartphone, and whether the device has a camera and AI. Again, this is a simplified feature set, with the full decision tree 2x to 3x more complex.
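The simplified decision tree described above can be sketched in a few lines of code. The field names and category labels below are our own illustrative choices, not a standard industry taxonomy, and the walk covers only the handful of hardware features we discuss:

```python
def classify(glasses: dict) -> list:
    """Walk a smart-glasses spec through the simplified feature decision tree.

    The keys ("display", "display_type", "displays", "audio", "camera", "ai")
    are hypothetical labels for this sketch, not a real product schema.
    """
    path = []
    # First branch: display vs. no display
    if glasses.get("display"):
        path.append(f"display: {glasses.get('display_type', 'unknown')}")
        # Monocular vs. binocular (most models have two displays)
        path.append("binocular" if glasses.get("displays", 2) == 2 else "monocular")
    else:
        path.append("no display")
    # From here both branches follow the same trail: audio, camera, AI
    path.append(f"audio: {glasses.get('audio', 'speakers')}")  # speakers or bone conduction
    path.append("camera" if glasses.get("camera") else "no camera")
    path.append("AI assistant" if glasses.get("ai") else "no AI")
    return path

# Example: a hypothetical display-less, camera-equipped, AI-enabled model
print(classify({"display": False, "audio": "speakers", "camera": True, "ai": True}))
# → ['no display', 'audio: speakers', 'camera', 'AI assistant']
```

Even this toy version makes the point: every added feature doubles or triples the number of paths through the tree, which is why a full comparison quickly becomes unwieldy for the average buyer.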
The question that must be answered, however, if smart glasses are going to survive as a stand-alone CE product category, is whether they provide value against cost: essentially, can they augment the functions of a smartphone enough to justify a $200 to $300 price tag without merely duplicating smartphone functions? Glasses with speakers but no camera, even with AI, are like using earbuds with your phone. Add a camera and you can take pictures and video, but more importantly you give the AI ‘eyes’. Add a display and you no longer have to listen to your AI chatter away; you can read what it is saying while you are doing other things, and if the applications become more viable over time, eventually you won’t need an expensive smartphone and could replace it with a small pocket computer and your smart glasses. Maybe we are getting a bit ahead of ourselves, but smart glasses (AR) have the potential to be long-term players if CE companies don’t see them as a threat to their lucrative smartphone profits. The two should be able to live side-by-side.