AI Glasses
The category is ruled by Meta’s (META) Ray-Ban glasses, released in October 2023. They have sold over 2 million units since and are the standard against which competitors are measured or measure themselves. Yet when CE brands see unit volumes in the millions, it is hard to resist sticking a bunch of engineers in a facility somewhere and seeing what they come up with in a few months. While Meta basks in the glory of being the best by far in a ‘new’ category, competition is beginning to come from other substantial players, with China’s Xiaomi (1810.HK) the latest to release its own branded AI glasses. We note that while AI glasses are a standalone category today, we expect the category to merge with AR glasses in the near future, as consumers will come to see AI glasses as the ‘low end’ of the visual headset spectrum, with AR the mid-tier and VR the upper tier.
Given the success of the Meta Ray-Bans, Xiaomi did not stray far from either the Ray-Ban look or functionality, and we already sense that while AI glasses are reasonably inexpensive, it is going to be a knock-down, drag-out battle on price once the basics are established. Just as in the smartphone world, where a new feature or application can sustain a line for a year or two, AI glasses and AR glasses will eventually be one and the same, with AI glasses at the lower price point. With AI glasses available under $300 today and eventually under $200, it is easy to see how volumes could become attractive to CE brands that regularly deal in millions of units. Current AI glasses, including Xiaomi’s, typically offer the following features:
- Translation – They can translate conversations in real time, with the number of languages available dependent on the model or brand. The Xiaomi glasses support 10 languages, including Chinese.
- Text Recognition – AI has made text recognition a much easier task than running documents through a physical OCR scanner or image-based digital files through software to extract editable text. AI systems can do this directly using the built-in camera.
- Object Recognition – Object recognition requires either a massive database paired with a matching system, or a model trained on large amounts of image and video data. Again using the camera, the AI can examine an object the user is looking at and classify or describe it. Taking it further, in theory the user can then ask the AI about the object, such as where it might be sold and in what colors.
- Voice Commands and Assistants – Multiple microphones allow voice commands to initiate processes or systems, such as video recording, letting the user issue the command without needing a free hand.
- QR Codes – The glasses can scan mobile QR codes for payments, avoiding having to take out a phone.
- Transcription – The glasses are able to record meetings, conferences, and conversations and create transcripts and translations.
- Video and Image Capture – Using the embedded camera, the user can capture images and video (30 fps), take video calls, or livestream.
- Audio – The glasses have a five-microphone array for voice calls and built-in open-ear speakers in the frames, so no earbuds are necessary.
- Integration – This is where things get a bit more difficult, as applications need to interact with the OS and, potentially, with applications on a smartphone, so features like livestreaming might be limited to those streaming applications that have the APIs or other interfaces needed to work with the glasses’ hardware. This is where proprietary systems can frustrate AI glasses users, leaving some features unusable. However, as the glasses become more popular, the interface issue should fade as both applications and hardware modules become more standardized.
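The object-recognition bullet above mentions the “massive database and matching system” approach. A minimal sketch of that idea, using random stand-in embeddings and hypothetical object labels (a real system would generate embeddings with a vision model on the glasses or a paired phone):

```python
# Sketch of database-plus-matching object recognition: embed a camera
# frame, then find the closest entry in a catalog of known objects.
# Embeddings here are random placeholders, not real model outputs.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical catalog: object label -> 128-dim embedding vector
catalog = {
    "coffee mug": rng.standard_normal(128),
    "sneaker": rng.standard_normal(128),
    "desk lamp": rng.standard_normal(128),
}

def identify(query: np.ndarray) -> str:
    """Return the catalog label whose embedding is most similar to the query."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(catalog, key=lambda label: cosine(catalog[label], query))

# Simulate a camera frame whose embedding lands near the "sneaker" entry
query = catalog["sneaker"] + 0.1 * rng.standard_normal(128)
print(identify(query))
```

The alternative approach in the bullet, a model trained end-to-end on image data, replaces the catalog lookup with a classifier that outputs labels directly, which scales better but requires far more training data.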