Meta’s Ray-Ban Display Glasses: A Familiar Experiment With New Tricks

It has become something of a trend to make almost every everyday object “smart.” Phones, TVs, refrigerators, thermostats, headphones, and now glasses all find themselves upgraded with processors, cameras, displays, and sensors. The promise is that these enhancements will improve usefulness while maintaining the product’s original function.

Smartphones and smart TVs have been widely adopted, but glasses have struggled to gain traction. Unlike phones, glasses are not something everyone wears, which makes the potential customer base less clear. Should these devices appeal mainly to people who already rely on prescription lenses, to habitual sunglass wearers, or are they simply another attempt to bring smartphone functionality to the user’s face?

Privacy concerns add to the hesitation. Cameras and microphones embedded into eyewear can leave bystanders uneasy, especially when it’s not obvious whether recording is taking place. Google Glass, launched over a decade ago, quickly earned its users the derisive nickname “glassholes” for ignoring social norms around discreet recording.

What’s Inside Meta’s Glasses

Meta’s partnership with Ray-Ban has resulted in the Ray-Ban Display smart glasses, which package a surprising amount of hardware into their chunky frames. The right lens incorporates a 600 × 600 color display, angled so that the wearer sees a floating image occupying about 20 degrees of the visual field. Alongside this are six microphones, stereo speakers, and a 12-megapixel camera.

Control comes not only from touch inputs and a companion app but also from a new neural wristband. This band uses electromyography (EMG) to read the electrical activity of wrist and hand muscles, translating subtle movements into commands. Typing via gestures is even promised, though the feature remains experimental.
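Meta has not published how its EMG pipeline works, but the general idea described above — sample muscle activity, smooth out noise, and fire a command when the signal crosses a threshold — can be illustrated with a toy sketch. Everything here (the window size, the threshold, the "pinch" label) is an assumption for illustration, not Meta's actual implementation:

```python
# Illustrative sketch only: Meta has not published its EMG pipeline.
# This toy detector shows the general principle behind wrist-based
# gesture input: smooth the raw signal, then fire a command on each
# rising edge of muscle activity.

def moving_average(samples, window=5):
    """Smooth raw EMG amplitudes to suppress noise spikes."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_gestures(samples, threshold=0.6):
    """Emit a hypothetical 'pinch' event each time the smoothed
    signal envelope rises above the threshold."""
    envelope = moving_average([abs(s) for s in samples])
    events, above = [], False
    for i, value in enumerate(envelope):
        if value >= threshold and not above:
            events.append(("pinch", i))  # contraction -> command
            above = True
        elif value < threshold:
            above = False
    return events

# Quiet baseline, one burst of muscle activity, quiet again.
signal = [0.1] * 10 + [0.9] * 10 + [0.1] * 10
print(detect_gestures(signal))
```

A real system would replace the fixed threshold with a trained classifier over multi-channel electrode data, which is what makes finer promises like gesture typing plausible in principle.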

Then vs. Now

Compared with 2013’s Google Glass Explorer Edition, the Ray-Ban Display glasses offer faster processors, double the onboard storage, and slightly higher display resolution, though RAM capacity remains 2 GB. The major innovation is the EMG wristband, though it comes at the cost of yet another device to keep charged. For anyone already sporting a smartwatch, that could mean filling up both wrists.

Despite the added hardware muscle, one persistent challenge remains: compelling use cases. Past attempts—whether Google Glass, Apple’s Vision Pro, or similar headsets—struggled to persuade consumers that glancing at a private display on their face was worth the cost and trade-offs. Meta hopes to change that by leaning on AI-powered features like live language translation captions and smart navigation overlays. Yet all of these functions are also available on ordinary smartphones, which offer larger, clearer screens.

The Social Problem

History suggests that privacy concerns won’t vanish. Smart glasses are far less obvious than phones when recording, and LED indicators or subtle signals are easy to miss. In one viral example, a salon customer felt uncomfortable when her aesthetician wore smart glasses, unsure whether her appointment was being filmed without her consent.

This discomfort mirrors the backlash to Google Glass and raises familiar questions: what happens if the smart glasses you rely on are also your only prescription lenses? Do you remove them in settings where cameras aren’t permitted? The stigma surrounding wearable cameras appears unchanged, even as the technology improves.

Other Takes on “Smart” Eyewear

Not every product in the “smart glasses” category comes with cameras. Some sunglasses automatically adjust their tint, while others function as secondary screens for laptops or gaming devices. Augmented- and mixed-reality headsets also fall under the umbrella, though their designs vary widely. These variations avoid the particular privacy baggage that camera-equipped eyewear carries.

The Bigger Question

Meta’s Ray-Ban collaboration clearly represents progress over the early days of Google Glass, but it’s still unclear whether the public will warm to the concept. The key question is whether adding a display and computer genuinely improves glasses as a product. With smartphones already serving as versatile, powerful companions, the case for putting a smaller version of those features directly in front of the eye remains difficult to make.

Whether this attempt succeeds where earlier ones failed—or simply revives the “glasshole” debate—will depend on how people balance novelty, practicality, and trust.