Meta’s CTO Andrew “Boz” Bosworth recently suggested in an interview that the company’s wrist-worn Neural Band, currently designed to control its Ray-Ban Display glasses via subtle muscle gestures, could eventually evolve into a standalone wearable such as a watch. Bosworth made the comment in response to questions about future iterations of the neural interface, implying that the same electromyography (EMG) technology could migrate from glasses control to a broader range of wearable devices. His remarks build on Meta’s current strategy of integrating AI, AR, and novel input methods into wearable form factors.

In parallel, Meta has just unveiled its new Ray-Ban Display glasses, which pair a monocular in-lens display with the Neural Band to let users view messages, captions, navigation, and more. The glasses and band will retail for $799, launching September 30 in select U.S. stores. Early hands-on reviews indicate that while the Neural Band’s gesture control works impressively well, the glasses themselves suffer from lag, visual discomfort from the one-eye display, and a bulky frame. Critics see this first generation as a stepping stone rather than a final product.
Key Takeaways
– In Meta’s vision, the Neural Band’s EMG gesture system is robust enough to serve as the foundation for future wearables such as watches, not just as a controller for glasses.
– Meta’s new Ray-Ban Display glasses demonstrate the first full implementation of this gesture control, but the device still shows early-generation limitations in display, responsiveness, and ergonomics.
– Evolving the Neural Band into a standalone wearable would require surmounting challenges in miniaturization, battery life, usability, and compelling differentiating functions beyond what existing smartwatches offer.
In-Depth
Meta is clearly angling to make the Neural Band more than a niche accessory for smartglasses. As currently launched, the Neural Band (an EMG wristband) reads the electrical signals from tiny muscle movements in the wearer’s wrist and translates them into gestures such as pinch, scroll, or twist. These gestures control a heads-up display embedded in the right lens of the Ray-Ban Display glasses, letting users navigate AI responses, messages, maps, captions, and more. Meta’s own description highlights how the display is “there when you want it, gone when you don’t,” and positions the Neural Band as a way to “replace the touchscreens, buttons, and dials” for this wearable context.
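Meta hasn’t published details of the Neural Band’s signal processing, but the general shape of an EMG gesture pipeline is well understood: sample several channels of muscle activity, slice the stream into short windows, extract lightweight features, and classify each window as a gesture. The Python sketch below illustrates that idea with synthetic data; the gesture labels, electrode count, feature choices, and nearest-prototype classifier are illustrative assumptions, not Meta’s implementation.

```python
import numpy as np

# Illustrative EMG gesture pipeline (assumed, not Meta's implementation):
# window the multi-channel signal, extract simple features, classify.

GESTURES = ["rest", "pinch", "scroll", "twist"]   # hypothetical label set
N_CHANNELS = 8                                     # hypothetical electrode count

def extract_features(window):
    """window: (n_samples, n_channels) raw EMG. Returns a flat feature vector
    of mean absolute value and zero-crossing count per channel, two classic
    lightweight EMG features."""
    mav = np.mean(np.abs(window), axis=0)
    zero_crossings = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, zero_crossings])

def classify(features, prototypes):
    """Nearest-prototype classifier: return the gesture whose calibrated
    feature vector is closest to the current window's features."""
    return min(prototypes, key=lambda g: np.linalg.norm(features - prototypes[g]))

# Toy usage with synthetic data standing in for real calibration and sensing.
rng = np.random.default_rng(0)
prototypes = {g: rng.normal(size=2 * N_CHANNELS) for g in GESTURES}

# One ~100 ms window of 8-channel EMG (200 samples at a nominal 2 kHz).
window = rng.normal(scale=0.5, size=(200, N_CHANNELS))
print(classify(extract_features(window), prototypes))
```

A production system would add per-user calibration, debouncing across windows, and a far more capable model, but the basic loop of window, features, label is the part Meta is betting can scale beyond glasses control.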
Bosworth’s more recent suggestion that this interface might someday evolve into a watch is both ambitious and a logical extension of Meta’s roadmap: if the EMG interface is reliable enough to control a display, why not build a future wearable where the display and the gesture interface are fully integrated? That said, making that leap is nontrivial. A watch form factor imposes tight constraints on power, size, heat, and durability, along with higher user expectations. The Neural Band currently offers about 18 hours of battery life and carries an IPX7 water-resistance rating, which is solid but in line with many existing wearables rather than transformative.
Meanwhile, first reviews already point out where the current product falls short. UploadVR’s hands-on review praises the band’s gesture accuracy (it “picked up every gesture, every time”) but criticizes the glasses for a sluggish, laggy UI and a monocular display that causes eye strain and discomfort over longer use. The glasses themselves weigh more than typical frames and have bulkier rims to house the hardware. In short: the vision is compelling, but the hardware is pushing its limits today.
If Meta is to transition the Neural Band toward a smartwatch role, it must solve or mitigate several challenges. First, miniaturizing the required electronics while maintaining battery life will be critical: a watch often needs to run for multiple days, or at least match the endurance of existing smartwatches. Second, what unique value would justify the transition? Smartwatches already offer health monitoring, notifications, apps, and sensors, so Meta’s version would need to offer something beyond them, perhaps deeper AI integration, more seamless gesture control, or tighter integration with AR ecosystems. Third, making the user experience feel natural is harder than it appears: gesture recognition must be both accurate and low-friction, and it must coexist with more traditional inputs like touch and voice, or even future modalities such as eye tracking or neural sensing.
Bosworth and Meta seem to have their eyes on that transition, positioning the Neural Band as a flexible interface rather than a one-off accessory. The Ray-Ban Display launch gives them a real product to iterate from, and subsequent generations will likely pursue lighter, faster, more comfortable, and more capable versions of both the glasses and the band. If Meta can deliver a compelling cross-device ecosystem in which the wrist both controls AR glasses and acts as a standalone hub, it could open a new frontier in wearable computing, one that learns from earlier missteps in AR glasses adoption and sets a new course for human-machine interaction.

